1.8.2 Overlapped and Pipelined Processors

In our well-mapped machine of the preceding section, instruction execution proceeded sequentially, cycle by cycle. It is possible to speed up instruction execution, and hence improve performance, by overlapping the execution of instructions. Once an instruction has, for example, been decoded and begins to generate a data address, we can begin fetching the next instruction.

In the limit, we could begin fetching a new instruction each cycle, in a pipeline or production-line fashion.

Machines that concurrently execute a small number (say, 2 or 3) of instructions are called overlapped machines. Machines that execute multiple instructions by fetching and/or decoding a new instruction each cycle are called pipelined machines. Much of the remainder of this book is concerned with pipelined machines.
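
To make the benefit concrete, here is a minimal sketch in C (an illustration of the idea, not code for any particular machine) comparing first-order cycle counts. It assumes each instruction passes through S stages, one stage per cycle, and ignores hazards, stalls, and memory delays: a purely sequential machine needs N x S cycles for N instructions, while an ideal pipeline that starts a new instruction every cycle finishes in S + (N - 1) cycles.

    #include <stdio.h>

    /* First-order cycle counts for N instructions on a machine whose
       instructions each pass through S stages (e.g., fetch, decode,
       address generation, execute, store result), one stage per cycle.
       Hazards, stalls, and memory delays are ignored. */

    static long sequential_cycles(long n, long s)
    {
        return n * s;           /* each instruction runs to completion */
    }

    static long pipelined_cycles(long n, long s)
    {
        return s + (n - 1);     /* fill the pipe once, then one result per cycle */
    }

    int main(void)
    {
        long n = 1000;          /* instructions */
        long s = 5;             /* stages (cycles) per instruction */

        printf("sequential: %ld cycles\n", sequential_cycles(n, s));
        printf("pipelined:  %ld cycles\n", pipelined_cycles(n, s));
        printf("speedup:    %.2f\n",
               (double) sequential_cycles(n, s) / (double) pipelined_cycles(n, s));
        return 0;
    }

For large N the speedup approaches S, the number of stages; an overlapped machine that keeps only two or three instructions in flight captures part of this gain, while real pipelines fall short of the ideal because instructions depend on one another and compete for resources.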

The instruction set, including the data types that it describes, is an important basis for the implementation of any machine. Efficient implementations of the instruction set take into account the objects and actions described by the instructions.

An instruction consists of format, operation, addresses, and sequence control information. This information may be represented implicitly or encoded explicitly in the instruction, creating a myriad of possible instruction set combinations. For a variety of reasons, most modern machines are based upon general-purpose register sets that contain many of the data arguments used by the instructions. For purposes of assessing code density, cache effectiveness, and general timing properties of an instruction set, we partition most modern instruction sets into three classes:

1. The L/S, or load-store, class, which includes most of the RISC machines.