Chapter 8
Shared Memory Multiprocessors

Beyond the instruction-level concurrency discussed in the last chapter, there are higher-level forms of concurrent program execution. Program subunits, whether basic blocks, procedures, tasks, or threads of control, are the basic units of concurrency for multiprocessors. A program consists of a collection of executable subprogram units. These units, which we refer to as tasks, are also sometimes called programming grains. They must be defined, scheduled, and coordinated by hardware and software before or during program execution.

There are many ways to arrange programs into executable grains to suit the needs of the processing ensemble that executes them. In this chapter, we consider only processing ensembles that consist of n (usually < 100) identical processors that share (at least) a common memory (or space of named objects). Recently there has been a great deal of work on machine configurations for massively parallel processors, where the number of computing elements is greater than 1,000, and on processors that do not share a common memory but rather use a message-passing protocol among processors. While these are increasingly important forms of computing, we do not attempt to review them in this text, but rather consider the design of smaller-scale multiprocessors.

Multiprocessors are usually designed for at least one of two reasons: fault tolerance and program speedup.

If we have n identical processors, we might reasonably expect that the failure of one processor ought not to affect the ability of the multiprocessor to continue program execution, although this execution may proceed at a lower speed. For some multiprocessor configurations, this is not true. If the operating system is designated to run on a particular processor and that processor fails, the system fails. On the other hand, some multiprocessor ensembles have been built with the sole purpose of high-integrity,