|
|
|
|
|
|
|
8.4 Synchronization and Coherency |
|
|
|
|
|
|
|
|
In a uniprocessor, instruction ordering is ensured during program execution. Even when the hardware provides out-of-order or concurrent instruction execution, it preserves the appearance that results are produced in the order in which the instructions were laid down in the program. For programs partitioned into a parallel form for execution on a multiprocessor, the notion of ordering changes: order is defined by the sequence in which values are placed in memory. In a multiprocessor configuration, the objective is to achieve a high degree of task concurrency, but the tasks themselves must follow an explicit order as defined by the control structure and/or the scheduler. Communication between active tasks must also be performed in an orderly way. If task 1 is to provide argument A for use by task 2, and task 2 is to return result B to task 1, the processor executing task 2 must have a mechanism to ensure that it receives the value A only after it has been properly produced by task 1. The control of these values is performed by synchronization primitives, or semaphores. Synchronization is the means of ensuring that multiple processors have a coherent (consistent) view of critical values in memory. Memory or value coherence is the property of memory that ensures that the value returned by a read is the value stored by the latest write to the same address.
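The task 1/task 2 handoff described above can be sketched with counting semaphores. The following is an illustrative sketch only (the variable and task names are assumptions, not from the text); it uses threads on one machine to stand in for the two processors, with each semaphore signaling that a shared value has been properly produced before the other task reads it:

```python
import threading

# Semaphores start at 0: an acquire() blocks until the producer release()s,
# enforcing the required order "write value, then signal, then read".
a_ready = threading.Semaphore(0)   # signaled when A has been produced
b_ready = threading.Semaphore(0)   # signaled when B has been produced
shared = {}                        # stands in for shared memory

def task1():
    shared["A"] = 42               # produce argument A for task 2
    a_ready.release()              # synchronize: A is now valid
    b_ready.acquire()              # wait until task 2 has stored B
    return shared["B"]             # safe: B was produced before the signal

def task2():
    a_ready.acquire()              # wait for A to be properly produced
    shared["B"] = shared["A"] + 1  # compute result B from A
    b_ready.release()              # synchronize: B is now valid

t2 = threading.Thread(target=task2)
t2.start()
result = task1()
t2.join()
print(result)                      # 43: task 2 saw A == 42, returned B == 43
```

Without the semaphores, task 2 could read `shared["A"]` before task 1 had written it; the acquire/release pairs are exactly the ordering mechanism the text calls for.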
|
|
|
|
|
|
|
|
In complex systems of multiple processors, the memory may be physically remote from an individual processor. Thus, the order of issue (program order) of memory-related operations (LD, ST, synchronize) may differ from the order in which the operations are actually executed. The extent to which the executed order of memory operations corresponds to the issued order is called the consistency of the ordering.
|
|
|
|
|
|
|
|
In multiprocessors, there are different degrees to which ordering may be enforced. The strongest type of order is referred to by Lamport [177] as sequential consistency. He defines this as follows: |
|
|
|
 |
|
 |
|
|
A system is sequentially consistent if the result of any execution is the same as if the operations of all of the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program. |
|
|
|
|
|
|
|
|
As pointed out by Adve and Hill [3], "operations" can, for practical purposes, be taken to mean memory reads or writes. As they observe, the effect of sequential consistency is to ensure (1) that all memory accesses execute atomically in some total order, and (2) that all memory accesses of a single process appear to execute (be performed) in program (issued) order. Sequential consistency defines a type of strong ordering [79, 254] of events in a multiprocessor (Figure 8.9a). It is sufficient for correct program execution, but in practice it may create a great deal of overhead in multiprocessor program execution. For example, suppose processor P1 issues successive memory operations M11 and M12, and processor P2 issues successive memory operations M21 and M22. Sequential consistency requires that these operations be performed in program order with respect to each processor, and in the same total order as seen by every other processor: if one processor saw the memory operations performed as M11, M21, M12, M22, then any other processor in the system would see the same ordering.
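Sequential consistency can be made concrete by enumerating the legal total orders. The sketch below (an illustration, not from the text) runs the classic "store buffering" test: P1 executes `x = 1; r1 = y` while P2 executes `y = 1; r2 = x`, both starting from x = y = 0. Under sequential consistency, every execution is some interleaving that preserves each processor's program order, so the outcome r1 = r2 = 0 can never occur (one of the stores must precede both loads):

```python
# Each program is a list of (op, address, operand) tuples, in program order.
# P1: x = 1; r1 = y        P2: y = 1; r2 = x
P1 = [("st", "x", 1), ("ld", "y", "r1")]
P2 = [("st", "y", 1), ("ld", "x", "r2")]

def interleavings(a, b):
    """Yield every merge of a and b that keeps each list's internal order
    (i.e., every sequentially consistent total order of the two programs)."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for seq in interleavings(P1, P2):
    mem = {"x": 0, "y": 0}        # shared memory, initially zero
    regs = {}                     # per-processor registers
    for op, addr, val in seq:     # execute the total order atomically
        if op == "st":
            mem[addr] = val
        else:
            regs[val] = mem[addr]
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))  # [(0, 1), (1, 0), (1, 1)] -- (0, 0) is impossible
```

On real machines with store buffers, the (0, 0) outcome can in fact be observed; that gap between hardware behavior and this enumeration is precisely why weaker consistency models, and the overhead of enforcing sequential consistency, matter.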
|
|
|
|
|