When: Scheduling can be performed at:

                    (+) Advantage                (-) Disadvantage
    Compile time    Less run-time overhead       May not be fault tolerant;
                                                 compiler lacks stall information
    Run time        More efficient execution     Higher overhead

How: Scheduling can be performed by:

    Arrangement                     Comment
    Designated single processor     Simplest, least effort
    Any single processor            -
    Multiple processors             Most complex, potentially most difficult

3. Dynamic execution control.

4. Dynamic data management.

Dynamic execution control is a provision for dynamic clustering or process creation at run time. Dynamic data management provides for the assignment of tasks to processors in such a way as to minimize the required memory overhead and the delay in accessing data.

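As an illustration of dynamic execution control, the following sketch (a hypothetical divide-and-conquer sum of squares, not taken from the text) creates new tasks while it is running, so the task graph is not known until execution:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical example: each task may spawn a new task for half of its
# data at run time, so the number of tasks depends on the input size.
def expand(pool, data, pending):
    if len(data) <= 3:                       # grain is small enough: compute
        return sum(x * x for x in data)
    mid = len(data) // 2
    # Dynamic execution control: a new task is created during execution.
    pending.append(pool.submit(expand, pool, data[:mid], pending))
    return expand(pool, data[mid:], pending)  # keep the rest in this task

with ThreadPoolExecutor(max_workers=4) as pool:
    pending = []
    total = expand(pool, list(range(8)), pending)
    while pending:                           # collect run-time-created tasks
        total += pending.pop().result()

print(total)  # 140, the sum of squares of 0..7
```

Note that worker tasks never block on results; only the main thread waits, which avoids deadlocking the bounded thread pool.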
The overhead during scheduling is primarily a function of two specific program characteristics:

1. Program dynamicity is a measure of how a program's concurrent behavior changes with differing input data (Figure 8.6). At one extreme, a program may be fully static: its behavior is the same across all data inputs, and the relevant scheduling information can be determined at compile time. At the other extreme is a fully dynamic program structure, in which behavior with different data inputs differs significantly in both the time required to execute and the memory required during execution. Most programs, of course, fall between these two extremes.

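The two extremes can be illustrated with a pair of toy Python functions (hypothetical examples, not from the text):

```python
def static_program(data):
    # Fully static: always performs exactly four iterations, independent
    # of the input values, so its schedule can be fixed at compile time.
    return [x * 2 for x in data[:4]]

def dynamic_program(data):
    # Fully dynamic: the amount of work (and memory) generated depends on
    # the input values themselves, so scheduling must wait for run time.
    out = []
    for x in data:
        out.extend(range(x))   # the value x determines how much work exists
    return out

print(len(dynamic_program([1, 2, 3])))    # 6 work items
print(len(dynamic_program([10, 0, 0])))   # 10 work items: same input size,
                                          # very different workload
```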
2. Granularity is the size of the basic program subtask to be scheduled. Since the run-time overhead of the scheduling process is amortized over the execution of the entire scheduled task, large-grain concurrency incurs less relative scheduling overhead than small-grain concurrency.

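A back-of-the-envelope sketch, using made-up cost numbers, shows how grain size amortizes a fixed per-task scheduling cost:

```python
# Assumed, illustrative costs (arbitrary units): a fixed price to schedule
# one task, and a fixed total amount of useful computation to be divided.
SCHED_OVERHEAD = 50
TOTAL_WORK = 100_000

def overhead_fraction(num_tasks):
    # Fraction of all execution time spent on scheduling when the same
    # total work is split into num_tasks equally sized grains.
    scheduling = num_tasks * SCHED_OVERHEAD
    return scheduling / (scheduling + TOTAL_WORK)

print(f"coarse grain (10 tasks):  {overhead_fraction(10):.1%}")    # 0.5%
print(f"fine grain (1000 tasks):  {overhead_fraction(1000):.1%}")  # 33.3%
```

With these assumed numbers, cutting the grain size by a factor of 100 raises the scheduling overhead from a negligible fraction to a third of the total execution time.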
8.3.1 Run-Time Scheduling Techniques

Many techniques may be used for run-time scheduling, depending on the amount of available program information. The following techniques are arranged roughly in order of increasing complexity, and each technique also employs the methods of its predecessors.

1. System load balancing uses only the number of available concurrent processors for task scheduling. The objective is to balance the system's load. The scheduling process consists merely of dispatching the