
Page 403
waiting time at the server as seen by the I/O system (Figure 6.27). We ignore this effect and compute an available service occupancy:
$$\rho_{\mathrm{avail}} = 1 - \rho_{\mathrm{proc}}$$
If the processor preempts the I/O from memory, then it has only to wait until any already started memory request is completed. The probability that such a request is in progress is $\rho_{I/O}$, and we estimate the effective delay as $T_c/2$. Thus,
$$T_{w\text{-}I/O} = \rho_{I/O}\cdot\frac{T_c}{2}$$
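As a minimal numeric sketch of the processor-preempts case: the occupancy and memory cycle time below are illustrative assumptions, not values from the text.

```python
# Processor-preempts-I/O case: the processor waits only for a memory
# request already in progress, so T_w-I/O = rho_IO * Tc / 2.
# Both values below are illustrative assumptions.
rho_io = 0.20        # fraction of time memory is busy with I/O
t_c = 60e-9          # memory cycle time Tc (seconds)

t_w_io = rho_io * t_c / 2
print(f"T_w-I/O = {t_w_io * 1e9:.1f} ns")   # 6.0 ns
```

With these numbers, preemption costs the processor only a few nanoseconds per access, which is why this discipline is attractive when the processor is performance-critical.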
If the I/O is line buffered, it may have line-priority use of memory: that is, it has priority use of memory while a line is being transferred; thereafter the processor has priority. Here we estimate:
$$T_{w\text{-}I/O} = \rho_{I/O}\cdot\frac{T_{\mathrm{line}}}{2}$$
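A corresponding sketch for the line-priority case; the line size, bus width, and bus cycle time are illustrative assumptions (the line transfer time is derived as transfers-per-line times bus cycle time).

```python
# Line-priority case: the processor may have to wait out the remainder
# of an in-progress line transfer, estimated as rho_IO * T_line / 2.
# Line size, bus width, bus time, and occupancy are assumptions.
line_bytes = 64
bus_bytes = 8          # bytes moved per bus cycle
t_bus = 15e-9          # bus cycle time (seconds)
rho_io = 0.20          # I/O occupancy of memory

t_line = (line_bytes // bus_bytes) * t_bus   # 8 transfers -> 120 ns
t_w_io = rho_io * t_line / 2
print(f"T_line = {t_line * 1e9:.0f} ns, T_w-I/O = {t_w_io * 1e9:.1f} ns")
```

Because a line transfer holds memory far longer than a single cycle, the expected wait grows in proportion to the line transfer time.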
Finally, the I/O may have priority over the processor in its use of memory. Now the processor must wait until any queued I/O requests are completed. We assume the I/O consists of several independent, well-buffered sources, so that $T_w$ has no immediate effect on the I/O request rate (an open queue), unlike the case of the processor. Here, $T_w$ is the delay seen by the I/O due to memory contention. The typical I/O transaction occupies the memory (and memory bus, if appropriate) for
$$T_s + T_w$$
where $T_s$ is the memory service time ($T_c$ or $T_{\mathrm{line\ access}}$). Again, we estimate that the processor, when it sees I/O memory contention, observes:
$$T_{w\text{-}I/O} = \rho_{I/O}\cdot\frac{T_s + T_w}{2}$$
where $T_w$ can be estimated from the open-queue M/D/1 model (low occupancy, I/O well buffered). The $M_B$/D/1 model is inappropriate, since it models the source as making a request with probability 1 each memory cycle.
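The two-step estimate for the I/O-priority case can be sketched numerically. The M/D/1 mean wait used here is the standard Pollaczek-Khinchine result for deterministic service, $T_w = \rho T_s / (2(1-\rho))$; all numeric values are illustrative assumptions.

```python
# I/O-priority case: first estimate the I/O's own queueing delay T_w
# from the open-queue M/D/1 model (deterministic service):
#   T_w = rho * Ts / (2 * (1 - rho))
# then the processor-observed wait:
#   T_w-I/O = rho_IO * (Ts + T_w) / 2
# All numeric values are illustrative assumptions.
t_s = 120e-9           # memory service time Ts for a line transfer
rho_io = 0.20          # I/O occupancy of memory

t_w = rho_io * t_s / (2 * (1 - rho_io))   # I/O queueing delay (M/D/1)
t_w_io = rho_io * (t_s + t_w) / 2         # processor-observed wait
print(f"T_w = {t_w * 1e9:.1f} ns, T_w-I/O = {t_w_io * 1e9:.1f} ns")
```

Note that at low I/O occupancy the M/D/1 correction $T_w$ is small relative to $T_s$, so the processor-observed wait is dominated by the service time itself.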
In summary, the effect of I/O is to increase the access time for a line transaction by $T_{w\text{-}I/O}$, so that:
$$T_{\mathrm{miss}} = T_{\mathrm{line\ access}} + T_{w\text{-}I/O}$$
6.8.5 Performance Effects
Now suppose $T_{\mathrm{cycle}}$ is the processor cycle time (which is usually equal to $T_{\mathrm{bus}}$). If the processor generates misses at rate $\lambda_p$ (misses per second), the number of cache misses per processor cycle is $\lambda_p T_{\mathrm{cycle}}$, while the number of cycles lost per miss is $T_{\mathrm{miss}}/T_{\mathrm{cycle}}$. Therefore, the number of cycles lost per CPU cycle is:
$$\text{cycles lost per CPU cycle} = \left(\lambda_p T_{\mathrm{cycle}}\right)\cdot\frac{T_{\mathrm{miss}}}{T_{\mathrm{cycle}}} = \lambda_p T_{\mathrm{miss}}$$
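The cycles-lost product (misses per cycle times cycles lost per miss) can be evaluated directly; the miss rate and times below are illustrative assumptions, not values from the text.

```python
# Cycles lost per CPU cycle
#   = (misses per cycle) * (cycles lost per miss)
#   = (miss_rate * T_cycle) * (T_miss / T_cycle)
#   = miss_rate * T_miss.
# All numeric values are illustrative assumptions.
miss_rate = 1.0e6      # processor miss rate (misses per second)
t_cycle = 10e-9        # processor cycle time (seconds)
t_miss = 200e-9        # total miss time, including I/O wait (seconds)

misses_per_cycle = miss_rate * t_cycle          # 0.01
cycles_lost_per_miss = t_miss / t_cycle         # 20.0
lost_per_cycle = misses_per_cycle * cycles_lost_per_miss
print(f"cycles lost per CPU cycle = {lost_per_cycle:.3f}")
```

The cycle time cancels, so the degradation depends only on the miss rate and the total (I/O-inflated) miss time.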

 