With a write-through policy, all we need to be concerned with is the access of the missed line. The replaced line is simply discarded (written over). The access of the missed line may begin either at the start of the line (on a line address boundary) or at the faulted word (the word address that created the read miss). This second approach is sometimes called fetch bypass or wraparound load. In the first approach, the miss time consists of an access time to the first word and then (presuming sufficient memory interleaving) the time required to transmit the remaining L - 1 words in the line across the memory bus, where L is the number of physical words per line. Processing resumes when the entire line has been transmitted. In the second approach, processing resumes as soon as the first word has been accessed and forwarded to the processor. The remaining words of the line are stored in the cache while the processor resumes processing based upon the initially returned word. This can result in contention for the cache, as both the processor and the memory may simultaneously wish to access it. (In this case the memory usually gets priority.) If another miss arises while the first miss is being processed, the first miss must complete before the second miss can begin its access.
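
As a rough illustration of the difference, the sketch below compares the processor stall seen under the two approaches. The function names, the additive timing model, and the cycle counts are assumptions chosen only for this example, not figures from the text.

```python
# Illustrative timing model for a write-through read miss (assumed parameters).

def blocking_fill_stall(t_access, t_bus, words_per_line):
    # Approach 1: processing resumes only after the whole line is transmitted:
    # one access time for the first word plus L - 1 further bus transfers.
    return t_access + (words_per_line - 1) * t_bus

def fetch_bypass_stall(t_access):
    # Approach 2 (fetch bypass / wraparound load): processing resumes as soon
    # as the faulted word has been accessed and forwarded to the processor;
    # the rest of the line is stored in the cache in the background.
    return t_access

if __name__ == "__main__":
    T_ACCESS, T_BUS, L = 20, 2, 8   # assumed: cycles, cycles per word, words per line
    print("blocking fill stall:", blocking_fill_stall(T_ACCESS, T_BUS, L))  # 34 cycles
    print("fetch bypass stall: ", fetch_bypass_stall(T_ACCESS))             # 20 cycles
```
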
For a copyback policy, the situation is slightly more complicated. We must first determine whether the line to be replaced is dirty (has been written to) or not. If the line is clean, then we have the same choices as we had with the write-through cache. However, if the line is dirty, we must make provision for the replaced line to be written back to memory. The simplest strategy here would be to first select the line to be replaced, then, if it is dirty, write the line back to memory, and finally bring the missed line into the cache, resuming processing when that line has been completely written to the cache. In order to speed up this process, a write buffer can be introduced that allows the replaced line to be written into the buffer during the time that the fetched line is being accessed from main memory. This frees up space in the cache to store the fetched line. Of course, one can include fetch bypass with the write buffer in order to minimize the miss time penalty.
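
The same kind of sketch extends to the copyback case with a dirty replaced line. Again, the simple additive model and the parameter values are illustrative assumptions only; a real design would overlap these transfers in more detail.

```python
# Illustrative timing model for a dirty-line replacement (assumed parameters).

def line_transfer(t_access, t_bus, words_per_line):
    # Time to move one full line across the memory bus, in either direction.
    return t_access + (words_per_line - 1) * t_bus

def serial_writeback_penalty(t_access, t_bus, L):
    # Simplest strategy: write the dirty line back to memory, then fetch the
    # missed line; the two transfers appear back to back in the miss penalty.
    return 2 * line_transfer(t_access, t_bus, L)

def write_buffer_penalty(t_access, t_bus, L):
    # With a write buffer, the dirty line moves into the buffer while the
    # missed line is fetched, so only the fetch shows up in the miss penalty
    # (the buffer drains to memory later, when the bus is free).
    return line_transfer(t_access, t_bus, L)

def write_buffer_with_bypass_penalty(t_access):
    # Adding fetch bypass as well: processing resumes when the faulted word arrives.
    return t_access

if __name__ == "__main__":
    T_ACCESS, T_BUS, L = 20, 2, 8   # assumed: cycles, cycles per word, words per line
    print(serial_writeback_penalty(T_ACCESS, T_BUS, L))      # 68 cycles
    print(write_buffer_penalty(T_ACCESS, T_BUS, L))          # 34 cycles
    print(write_buffer_with_bypass_penalty(T_ACCESS))        # 20 cycles
```
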
Potentially, the fastest approach is the nonblocking cache or prefetching cache. This approach is applicable to both write-through and copyback caches. In this approach, the cache has additional control hardware that allows the cache miss to be handled (or bypassed) while the processor continues to execute. Clearly, this strategy works only when the miss is accessing cache data that is not currently required by the processor; that is, the processor is not immediately dependent on the line being accessed. Thus, nonblocking caches should be used with compilers that provide adequate prefetching of lines in anticipation of processor use. The value or effectiveness of nonblocking caches depends on two factors:
1. The effectiveness of the prefetch and the adequacy of the buffers to hold the prefetched information. The longer before expected use the prefetch is made, the smaller the miss delay; but this also means that the buffers or registers are more occupied with anticipated data and hence are less available for possible current requirements.