Figure 5.8
Address partitioned by cache usage.

into the same location in the cache directory, the upper line address bits (the tag bits) must be compared with the line address held in the directory to ensure a hit. If the comparison fails, the result is a cache miss, or simply a miss. The advantage of the direct-mapped cache is that the cache array itself can be referenced simultaneously with the access of the directory, with minimum control overhead.
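The directory probe described above can be sketched in C. The parameters here (256 lines of 16 bytes) are illustrative assumptions, not values from the text; the point is that a single directory entry, selected by the low line-address bits, holds the tag that must match the upper bits of the reference address.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative direct-mapped directory: 256 lines of 16 bytes each
 * (assumed parameters, not specified in the text). */
#define NUM_LINES  256
#define LINE_BYTES 16

struct dir_entry {
    bool     valid;
    uint32_t tag;              /* upper line-address bits */
};

static struct dir_entry directory[NUM_LINES];

/* Returns true on a hit: the index selects one directory entry, and
 * the stored tag must equal the tag bits of the reference address. */
bool dm_lookup(uint32_t addr)
{
    uint32_t line  = addr / LINE_BYTES;   /* strip byte-within-line bits */
    uint32_t index = line % NUM_LINES;    /* selects the directory entry */
    uint32_t tag   = line / NUM_LINES;    /* remaining upper bits        */
    return directory[index].valid && directory[index].tag == tag;
}
```

In hardware the tag comparison proceeds in parallel with the access of the cache data array; the software loop-free lookup above models only the hit/miss decision.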

The address given to the cache by the processor is actually subdivided into several fields, each of which plays a different role in accessing data.

Suppose we have a processor address partitioned as in Figure 5.8. The most significant bits, which are compared with the upper portion of a line address held in the directory, are called the tag.
|
|
|
|
|
|
|
|
The next field of the address is called the index; it consists of the bits used to address a line entry in the cache directory. The tag plus the index together form the line address in memory.
|
|
|
|
|
|
|
|
The next field is the offset; it gives the address of a physical word within a line.
|
|
|
|
|
|
|
|
Finally, the least significant address field specifies a byte within a word. These bits are usually of no interest to the cache, since the cache always references a full word. (An exception arises in the case of a write that modifies only part of a word.)
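The four fields can be extracted with shifts and masks. The field widths below are illustrative assumptions (4-byte words, 4 words per line, a 256-entry directory), not values given in the text:

```c
#include <stdint.h>

/* Assumed field widths (Figure 5.8 gives the layout, not the sizes):
 * 4-byte words  -> 2 byte-in-word bits
 * 4 words/line  -> 2 offset bits
 * 256 lines     -> 8 index bits
 * remaining bits form the tag                                      */
#define BYTE_BITS   2
#define OFFSET_BITS 2
#define INDEX_BITS  8

uint32_t byte_in_word(uint32_t a) { return a & ((1u << BYTE_BITS) - 1); }
uint32_t offset_of(uint32_t a)    { return (a >> BYTE_BITS) & ((1u << OFFSET_BITS) - 1); }
uint32_t index_of(uint32_t a)     { return (a >> (BYTE_BITS + OFFSET_BITS)) & ((1u << INDEX_BITS) - 1); }
uint32_t tag_of(uint32_t a)       { return a >> (BYTE_BITS + OFFSET_BITS + INDEX_BITS); }

/* Tag and index together are the line address in memory. */
uint32_t line_addr(uint32_t a)    { return a >> (BYTE_BITS + OFFSET_BITS); }
```

Note that `line_addr` simply discards the offset and byte fields, matching the observation that the tag plus the index represent the line address.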
|
|
|
|
|
|
|
|
The set-associative cache operates in a fashion somewhat similar to the direct-mapped cache. Bits from the line address are used to address the cache directory. Now, however, there are multiple choices: two, four, or more complete line addresses may be present in the directory. Each of these line addresses corresponds to a location in a sub-cache; the collection of these sub-caches forms the total cache array. In a set-associative cache, as in the direct-mapped cache, all of these sub-arrays can be accessed simultaneously, together with the cache directory. If any entry in the cache directory matches the reference address, there is a hit, and the corresponding sub-cache array is selected and outgated back to the processor. While the selection in the outgating process adds somewhat to the cache access time, the set-associative cache access time is generally better than that of the associative-mapped cache. Still, from an access-time standpoint alone, the direct-mapped cache provides the fastest processor access to cache data for any given cache size.
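A set-associative lookup can be sketched by extending the directory to hold several ways per set. The parameters (two ways, 128 sets, 16-byte lines) are assumptions for illustration; hardware probes all ways of a set in parallel, which the loop below only models sequentially:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative two-way set-associative directory (assumed parameters). */
#define NUM_SETS   128
#define NUM_WAYS   2
#define LINE_BYTES 16

struct dir_entry {
    bool     valid;
    uint32_t tag;
};

static struct dir_entry directory[NUM_SETS][NUM_WAYS];

/* On a hit, *way identifies which sub-cache array would be
 * outgated back to the processor. */
bool sa_lookup(uint32_t addr, int *way)
{
    uint32_t line = addr / LINE_BYTES;
    uint32_t set  = line % NUM_SETS;
    uint32_t tag  = line / NUM_SETS;
    for (int w = 0; w < NUM_WAYS; w++) {
        if (directory[set][w].valid && directory[set][w].tag == tag) {
            *way = w;       /* select this sub-cache array */
            return true;
        }
    }
    return false;
}
```

The extra way-select step is the source of the added outgating delay mentioned above: the matching way must be known before the right sub-array's data can be driven to the processor.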
|
|
|
|
|