
Figure 5.49
Instruction address translation.
5.17.1 Set Associative Caches
In simpler processors the instruction timing template allows for sequentially executing the events:
AG → T → DF (cache access),
where T represents the V → R translation using the TLB. The IF is usually simpler:
IA → IF,
where the T is implied as part of the IA. This is possible because the IA only infrequently generates an address outside the current page (even for branch target addresses, about 80% lie within the same 4 KB page). Thus, the IA result need only be checked, perhaps during IF, to ensure that it lies within the same (previously translated) page (Figure 5.49).
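To make the check concrete, the following C sketch reuses the previous translation unless the new instruction address falls outside the page that produced it, in which case a full translation is performed. The names, the 4 KB page size, and the stub TLB are illustrative assumptions, not from the text.

#include <stdint.h>

#define PAGE_BITS 12u                         /* 4 KB page assumed            */

static uint64_t last_vpn;    /* virtual page number of the last translated IA */
static uint64_t last_rpn;    /* real page number returned by the TLB          */

/* Stand-in for the real TLB: maps a virtual page number to a real one. */
static uint64_t tlb_translate(uint64_t vpn) { return vpn; }

/* Form the real instruction address; consult the TLB only when the IA
   crosses a page boundary.                                               */
uint64_t ia_to_real(uint64_t ia)
{
    uint64_t vpn = ia >> PAGE_BITS;
    if (vpn != last_vpn) {                    /* new page: full V -> R step (T) */
        last_rpn = tlb_translate(vpn);
        last_vpn = vpn;
    }
    /* same page: reuse the previous translation; IF can proceed at once */
    return (last_rpn << PAGE_BITS) | (ia & ((1u << PAGE_BITS) - 1u));
}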
The DF is a different case: the result of the data AG frequently lies outside the previously translated page. Thus, for DF the ordinary process is:
AG → T → DF.
As long as the cache size is less than or equal to the page size, we have (for both set-associative and direct-mapped caches)
AG → T/DF,
since the bits needed to select a cache line (the index bits) are lower-order bits that do not require translation. The tag comparison requires the TLB output, but the cache data can be accessed at the same time as the TLB access.
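A minimal C sketch of this overlap, again with illustrative sizes (4 KB page, 32-byte lines, 128 sets, so the cache does not exceed the page size) and a stand-in TLB: the set index comes entirely from untranslated page-offset bits, so the cache array can be read while the TLB translates the virtual page number, and the tag comparison then uses the real page number.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS  12u       /* 4 KB page                                  */
#define LINE_BITS   5u       /* 32-byte lines (illustrative)               */
#define INDEX_BITS  7u       /* 128 sets x 32 B = 4 KB, i.e. <= page size  */

struct line { bool valid; uint64_t tag; uint8_t data[1u << LINE_BITS]; };
static struct line cache[1u << INDEX_BITS];

/* Stand-in for the TLB: maps a virtual page number to a real page number. */
static uint64_t tlb_translate(uint64_t vpn) { return vpn; }

/* Probe the cache, indexing with untranslated bits while the TLB works. */
bool cache_lookup(uint64_t vaddr, uint8_t **data_out)
{
    /* The index and byte offset lie entirely within the page offset, so
       they are identical in the virtual and real addresses: the cache
       array can be read here ...                                          */
    uint64_t index = (vaddr >> LINE_BITS) & ((1u << INDEX_BITS) - 1u);
    struct line *l = &cache[index];

    /* ... while the TLB translates the virtual page number in parallel.   */
    uint64_t rpn = tlb_translate(vaddr >> PAGE_BITS);

    if (l->valid && l->tag == rpn) {   /* tag compare uses the real page number */
        *data_out = l->data;
        return true;                   /* hit  */
    }
    return false;                      /* miss */
}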
For direct-mapped caches larger than the page size, translated bits are needed for the index and hence to address a line in the cache.
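For example (illustrative numbers), with 4 KB pages a 16 KB direct-mapped cache with 32-byte lines needs 5 offset bits plus 9 index bits, 14 low-order bits in all, so its top two index bits fall in the virtual page number and are available only after translation. The small sketch below restates that arithmetic.

#include <stdio.h>

int main(void)
{
    unsigned page_bits  = 12;   /* 4 KB page                        */
    unsigned line_bits  = 5;    /* 32-byte line                     */
    unsigned index_bits = 9;    /* 512 lines x 32 B = 16 KB cache   */

    unsigned low_bits = line_bits + index_bits;  /* bits needed before translation */
    if (low_bits > page_bits)
        printf("%u index bit(s) must come from the translated page number\n",
               low_bits - page_bits);
    else
        printf("the index lies entirely within the page offset\n");
    return 0;
}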
In some processors, set-associative caches of high degree have been used to allow parallel access to the cache and the TLB. Figure 5.50 shows the virtual-to-real address translation process. The virtual page offset identifies a byte within a page and, as mentioned earlier, is unaffected by translation. In the

 