
Page 709
Figure 10.16
Percent CPI improvement of implementations with secondary cache relative to an implementation without secondary cache for 1.0 µm, across different line sizes.
Secondary Cache
The effect on CPI of adding a secondary cache is shown across the different line sizes, write policies, and architectures for the 1.0 µm and 0.3 µm cases. These cases are selected to show the benefits of a secondary cache as feature sizes shrink.
From the data in Figure 10.16, adding a secondary cache is beneficial only at shorter line sizes. A question arises from these data: why is the improvement negative for the baseline case at large line sizes?
This is due to an assumption made in the design of the secondary cache. Figure 10.16 compares the best CPI without a secondary cache to the CPI with a secondary cache, and the implementation with a secondary cache uses a simple write buffer management scheme. As a result, for the CBWA case, Tc.miss = (1 + w) Tline.128: we must first write back the dirty line and then wait for the entire read line to be returned before processing resumes.
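The CBWA miss-time expression above can be sketched directly. This is a minimal illustration, not the book's simulator: the cycle count for the line transfer and the dirty-line probability w below are assumed values chosen for the example.

```python
# Miss time under CBWA (copyback, write-allocate) with the simple write
# buffer scheme: a dirty replaced line must be written back first, and the
# full read line must return before processing resumes, so
#   Tc.miss = (1 + w) * Tline
# where w is the probability the replaced line is dirty and Tline is the
# line transfer time (for the 128-byte line in the text, Tline.128).

def cbwa_miss_time(t_line: float, w: float) -> float:
    """Return Tc.miss = (1 + w) * t_line for the simple buffer scheme."""
    return (1.0 + w) * t_line

# Illustrative numbers: a 20-cycle line transfer, half of replaced lines dirty.
print(cbwa_miss_time(t_line=20.0, w=0.5))  # 30.0 cycles
```

Note that with a more aggressive buffer scheme the writeback could be overlapped with the read, removing the (1 + w) factor; that is what the level-1-only configurations exploit.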
Therefore, for the baseline case (which has the biggest level-1 cache and the smallest reference traffic), as the line size is increased, the best CPI (buffer management scheme 3) without secondary cache is lower than with secondary cache.
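The sign convention in Figure 10.16 follows directly from the definition of percent improvement, (CPI_without − CPI_with) / CPI_without: when the best level-1-only scheme has the lower CPI, the plotted value goes negative. A minimal sketch, with illustrative CPI values not taken from the figure:

```python
# Percent CPI improvement from adding a secondary cache, as plotted in
# Figure 10.16. Positive when the two-level design lowers CPI; negative
# when the level-1-only design (with its best write-buffer scheme) wins.

def pct_cpi_improvement(cpi_without: float, cpi_with: float) -> float:
    """100 * (CPI_without - CPI_with) / CPI_without."""
    return 100.0 * (cpi_without - cpi_with) / cpi_without

# Illustrative values only:
print(pct_cpi_improvement(2.0, 1.5))  # 25.0  (secondary cache helps)
print(pct_cpi_improvement(2.0, 2.5))  # -25.0 (secondary cache hurts)
```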
We would expect the situation to be worse for the 0.3 µm case, since the miss rate of the level-1 cache is now even lower (a much larger on-chip cache), and the size ratio between the secondary and first-level caches is only four. This is shown in Figure 10.17 for the 0.3 µm case.
Based on these data, the addition of a secondary cache is not warranted, since the performance improvement does not exceed 20% for higher line
