Timing analysis of block replacement algorithms on disk caches

Rajamoni, R.
Bhagavathula, Ravi
Pendse, Ravi

Rajamoni, R.; Bhagavathula, R.; Pendse, R., "Timing analysis of block replacement algorithms on disk caches," Proceedings of the 43rd IEEE Midwest Symposium on Circuits and Systems, vol. 1, pp. 408-411, 2000. doi: 10.1109/MWSCAS.2000.951670


Cache memories are used to reduce memory latency in computer systems. While CPU instruction references exhibit high temporal and spatial locality, disk references exhibit minimal temporal and spatial locality. Because most block replacement algorithms exploit the available locality to improve cache performance, they are more effective with CPU instruction caches than with disk caches. This paper presents the results of an investigation of cache write policies and the impact of the Least Recently Used (LRU) and the Segmented LRU (SLRU) block replacement algorithms on the performance of disk caches. To obtain optimal performance across workloads and cache sizes, an adaptive write caching policy is introduced, which selects the write policy dynamically at run time. Simulations reveal that when the cache size is less than 2 MB, caches employing the adaptive write caching policy are 17% faster than caches employing the write-back policy; for cache sizes of 16 MB and above, the performance improvement is 9%. The performance improvement of the adaptive write caching policy over the write-through policy is 2.65% for cache sizes of 2 MB and 27% for cache sizes of 16 MB and above. The adaptive write caching policy yields optimum performance for many of the disk workloads and disk cache sizes studied.
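To illustrate the idea of dynamic write-policy selection described in the abstract, the sketch below switches a disk cache between write-through and write-back based on the observed fraction of writes in a sliding window of requests. This is a hypothetical illustration only: the `window` size, the `writeback_threshold`, and the write-ratio criterion are assumptions, not the selection mechanism the paper actually uses.

```python
class AdaptiveWriteCache:
    """Hypothetical sketch: pick write-through vs. write-back at run time.

    The selection criterion (write ratio over a request window) and the
    threshold values are illustrative assumptions, not taken from the paper.
    """

    def __init__(self, window=100, writeback_threshold=0.5):
        self.window = window                            # requests per decision window (assumed)
        self.writeback_threshold = writeback_threshold  # assumed switch-over point
        self.policy = "write-through"                   # assumed initial policy
        self._writes = 0
        self._requests = 0

    def access(self, is_write):
        """Record one cache request; re-select the policy once per window."""
        self._requests += 1
        if is_write:
            self._writes += 1
        if self._requests == self.window:
            write_ratio = self._writes / self._requests
            # Write-heavy phases favor write-back (dirty blocks coalesce in
            # the cache); read-heavy phases favor write-through (disk copy
            # stays current at little extra cost).
            if write_ratio >= self.writeback_threshold:
                self.policy = "write-back"
            else:
                self.policy = "write-through"
            self._writes = self._requests = 0
        return self.policy
```

A workload with three writes per four requests would drive this sketch to write-back at the end of the window, while a mostly-read workload would keep it in write-through.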

The full text of this article is not available on SOAR. WSU users can access the article via IEEE Xplore database licensed by University Libraries: http://libcat.wichita.edu/vwebv/holdingsInfo?bibId=1045954