
Cache

Multilevel Cache Organisation

• Cache is a small, fast random access memory used by the CPU to reduce the average
time taken to access main memory.
• Multilevel caching is one of the techniques used to improve cache performance by
reducing the "miss penalty".
• Miss penalty refers to the extra time required to bring the data into the cache from
main memory whenever there is a "miss" in the cache.
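As a point of reference (this relation is standard textbook material and is not stated explicitly above), the effect of the miss penalty on performance is usually summarised by the average memory access time (AMAT); for a two-level cache, the L1 miss penalty is itself the access time of the L2 level:

```latex
% Standard AMAT relation; symbols t (hit time) and m (miss rate) are
% generic placeholders, not values taken from this document.
\[
\text{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty}
\]
\[
\text{AMAT}_{L1} = t_{L1} + m_{L1}\,\bigl(t_{L2} + m_{L2}\, t_{\text{mem}}\bigr)
\]
```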
For a clear understanding, let us consider an example where the CPU requires 10
memory references to access the desired information, and examine this scenario
in the following 3 cases of system design:
Case 1 : System Design without Cache Memory

Here the CPU directly communicates with the main memory and no caches are
involved.
In this case, the CPU needs to access the main memory 10 times to fetch the
desired information.
Case 2 : System Design with Cache Memory

Here the CPU first checks whether the desired data is present in the cache
memory or not, i.e. whether there is a "hit" or a "miss" in the cache. Suppose
there are 3 misses in the cache memory; then the main memory will be accessed only
3 times. We can see that the overall miss penalty is reduced because the main
memory is accessed fewer times than in the previous case.
Case 3 : System Design with Multilevel Cache Memory

Here the cache performance is optimized further by introducing multilevel caches.
Consider a 2-level cache design. Suppose there are 3 misses in the L1 cache, and
out of these 3 misses there are 2 misses in the L2 cache; then the main memory will
be accessed only 2 times. It is clear that the miss penalty is reduced considerably
compared to the previous case, thereby improving the performance of the cache memory.
We can observe from the above 3 cases that we are trying to decrease the number of
main memory references and thus the miss penalty in order to improve the overall
system performance. It is also important to note that in a multilevel cache design,
the L1 cache is attached to the CPU; it is small in size but fast. The L2 cache, on
the other hand, is attached to the primary (L1) cache; it is larger and slower than
L1, but still faster than main memory.
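The reference counting in the three cases above can be reproduced with a small simulation. The sketch below is illustrative only: the hit and miss counts (3 L1 misses, 2 L2 misses out of 10 references) come directly from the example, while the access-time figures (1, 10, 100 cycles) are made-up assumptions used to show how each extra cache level cuts the number of main memory accesses and the average access time.

```python
# Illustrative sketch: counting main-memory accesses and average access time
# for the three designs in the example above. Latency numbers are assumed.

REFERENCES = 10          # total memory references made by the CPU
L1_MISSES = 3            # misses in the L1 cache (from the example)
L2_MISSES = 2            # of those, misses that also miss in L2 (from the example)

L1_TIME, L2_TIME, MEM_TIME = 1, 10, 100   # assumed access times in cycles

# Case 1: no cache -- every reference goes to main memory.
case1_accesses = REFERENCES
case1_total = REFERENCES * MEM_TIME

# Case 2: single cache -- only the 3 misses reach main memory.
case2_accesses = L1_MISSES
case2_total = REFERENCES * L1_TIME + L1_MISSES * MEM_TIME

# Case 3: two-level cache -- only the 2 L2 misses reach main memory.
case3_accesses = L2_MISSES
case3_total = REFERENCES * L1_TIME + L1_MISSES * L2_TIME + L2_MISSES * MEM_TIME

for name, accesses, total in [
    ("No cache", case1_accesses, case1_total),
    ("Single cache", case2_accesses, case2_total),
    ("Two-level cache", case3_accesses, case3_total),
]:
    print(f"{name:16s} main memory accesses: {accesses:2d}, "
          f"average access time: {total / REFERENCES:.1f} cycles")
```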
Write Through and Write Back in Cache

• Caching is a technique of temporarily storing a copy of data in rapidly accessible storage.
The cache stores the most recently used words in a small memory to increase the speed at
which data is accessed. It acts as a buffer between the RAM and the CPU, and thus increases
the speed at which data is available to the processor.
• Whenever a processor wants to write a word, it checks whether the address it wants to write
the data to is present in the cache or not. If the address is present in the cache, it is a Write Hit.

• On a write hit, we can update the value in the cache and avoid an expensive main memory access.
But this results in an inconsistent data problem: since the cache and main memory now hold
different data, it causes problems when two or more devices share the main memory (as in a
multiprocessor system).
This is where Write Through and Write Back come into the picture.
Write Through

• In write-through, data is simultaneously updated in the cache and in main memory. This
process is simpler and more reliable. It is used when writes to the cache are not frequent
(the number of write operations is small).
• It helps in data recovery (in case of a power outage or system failure). A data write
will experience latency (delay), as we have to write to two locations (both memory
and cache). It solves the inconsistency problem, but it undermines the advantage of
having a cache for write operations (as the whole point of using a cache was to
avoid multiple accesses to main memory).
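A minimal sketch of the write-through policy is shown below. It uses a toy dictionary-based model of the cache and main memory invented for this note (not any particular library or real design); the point is only that every write on a hit goes to both the cache and main memory, so the two never disagree.

```python
# Minimal write-through sketch (illustrative toy model, not a real cache design).
# Both the cache and main memory are modelled as plain dictionaries.

class WriteThroughCache:
    def __init__(self, memory):
        self.memory = memory     # backing main memory: address -> value
        self.cache = {}          # cache contents: address -> value

    def write(self, address, value):
        if address in self.cache:            # write hit
            self.cache[address] = value      # update the cached copy ...
        self.memory[address] = value         # ... and always update memory too,
                                             # so cache and memory stay consistent

    def read(self, address):
        if address not in self.cache:        # read miss: fetch from memory
            self.cache[address] = self.memory[address]
        return self.cache[address]


memory = {0x10: 1, 0x20: 2}
c = WriteThroughCache(memory)
c.read(0x10)            # bring 0x10 into the cache
c.write(0x10, 99)       # write hit: cache and memory both become 99
assert c.cache[0x10] == memory[0x10] == 99
```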
Write Back

• The data is updated only in the cache and written to memory at a later time. Data is
updated in memory only when the cache line is ready to be replaced (cache line
replacement is done using policies such as Belady's optimal algorithm, Least Recently
Used (LRU), FIFO, LIFO, and others, depending on the application).
Write Back is also known as Write Deferred.
• Dirty Bit: Each block in the cache needs a bit to indicate whether the data present in the
cache has been modified (dirty) or not modified (clean). If it is clean, there is no need to
write it back into memory when the block is replaced; this reduces the number of write
operations to memory. If the cache fails, or if the system fails or loses power, the modified
data will be lost, because it is nearly impossible to restore data from the cache once it is
lost. (A minimal sketch of write back with a dirty bit is shown after this list.)
• If a write occurs to a location that is not present in the cache (a Write Miss), we have two
options: Write Allocation and Write Around.
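Below is a minimal, illustrative sketch of write back with a dirty bit. It is again a toy model invented for this note (a single-block cache with write-allocate on a write miss), not a specific real implementation; the key point is that a write only updates the cache and sets the dirty bit, and memory is updated later, when a dirty block is evicted.

```python
# Minimal write-back sketch with a dirty bit (illustrative toy model).
# For simplicity the cache holds a single block and uses write-allocate:
# a write miss first brings the block into the cache, then writes it there.

class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory            # backing main memory: address -> value
        self.address = None             # address of the one cached block
        self.value = None
        self.dirty = False              # dirty bit: cached copy differs from memory?

    def _load(self, address):
        # Evict the current block first; only dirty blocks are written back.
        if self.address is not None and self.dirty:
            self.memory[self.address] = self.value
        self.address, self.value, self.dirty = address, self.memory[address], False

    def read(self, address):
        if self.address != address:     # read miss: fetch the block from memory
            self._load(address)
        return self.value

    def write(self, address, value):
        if self.address != address:     # write miss: write-allocate
            self._load(address)
        self.value = value              # update only the cache ...
        self.dirty = True               # ... and mark the block dirty


memory = {0xA: 1, 0xB: 2}
c = WriteBackCache(memory)
c.write(0xA, 99)                        # memory[0xA] is still 1 (write deferred)
assert memory[0xA] == 1 and c.dirty
c.read(0xB)                             # eviction writes the dirty block back
assert memory[0xA] == 99
```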
