Memory Organization
Prepared by
G. S. Suresh Kumar, M.Tech, B.Tech (Ph.D.)
CONTENTS
1. Basic Concepts
2. Semiconductor RAM
3. ROM
4. Speed, Size and cost
5. Cache memories
6. Improving cache performance
7. Virtual memory, Memory management requirements
8. Associative memories, Secondary storage devices
Introduction
• Programs and the data they operate on are held in the main memory of the
computer during execution.
• Execution speed of programs is highly dependent on the speed with which
instructions and data can be transferred between the processor and the
memory.
• It is also important to have sufficient memory to facilitate execution of large
programs having large amounts of data.
• Ideally, the memory would be fast, large, and inexpensive. Unfortunately,
higher speed comes at increased cost.
Basic Concepts
• The maximum size of the memory that can be used in any computer is
determined by the addressing scheme.
• For example, a computer that generates 16-bit addresses is capable of
addressing up to 2^16 = 64K (kilo) memory locations.
• Machines whose instructions generate 32-bit addresses can utilize a memory
that contains up to 2^32 = 4G (giga) locations, whereas machines with 64-bit
addresses can access up to 2^64 = 16E (exa) ≈ 16 × 10^18 locations.
Basic Concepts
• The number of locations represents the size of the address space of the
computer.
• The memory is usually designed to store and retrieve data in word-length
quantities. Consider, for example, a byte-addressable computer whose
instructions generate 32-bit addresses.
• When a 32-bit address is sent from the processor to the memory unit, the high
order 30 bits determine which word will be accessed. If a byte quantity is
specified, the low-order 2 bits of the address specify which byte location is
involved.
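As a small illustration, the following Python sketch (the address value is chosen arbitrarily) shows how a 32-bit byte address splits into a word address and a byte offset, and checks the address-space sizes quoted above:

```python
# Split a 32-bit byte address into the high-order 30-bit word
# address and the low-order 2-bit byte offset (4 bytes per word).
def split_address(addr):
    word_address = addr >> 2    # high-order 30 bits select the word
    byte_offset = addr & 0b11   # low-order 2 bits select the byte
    return word_address, byte_offset

addr = 0x00001F4A  # an arbitrary example address
print(split_address(addr))  # -> (2002, 2)

print(2**16)  # 65536 locations (64K) with 16-bit addresses
print(2**32)  # 4294967296 locations (4G) with 32-bit addresses
```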
Basic Concepts
• A useful measure of the speed of memory units is the time that
elapses between the initiation of an operation to transfer a word of
data and the completion of that operation.
• This is referred to as the memory access time.
• Memory cycle time is the minimum time delay required between the
initiation of two successive memory operations.
• A memory unit is called a random-access memory (RAM) if the access
time to any location is the same, independent of the location’s
address.
• The technology for implementing computer memories uses
semiconductor integrated circuits
Basic Concepts
Cache Memory
• The processor of a computer can usually process instructions and data faster than
they can be fetched from the main memory
• One way to reduce the memory access time is to use a cache memory
Virtual Memory
• Virtual memory is another important concept related to memory organization.
• With this technique, only the active portions of a program are stored in the main
memory, and the remainder is stored on the much larger secondary storage device
Basic Concepts
Block Transfers
• Data move frequently between the main memory and the cache and between
the main memory and the disk
• These transfers do not occur one word at a time.
• Data are always transferred in contiguous blocks involving tens, hundreds,
or thousands of words.
• Data transfers between the main memory and high-speed devices such as
a graphic display or an Ethernet interface also involve large blocks of data.
2. Semiconductor RAM Memories
• Semiconductor random-access memories (RAMs) are available in a wide
range of speeds.
• Their cycle times range from 100 ns to less than 10 ns.
• This section discusses the main characteristics of these memories,
beginning with the way that memory cells are organized inside a chip.
8.2.1 Internal Organization of
Memory Chips
• Memory cells are usually organized in the form of an array, in which each cell is
capable of storing one bit of information.
• A possible organization is illustrated in Figure 8.2
• Each row of cells constitutes a memory word, and all cells of a row are connected to a
common line referred to as the word line, which is driven by the address decoder on
the chip.
• The cells in each column are connected to a Sense/Write circuit by two bit lines, and
the Sense/Write circuits are connected to the data input/output lines of the chip.
• During a Read operation, these circuits sense, or read, the information stored in the
cells selected by a word line and place this information on the output data lines.
• During a Write operation, the Sense/Write circuits receive input data and store them in
the cells of the selected word.
8.2.1 Internal Organization of
Memory Chips
• Figure 8.2 is an example of a very small memory circuit consisting of 16 words
of 8 bits each.
• This is referred to as a 16 × 8 organization.
• The data input and the data output of each Sense/Write circuit are connected
to a single bidirectional data line that can be connected to the data lines
of a computer.
• Two control lines, R/W and CS, are provided.
• The R/W (Read/Write) input specifies the required operation, and the CS (Chip
Select) input selects a given chip in a multichip memory system.
8.2.1 Internal Organization of
Memory Chips
• The memory circuit in Figure 8.2 stores 128 bits and requires 14 external
connections for address, data, and control lines.
• It also needs two lines for power supply and ground connections.
• Consider now a slightly larger memory circuit, one that has 1K (1024) memory cells.
• This circuit can be organized as a 128 × 8 memory, requiring a total of 19 external
connections.
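The pin counts quoted here can be verified with a small sketch. The helper below follows the counting convention of the text: address lines + data lines + the R/W and CS control lines, with the two power/ground lines counted separately for the 16 × 8 chip but included for the 128 × 8 chip:

```python
import math

# External connections = address lines + data lines + 2 control lines
# (R/W and CS), optionally + 2 lines for power supply and ground.
def external_connections(words, bits_per_word, include_power=False):
    address_lines = math.ceil(math.log2(words))
    total = address_lines + bits_per_word + 2
    return total + 2 if include_power else total

print(external_connections(16, 8))                       # 14: 16 x 8 chip
print(external_connections(128, 8, include_power=True))  # 19: 128 x 8 chip
```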
8.2.1 Internal Organization of
Memory Chips
• Alternatively, the same number of cells can be organized into a 1K × 1 format.
• In this case, a 10-bit address is needed, but there is only one data line,
resulting in 15 external connections.
• Figure 8.3 shows such an organization.
• The required 10-bit address is divided into two groups of 5 bits each to form
the row and column addresses for the cell array.
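A one-function sketch of this split:

```python
# Split the 10-bit address of the 1K x 1 chip into its 5-bit row
# address and 5-bit column address.
def split_1k_address(addr):
    row = (addr >> 5) & 0b11111  # high-order 5 bits: one of 32 rows
    col = addr & 0b11111         # low-order 5 bits: one of 32 columns
    return row, col

print(split_1k_address(0b1011001101))  # -> (22, 13)
```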
8.2.1 Internal Organization of
Memory Chips
Organization of a 1K × 1 memory chip
8.2.1 Internal Organization of
Memory Chips
• A row address selects a row of 32 cells, all of which are accessed in
parallel, but only one of these cells is connected to the external data
line, as determined by the column address.
• Larger chips have essentially the same organization as Figure 8.3, but
use a larger memory cell array and have more external connections.
8.2.2 Static Memories
• Memories that consist of circuits capable of retaining their state as long as
power is applied are known as static memories.
• Figure 8.4 illustrates how a static RAM (SRAM) cell may be implemented.
• Two inverters are cross-connected to form a latch.
• The latch is connected to two bit lines by transistors T1 and T2.
8.2.2 Static Memories
• These transistors act as switches that can be opened or closed under
control of the word line.
• When the word line is at ground level, the transistors are turned off
and the latch retains its state.
• For example, if the logic value at point X is 1 and at point Y is 0, this
state is maintained as long as the signal on the word line is at ground
level. Assume that this state represents the value 1.
8.2.2 Static Memories
Read Operation
• In order to read the state of the SRAM cell, the word line is activated to close switches T1 and T2.
• If the cell is in state 1, the signal on bit line b is high and the signal on bit line b′ is low.
• The opposite is true if the cell is in state 0. Thus, b and b′ are always complements of each other.
• The Sense/Write circuit at the end of the two bit lines monitors their state and sets the
corresponding output accordingly
8.2.2 Static Memories
Write Operation
• During a Write operation, the Sense/Write circuit drives bit lines b and b′ , instead
of sensing their state.
• It places the appropriate value on bit line b and its complement on b′ and
activates the word line.
• This forces the cell into the corresponding state, which the cell retains when the
word line is deactivated.
CMOS Cell
• A CMOS realization of the SRAM cell is given in the figure.
• Transistor pairs (T3, T5) and (T4, T6) form the inverters in the latch.
• The state of the cell is read or written as just explained.
• For example, in state 1, the voltage at point X is maintained high by
having transistors T3 and T6 on, while T4 and T5 are off.
• If T1 and T2 are turned on, bit lines b and b′ will have high and low
signals, respectively.
CMOS Cell
• Continuous power is needed for the cell to retain its state.
• If power is interrupted, the cell’s contents are lost.
• When power is restored, the latch settles into a stable state, but not
necessarily the same state the cell was in before the interruption.
• Hence, SRAMs are said to be volatile memories because their
contents are lost when power is interrupted.
CMOS Cell
• A major advantage of CMOS SRAMs is their very low power
consumption, because current flows in the cell only when the cell is
being accessed.
• Otherwise, T1, T2, and one transistor in each inverter are turned off,
ensuring that there is no continuous electrical path between Vsupply
and ground.
• Static RAMs can be accessed very quickly. Access times on the order
of a few nanoseconds are found in commercially available chips.
• SRAMs are used in applications where speed is of critical concern.
DYNAMIC RAMS
• Static RAMs are fast, but their cells require several transistors.
• Less expensive and higher density RAMs can be implemented with
simpler cells.
• But, these simpler cells do not retain their state for a long period, unless
they are accessed frequently for Read or Write operations.
• Memories that use such cells are called dynamic RAMs (DRAMs).
• Information is stored in a dynamic memory cell in the form of a charge
on a capacitor, but this charge can be maintained for only tens of
milliseconds.
DYNAMIC RAMS
• Since the cell is required to store information for a much longer time,
its contents must be periodically refreshed by restoring the capacitor
charge to its full value.
• This occurs when the contents of the cell are read or when new
information is written into it.
• An example of a dynamic memory cell that consists of a capacitor, C,
and a transistor, T, is shown in the figure.
• To store information in this cell, transistor T is turned on and an
appropriate voltage is applied to the bit line. This causes a known
amount of charge to be stored in the capacitor.
DYNAMIC RAMS
After the transistor is turned off, the charge
remains stored in the capacitor, but not for
long.
The capacitor begins to discharge.
This is because the transistor continues to
conduct a tiny amount of current, measured in
picoamperes, after it is turned off.
Hence, the information stored in the cell can
be retrieved correctly only if it is read before
the charge in the capacitor drops below some
threshold value
DYNAMIC RAMS
• During a Read operation, the transistor in a selected cell is turned on. A sense
amplifier connected to the bit line detects whether the charge stored in the
capacitor is above or below the threshold value.
• If the charge is above the threshold, the sense amplifier drives the bit line to the full
voltage representing the logic value 1.
• As a result, the capacitor is recharged to the full charge corresponding to the logic
value 1.
• If the sense amplifier detects that the charge in the capacitor is below the threshold
value, it pulls the bit line to ground level to discharge the capacitor fully.
• Thus, reading the contents of a cell automatically refreshes its contents. Since the
word line is common to all cells in a row, all cells in a selected row are read and
refreshed at the same time.
DYNAMIC RAMS
• A 256-Megabit DRAM chip, configured as 32M × 8, is shown in the figure.
The cells are organized in the form of a 16K × 16K array.
• The 16,384 cells in each row are divided into 2,048 groups of 8,
forming 2,048 bytes of data.
• Therefore, 14 address bits are needed to select a row, and another 11
bits are needed to specify a group of 8 bits in the selected row.
• In total, a 25-bit address is needed to access a byte in this memory.
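The address arithmetic above can be checked with a short Python sketch (the function name is just for illustration; the figures are those quoted in the text):

```python
# Decompose the 25-bit address of the 32M x 8 DRAM (16K x 16K cell
# array) into its 14-bit row and 11-bit column address fields.
def dram_address_fields(addr):
    row = (addr >> 11) & 0x3FFF  # high-order 14 bits: one of 16,384 rows
    col = addr & 0x7FF           # low-order 11 bits: one of 2,048 byte groups
    return row, col

assert 2**14 * 2**11 == 32 * 2**20   # 32M byte locations -> 25-bit address
assert 16384 * 16384 == 256 * 2**20  # 256 Mbits of cells in the array
print(dram_address_fields(0x1ABCDEF))  # arbitrary 25-bit example address
```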
DYNAMIC RAMS
• The high-order 14 bits and the low-order 11 bits of the address
constitute the row and column addresses of a byte, respectively.
• To reduce the number of pins needed for external connections, the
row and column addresses are multiplexed on 14 pins.
• During a Read or a Write operation, the row address is applied first. It
is loaded into the row address latch in response to a signal pulse on
an input control line called the Row Address Strobe (RAS).
• This causes a Read operation to be initiated, in which all cells in the
selected row are read and refreshed.
DYNAMIC RAMS
• Shortly after the row address is loaded, the column address is applied to the address pins
and loaded into the column address latch under control of a second control line called the
Column Address Strobe (CAS).
• The information in this latch is decoded and the appropriate group of 8 Sense/Write circuits
is selected.
• If the R/W control signal indicates a Read operation, the output values of the selected
circuits are transferred to the data lines, D7−D0.
• If it indicates a Write operation, the information on the D7−D0 lines is
transferred to the selected circuits and used to overwrite the contents of
the selected cells in the corresponding 8 columns.
• We should note that in commercial DRAM chips, the RAS and CAS control
signals are active when low.
• Hence, addresses are latched when these signals change from high to low.
DYNAMIC RAMS
• The signals are shown in diagrams with overbars, as RAS and CAS, to
indicate that they are active low.
• The timing of the operation of the DRAM described above is controlled
by the RAS and CAS signals.
• These signals are generated by a memory controller circuit external to
the chip when the processor issues a Read or a Write command.
• During a Read operation, the output data are transferred to the processor
after a delay equivalent to the memory’s access time.
• Such memories are referred to as asynchronous DRAMs.
• The memory controller is also responsible for refreshing the data stored
in the memory chips, as we describe later.
SYNCHRONOUS DRAMS
• Developments in memory technology have resulted in DRAMs whose operation is
synchronized with a clock signal.
• Such memories are known as synchronous DRAMs (SDRAMs).
• Their structure is shown in the figure.
• The cell array is the same as in asynchronous DRAMs.
• The distinguishing feature of an SDRAM is the use of a clock signal,
the availability of which makes it possible to incorporate control
circuitry on the chip that provides many useful features.
FAST PAGE MODE FEATURE
When the DRAM described above is accessed, the contents of all
16,384 cells in the selected row are sensed, but only 8 bits are placed on
the data lines, D7−D0.
This byte is selected by the column address, bits A10−A0.
A simple addition to the circuit makes it possible to access the other bytes
in the same row without having to reselect the row.
Each sense amplifier also acts as a latch.
When a row address is applied, the contents of all cells in the selected row
are loaded into the corresponding latches.
Then, it is only necessary to apply different column addresses to place the
different bytes on the data lines.
FAST PAGE MODE FEATURE
• This arrangement leads to a very useful feature.
• All bytes in the selected row can be transferred in sequential order by applying
a consecutive sequence of column addresses under the control of successive
CAS signals.
• Thus, a block of data can be transferred at a much faster rate than can be
achieved for transfers involving random addresses.
• The block transfer capability is referred to as the fast page mode feature. (A
large block of data is often called a page.)
• It was pointed out earlier that the vast majority of main memory transactions
involve block transfers.
• The faster rate attainable in the fast page mode makes dynamic RAMs
particularly well suited to this environment.
SYNCHRONOUS DRAMS
• SDRAMs have built-in refresh circuitry, with a refresh counter to
provide the addresses of the rows to be selected for refreshing.
• As a result, the dynamic nature of these memory chips is almost
invisible to the user.
• The address and data connections of an SDRAM may be buffered by
means of registers, as shown in the figure.
• Internally, the Sense/Write amplifiers function as latches, as in
asynchronous DRAMs.
SYNCHRONOUS DRAMS
• A Read operation causes the contents of all cells in the selected row
to be loaded into these latches.
• The data in the latches of the selected column are transferred into the
data register, thus becoming available on the data output pins.
• The buffer registers are useful when transferring large blocks of data
at very high speed.
• By isolating external connections from the chip’s internal circuitry, it
becomes possible to start a new access operation while data are
being transferred to or from the registers.
SYNCHRONOUS DRAMS
• SDRAMs have several different modes of operation, which can be
selected by writing control information into a mode register.
• For example, burst operations of different lengths can be specified. It
is not necessary to provide externally-generated pulses on the CAS
line to select successive columns.
• The necessary control signals are generated internally using a column
counter and the clock signal.
• New data are placed on the data lines at the rising edge of each clock
pulse.
SYNCHRONOUS DRAMS
• The figure shows a timing diagram for a typical burst read of length 4.
• First, the row address is latched under control of the RAS signal.
• The memory typically takes 5 or 6 clock cycles (we use 2 in the figure for
simplicity) to activate the selected row.
• Then, the column address is latched under control of the CAS signal. After
a delay of one clock cycle, the first set of data bits is placed on the data
lines.
• The SDRAM automatically increments the column address to access the
next three sets of bits in the selected row, which are placed on the data
lines in the next 3 clock cycles.
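As a rough sketch of this timing (the exact cycle counts depend on the device; here we assume the five-cycle latency to the first word used in the latency discussion that follows):

```python
# Rough cycle count for an SDRAM burst read of length 4: the first
# word arrives `latency` cycles after RAS is asserted, and each of
# the remaining words follows in the next consecutive clock cycle.
def burst_read_cycles(latency=5, burst_length=4):
    return latency + (burst_length - 1)

print(burst_read_cycles())  # 8 clock cycles in total
```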
SYNCHRONOUS DRAMS
• Synchronous DRAMs can deliver data at a very high rate, because all the
control signals needed are generated inside the chip.
• The initial commercial SDRAMs in the 1990s were designed for clock
speeds of up to 133 MHz.
• As technology evolved, much faster SDRAM chips were developed.
• Today’s SDRAMs operate with clock speeds that can exceed 1 GHz.
Latency and Bandwidth
• Data transfers to and from the main memory often involve blocks of
data.
• The speed of these transfers has a large impact on the performance
of a computer system.
• During block transfers, memory latency is the amount of time it takes
to transfer the first word of a block.
• The time required to transfer a complete block depends also on the
rate at which successive words can be transferred and on the size of
the block.
Latency and Bandwidth
• The time between successive words of a block is much shorter than
the time needed to transfer the first word.
• For instance, in the timing diagram in the figure, the access cycle begins
with the assertion of the RAS signal.
• The first word of data is transferred five clock cycles later.
• Thus, the latency is five clock cycles.
• If the clock rate is 500 MHz, then the latency is 10 ns.
• The remaining three words are transferred in consecutive clock cycles,
at the rate of one word every 2 ns.
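These numbers can be reproduced with a few lines of Python:

```python
# Latency and block-transfer time for the example above: a 500-MHz
# clock (2 ns per cycle), five cycles to the first word, then one
# word per cycle for the remaining three words of the block.
clock_hz = 500e6
cycle_ns = 1e9 / clock_hz              # 2.0 ns per clock cycle
latency_ns = 5 * cycle_ns              # 10.0 ns to the first word
block_ns = latency_ns + 3 * cycle_ns   # 16.0 ns for the 4-word block
print(cycle_ns, latency_ns, block_ns)
```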
Latency and Bandwidth
A useful performance measure is the number of bits or bytes that can
be transferred in one second.
This measure is often referred to as the memory bandwidth.
It depends on the speed of access to the stored data and on the
number of bits that can be accessed in parallel.
The rate at which data can be transferred to or from the memory
depends on the bandwidth of the system interconnections.
For this reason, the system interconnections are designed to ensure that the
bandwidth available for data transfers between the processor and the
memory is very high.
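As an illustration (the 64-bit width below is an assumed value, not a figure from the text), bandwidth is simply the number of bits transferred in parallel times the transfer rate:

```python
# Memory bandwidth = bits accessed in parallel x transfers per second.
width_bits = 64            # assumed interface width (illustrative only)
transfers_per_s = 500e6    # one transfer every 2 ns, as in the example above
bandwidth_bits = width_bits * transfers_per_s
print(bandwidth_bits / 8 / 1e9, "GB/s")  # -> 4.0 GB/s
```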
Double-Data-Rate SDRAM
• In the continuous quest for improved performance, faster versions of
SDRAMs have been developed.
• In addition to faster circuits, new organizational and operational features
make it possible to achieve high data rates during block transfers.
• The key idea is to take advantage of the fact that a large number of bits are
accessed at the same time inside the chip when a row address is applied.
• Various techniques are used to transfer these bits quickly to the pins of the
chip.
• To make the best use of the available clock speed, data are transferred
externally on both the rising and falling edges of the clock.
Double-Data-Rate SDRAM
• For this reason, memories that use this technique are called double-data-rate
SDRAMs (DDR SDRAMs).
• Several versions of DDR chips have been developed. The earliest version is
known as DDR.
• Later versions, called DDR2, DDR3, and DDR4, have enhanced capabilities.
• They offer increased storage capacity, lower power, and faster clock speeds.
• For example, DDR2 and DDR3 can operate at clock frequencies of 400 and 800
MHz, respectively.
• Therefore, they transfer data using the effective clock speeds of 800 and 1600
MHz, respectively.
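The effective rates quoted above follow directly from transferring on both clock edges:

```python
# DDR SDRAMs transfer data on both the rising and falling clock edges,
# so the effective data rate is twice the clock frequency.
for name, clock_mhz in [("DDR2", 400), ("DDR3", 800)]:
    print(f"{name}: {clock_mhz} MHz clock -> {2 * clock_mhz} MT/s effective")
```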
Rambus Memory
The rate of transferring data between the memory and the processor is a
function of both the bandwidth of the memory and the bandwidth of its
connection to the processor.
Rambus is a memory technology that achieves a high data transfer rate by
providing a high-speed interface between the memory and the processor.
One way of increasing the bandwidth of this connection is to use a wider data
path. However, this requires more space and more pins, increasing system cost.
The alternative is to use fewer wires with a higher clock speed.
This is the approach taken by Rambus.
Rambus Memory
• Rambus technology competes directly with the DDR SDRAM
technology.
• Each has certain advantages and disadvantages.
• A nontechnical consideration is that the specification of DDR SDRAM
is an open standard that can be used free of charge.
• Rambus, on the other hand, is a proprietary scheme that must be
licensed by chip manufacturers.
SPEED, SIZE AND COST – REVIEW
ABOUT MEMORY HIERARCHY
• An ideal memory would be fast, large, and inexpensive.
• It is clear that a very fast memory can be implemented using static
RAM chips.
• But, these chips are not suitable for implementing large memories,
because their basic cells are larger and consume more power than
dynamic RAM cells.
SPEED, SIZE AND COST – REVIEW
ABOUT MEMORY HIERARCHY
• Although dynamic memory units with gigabyte capacities can be
implemented at a reasonable cost, the affordable size is still small
compared to the demands of large programs with voluminous data.
• A solution is provided by using secondary storage, mainly magnetic
disks, to provide the required memory space.
• Disks are available at a reasonable cost, and they are used extensively
in computer systems.
• However, they are much slower than semiconductor memory units.
SPEED, SIZE AND COST – REVIEW
ABOUT MEMORY HIERARCHY
• In summary, a very large amount of cost-effective storage can be
provided by magnetic disks, and a large and considerably faster, yet
affordable, main memory can be built with dynamic RAM technology.
• This leaves the more expensive and much faster static RAM
technology to be used in smaller units where speed is of the essence,
such as in cache memories.
CACHE MEMORIES
• The cache is a small and very fast memory, interposed between the
processor and the main memory
• Its purpose is to make the main memory appear to the processor to
be much faster than it actually is.
• The effectiveness of this approach is based on a property of computer
programs called locality of reference.
• Analysis of programs shows that most of their execution time is spent
in routines in which many instructions are executed repeatedly.
• These instructions may constitute a simple loop, nested loops, or a
few procedures that repeatedly call each other
CACHE MEMORIES
• The actual detailed pattern of instruction sequencing is not important
—the point is that many instructions in localized areas of the program
are executed repeatedly during some time period.
• This behaviour manifests itself in two ways: temporal and spatial.
• The first means that a recently executed instruction is likely to be
executed again very soon.
• The spatial aspect means that instructions close to a recently
executed instruction are also likely to be executed soon.
CACHE MEMORIES
• Conceptually, operation of a cache memory is very simple.
• The memory control circuitry is designed to take advantage of the
property of locality of reference.
• Temporal locality suggests that whenever an information item,
instruction or data, is first needed, this item should be brought into
the cache, because it is likely to be needed again soon.
• Spatial locality suggests that instead of fetching just one item from
the main memory to the cache, it is useful to fetch several items that
are located at adjacent addresses as well
CACHE MEMORIES
The term cache block refers to a set of contiguous address locations of
some size.
Another term that is often used to refer to a cache block is a cache line.
When the processor first references a memory location, the block containing it
is transferred into the cache. Subsequently, when the program references any
of the locations in this block, the desired contents are read directly from
the cache.
Usually, the cache memory can store a reasonable number of blocks at
any given time, but this number is small compared to the total number
of blocks in the main memory.
The correspondence between the main memory blocks and those in
the cache is specified by a mapping function.
CACHE MEMORIES
• When the cache is full and a memory word (instruction or data) that is
not in the cache is referenced, the cache control hardware must
decide which block should be removed to create space for the new
block that contains the referenced word.
• The collection of rules for making this decision constitutes the cache’s
replacement algorithm.
CACHE MEMORIES-Cache Hits
• The processor does not need to know explicitly about the existence of
the cache. It simply issues Read and Write requests using addresses
that refer to locations in the memory.
• The cache control circuitry determines whether the requested word
currently exists in the cache. If it does, the Read or Write operation is
performed on the appropriate cache location.
• In this case, a read or write hit is said to have occurred
CACHE MEMORIES-Cache Hits
• The main memory is not involved when there is a cache hit in a Read
operation.
• For a Write operation, the system can proceed in one of two ways.
• In the first technique, called the write-through protocol, both the
cache location and the main memory location are updated.
• The second technique is to update only the cache location and to
mark the block containing it with an associated flag bit, often called
the dirty or modified bit
CACHE MEMORIES-Cache Hits
• The main memory location of the word is updated later, when the block
containing this marked word is removed from the cache to make room for a new
block.
• This technique is known as the write-back, or copy-back, protocol.
• The write-through protocol is simpler than the write-back protocol, but it
results in unnecessary Write operations in the main memory when a cache word
is updated several times during its residency in the cache.
• The write-back protocol also involves unnecessary Write operations, because
all words of the block are eventually written back, even if only a single
word has been changed while the block was in the cache.
• The write-back protocol is used most often, to take advantage of the high speed
with which data blocks can be transferred to memory chips.
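A minimal Python sketch of the two write-hit policies described above; the dictionaries standing in for the cache and the main memory are hypothetical, not any particular machine's hardware:

```python
# Sketch of write-through vs. write-back on a write hit.
cache, main_memory, dirty = {}, {}, set()

def write_through(addr, value):
    cache[addr] = value        # update the cache location...
    main_memory[addr] = value  # ...and the main memory location immediately

def write_back(addr, value):
    cache[addr] = value        # update only the cache location
    dirty.add(addr)            # mark the block with the dirty/modified bit

def evict(addr):
    # When a dirty block is removed from the cache, it must first be
    # copied back to the main memory.
    if addr in dirty:
        main_memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)
```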
CACHE MEMORIES-Cache Misses
A Read operation for a word that is not in the cache constitutes a Read miss.
It causes the block of words containing the requested word to be copied from
the main memory into the cache.
After the entire block is loaded into the cache, the particular word requested
is forwarded to the processor.
Alternatively, this word may be sent to the processor as soon as it is read
from the main memory.
The latter approach, which is called load-through, or early restart, reduces
the processor’s waiting time somewhat, at the expense of more complex
circuitry
CACHE MEMORIES-Cache Misses
• When a Write miss occurs in a computer that uses the write-through
protocol, the information is written directly into the main memory.
• For the write-back protocol, the block containing the addressed word
is first brought into the cache, and then the desired word in the cache
is overwritten with the new information.
CACHE MEMORIES-MAPPING
FUNCTIONS
• There are several possible methods for determining where memory
blocks are placed in the cache.
• It is instructive to describe these methods using a specific small
example.
• Consider a cache consisting of 128 blocks of 16 words each, for a total
of 2048 (2K) words, and assume that the main memory is addressable
by a 16-bit address.
• The main memory has 64K words, which we will view as 4K blocks of
16 words each. For simplicity, we have assumed that consecutive
addresses refer to consecutive words.
CACHE MEMORIES-Direct Mapping
• The simplest way to determine cache locations in which to store memory
blocks is the direct-mapping technique.
• In this technique, block j of the main memory maps onto block j modulo 128
of the cache, as depicted in the figure.
• Thus, whenever one of the main memory blocks 0, 128, 256, . . . is loaded into the cache, it is
stored in cache block 0.
• Blocks 1, 129, 257, . . . are stored in cache block 1, and so on. Since more than one memory block
is mapped onto a given cache block position, contention may arise for that position even when
the cache is not full.
• For example, instructions of a program may start in block 1 and continue in block 129, possibly
after a branch.
• As this program is executed, both of these blocks must be transferred to the block-1 position in
the cache.
• Contention is resolved by allowing the new block to overwrite the currently resident block.
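A short sketch of the direct-mapping arithmetic in this example: with 128 cache blocks of 16 words each and 16-bit addresses, an address splits into a 5-bit tag, a 7-bit block field, and a 4-bit word field:

```python
# Direct mapping for the example cache: 128 blocks of 16 words each,
# 16-bit main memory addresses (4K blocks of 16 words).
CACHE_BLOCKS = 128

def cache_position(memory_block):
    return memory_block % CACHE_BLOCKS   # block j -> cache block j mod 128

def address_fields(addr):
    word = addr & 0xF           # low-order 4 bits: word within the block
    block = (addr >> 4) & 0x7F  # next 7 bits: cache block position
    tag = addr >> 11            # high-order 5 bits: tag stored with the block
    return tag, block, word

print(cache_position(0), cache_position(128), cache_position(256))  # 0 0 0
print(cache_position(1), cache_position(129))                       # 1 1
print(address_fields(0b1011000000010011))  # -> (22, 1, 3)
```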