
Unit – 4

Memory Hierarchy
Memory Hierarchy :
• Memory is the device used to store content in a computer system.
• We don't use a single memory in a computer; instead we use a memory hierarchy.
• Why do we need a memory hierarchy?
• The reason for having 2 or 3 levels of memory hierarchy is performance and
economics.

Goal of Memory Hierarchy :

1. To maximize the access speed
2. To minimize the per-bit storage cost

• Not all accumulated information is needed by the processor at the same time.

• It is most economical to use low-cost storage devices as backup for information
not currently used by the CPU.
Memory Hierarchy :

Fig - Memory Hierarchy in Computer System

Internal         : Register, Cache
Main Memory      : RAM, ROM
Secondary Memory : HDD, Magnetic Disc
Tertiary Memory  : Tape Drive

Arrows in the figure show the direction of increase: moving up the hierarchy
(toward registers), access speed, per-bit storage cost, and frequency of access
from the CPU increase; moving down (toward tape drives), size and access time
increase.
Memory Hierarchy :
• The memory unit that communicates directly with the CPU is called
the Main memory.
Ex – RAM & ROM
• Devices that provide backup storage are called Auxiliary Memory.
Ex – Magnetic disks & Tapes
• Only the programs and data currently needed by the processor reside in
main memory.
• Other information is stored in auxiliary memory.
• The total memory capacity of a computer can be visualised as a
hierarchy of components.
Memory Hierarchy :
• Slow but high capacity – Auxiliary memory
• Relatively faster – Main Memory
• Even smaller & faster – Cache memory, accessible to the high-speed
processing logic.

• Main Memory occupies a central position: it is able to communicate
directly with the CPU, and with Auxiliary memory through an I/O Processor.
• Programs & data the CPU needs are transferred from Auxiliary to Main memory.
• Programs & data the CPU no longer needs are transferred from Main memory
back to Auxiliary memory.
Memory Hierarchy :
• Cache Memory – a special very high speed memory called cache memory.
• It increases the speed of processing.
• It compensates for the speed differential between main memory
access time & processor logic.
• A small cache is placed between the CPU and Main Memory.
• Its access time is close to the processor logic clock cycle time.
• The I/O Processor manages data transfer between Auxiliary memory and Main Memory.
• The cache organization manages transfer of information between Main Memory and the CPU.
• Each is involved with a different level in the memory hierarchy.
Average cost of Memory :

Memory   Size In Bits   Per Bit Storage Cost   Cost of a Memory
M1       S1             C1                     S1*C1
M2       S2             C2                     S2*C2
M3       S3             C3                     S3*C3
TOTAL    S1+S2+S3                              S1*C1+S2*C2+S3*C3

Average Per-bit Memory Cost = Total Cost / Total Size
                            = (S1*C1 + S2*C2 + S3*C3) / (S1 + S2 + S3)
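The formula above can be computed directly. This is a minimal sketch; the level sizes and per-bit costs passed in the example call are made-up illustrative values, not numbers from the slides.

```python
def average_cost_per_bit(levels):
    """levels: list of (size_in_bits, cost_per_bit) pairs, one per memory level."""
    total_cost = sum(size * cost for size, cost in levels)   # S1*C1 + S2*C2 + ...
    total_size = sum(size for size, _ in levels)             # S1 + S2 + ...
    return total_cost / total_size

# Illustrative: a small expensive level plus a large cheap one.
print(average_cost_per_bit([(100, 2.0), (900, 1.0)]))
```

Because the cheap level dominates the total size, the average lands much closer to the cheap per-bit cost, which is the economic point of the hierarchy.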
Memory Representation :
• Memory is represented as = No. of Cells * Capacity of 1 Cell

Consider a memory of 8 cells, numbered 0 to 7, where each cell has the
same capacity of 8 bits.
1) How many cells do you have? 8
2) What is the capacity of each cell? 8 bits
3) What is the total capacity?

Total = 8 * 8 bits = 64-bit storage capacity
Memory Representation :
• How do we access each & every cell uniquely?
• When the CPU wants to perform some operation on this chip, how does it
identify any cell uniquely?

Address No   Cell No (each cell 8 bits)
00           0
01           1
10           2
11           3

This cell number is called the address and is written in binary number format.
Using this address, a cell is identified uniquely.
Total = 4 * 8 bits = 32-bit Storage Capacity
Memory Representation :

Address No   Cell No (each cell 16 bits)
000          0
001          1
010          2
011          3
100          4
101          5
110          6
111          7

8 cells – 8 different address locations

Total Memory = No. of Cells * Capacity of 1 Cell
             = No. of Memory Locations * Bits per Location

Total = 8 * 16 bits = 128-bit Storage Capacity

Memory Representation :

No of cells:   4 = 2^2    8 = 2^3    16 = 2^4    n = 2^x
Address size:  2 bits     3 bits     4 bits      x = log2(n) bits

A 128 * 8-bit memory stores 128 * 1 B = 128 B.

If the memory is 4 GB, then 4G * 1 B = 2^2 * 2^30 * 1 B = 2^32 B,
so the address line = 32 bits.

Byte-addressable memory means 1 Byte is stored at each address.

For a 16-bit address, if 16 bits can be stored at each location, then how much
is the total memory capacity? (2^16 * 16 bits = 2^16 * 2 B = 128 KB)
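The rule above (n addressable locations need log2(n) address bits) can be sketched as a small helper; the helper name is ours, not from the slides.

```python
def address_bits(num_locations):
    """Number of address bits needed to select one of num_locations cells."""
    if num_locations <= 1:
        return 1
    # For a power of two n = 2^x, (n-1).bit_length() is exactly x.
    return (num_locations - 1).bit_length()

# 4 GB byte-addressable memory has 4 * 2**30 locations of 1 B each.
print(address_bits(4 * 2**30))  # 32, matching the slide's derivation
```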
Main Memory :
• Used to store the currently running program (instructions) and its data.
• The main memory is the central storage unit in a computer system.
• Primary memory holds only those data and instructions on which the
computer is currently working.
• It has limited capacity, and data is lost when power is switched off.
• It is generally made up of semiconductor devices.
• These memories are not as fast as registers.
• The data and instructions required to be processed reside in main memory.
• Two types of Main Memory :

1. RAM – Random Access Memory

2. ROM – Read Only Memory

RAM : Programs that are currently executing (running) are stored in RAM.
So why is ROM in main memory?

Characteristics : ROM – Non-Volatile
RAM – Volatile → when power is cut, the data is flushed out.
• When you turn on your computer, from where does it take its first
instruction?
• The CPU will first execute the initial program from ROM.
• The bootstrap loader is a program whose function is to start the
computer software operating when power is turned on.

• The CPU executes these instructions – 2 operations are performed:

1. Hardware check (POST – power-on self-test)

2. Booting process – the bootstrap program loads the OS from secondary
memory into RAM.
• Types of RAM – SRAM & DRAM
Static RAM (SRAM)                            Dynamic RAM (DRAM)
Made up of : Flip-flops                      Made up of : Capacitors
Bits are stored in voltage form              Bits are stored in the form of electric charge
No refresh needed                            Periodic refresh is required to keep content
                                             (during refresh, R/W can't be performed)
Faster access speed                          Slower access speed
More expensive                               Less expensive
Ex – Cache Memory (Level 1 & 2)              Ex – Main Memory (DDR RAM)
Power consumption – less                     Power consumption – more
Physical placement – on the processor, or    Physical placement – Motherboard
between processor and main memory
Locality of Reference :
• If the CPU has requested one address from memory, then that particular
address or nearby addresses will be accessed soon.
• Locality : Neighbourhood – near to you – nearby places
• Reference : Which reference? – the CPU's references to main memory
• Two types of Locality of Reference :
1. Temporal – according to time (the same location is accessed at different times)
2. Spatial – according to space (nearby locations are accessed)
Use : To improve the performance of the system – the basis for cache memory
Cache Memory :
• Use of a cache reduces the average memory access time.
• If the active portions of the program and data are placed in a fast small memory,
the average memory access time can be reduced, thus reducing the total
execution time of the program. Such a fast small memory is referred to as a cache
memory.
• Cache is a fast, small-capacity memory that should hold the information that is
most likely to be accessed.
• The basic operation of the cache is: when the CPU needs to access memory, the
cache is examined.
• If the word is found in the cache, it is read from this fast memory. If the word
addressed by the CPU is not found in the cache, the main memory is accessed to
read the word.
• The performance of cache memory is frequently measured in terms of a
quantity called hit ratio.
• When the CPU refers to memory and finds the word in cache, it is said to
produce a hit.
• If the word is not found in cache, it is read from main memory and this counts as a miss.

• The ratio of the number of hits divided by the total references to memory
(hits plus misses) is the hit ratio (H).

H = No. of cache hits / Total References
H = No. of cache hits / (Total hits + Total misses)

Miss ratio = (1 - H)

Average memory access time:
Tavg = H * (mem. access time for each hit) +
       (1 - H) * (mem. access time for each miss)
Example:
Suppose the CPU referred to memory 200 times. Among those we had 160 hits &
40 misses. Each hit requires 10 ns & each miss requires 100 ns. Find the
average memory access time.

Total hit time = 160 * 10 = 1600 ns

Total miss time = 40 * 100 = 4000 ns
Total reference time = 5600 ns

Average memory access time = 5600 / 200 = 28 ns


Hit ratio H = 160 / 200 = 0.8
Miss ratio = (1 - H) = 1 - 0.8 = 0.2

Tavg = H * (mem access time for each hit) +
       (1 - H) * (mem access time for each miss)

Tavg = 0.8 * 10 + 0.2 * 100
     = 28 ns
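The worked example above can be reproduced with a few lines of Python (a minimal sketch; the function name is ours):

```python
def avg_access_time(hits, misses, t_hit, t_miss):
    """Tavg = H * t_hit + (1 - H) * t_miss, with H = hits / (hits + misses)."""
    h = hits / (hits + misses)           # hit ratio H
    return h * t_hit + (1 - h) * t_miss

# 200 references: 160 hits at 10 ns, 40 misses at 100 ns
print(avg_access_time(160, 40, 10, 100))  # ≈ 28 ns, as derived above
```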
Tavg = Average memory access time
H = Cache hit ratio
M = Cache miss ratio = (1 - H)
tcm = cache memory access time
tmm = main memory access time

Types of Cache Accesses:

1. Simultaneous Access (Parallel)

Tavg = H * tcm + (1 - H) * tmm

2. Hierarchical Access (Serial)

Tavg = H * tcm + (1 - H) * (tcm + tmm)
     = H * tcm + tcm - H * tcm + (1 - H) * tmm
     = tcm + (1 - H) * tmm
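The two access models can be compared side by side. This is a sketch under assumed timings (tcm = 10 ns, tmm = 100 ns, H = 0.8; illustrative values, not from the slides):

```python
def t_simultaneous(h, tcm, tmm):
    """Parallel: cache and main memory are probed together."""
    return h * tcm + (1 - h) * tmm

def t_hierarchical(h, tcm, tmm):
    """Serial: every access pays tcm first, misses then pay tmm on top."""
    return tcm + (1 - h) * tmm

h, tcm, tmm = 0.8, 10, 100
print(t_simultaneous(h, tcm, tmm))   # ≈ 28 ns
print(t_hierarchical(h, tcm, tmm))   # ≈ 30 ns
```

The serial model is slower for the same H because the cache lookup time is paid on every reference, hit or miss.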
• In case of a cache miss, a block (which contains the demanded byte) is
copied from main memory to cache.
• The transformation of data from main memory to cache memory is
referred to as a mapping process.
• 3 types of Cache Mapping Techniques:
1. Associative mapping
2. Direct mapping
3. Set-Associative mapping
Associative mapping :
• Consider the main memory can store 32K words of 12 bits each.
• The cache is capable of storing 512 of these words at any given time.
• For every word stored in cache, there is a duplicate copy in main
memory.
• The CPU communicates with both memories.
• It first sends a 15-bit address to cache. If there is a hit, the CPU
accepts the 12-bit data from cache, if there is miss, the CPU reads the
word from main memory and the word is then transferred to cache.
• The associative memory stores both the address and content (data)
of the memory word.
• This permits any location in cache to store any word from main
memory.
• Fig. 12-11 shows three words presently stored in the cache.

• The address value of 15 bits is shown as a five-digit octal number and
its corresponding 12-bit word is shown as a four-digit octal number.

• A CPU address of 15 bits is placed in the argument register and the


associative memory is searched for a matching address.
• If the address is found the corresponding 12-bit data is read and sent
to CPU.
• If no match occurs, the main memory is accessed for the word.
• The address-data pair is then transferred to the associative cache
memory.
• If the cache is full, an address-data pair must be displaced to make
room for a pair that is needed and not presently in the cache.
• Displacing pairs in round-robin order constitutes a first-in first-out (FIFO)
replacement policy.
Associative Mapping Example :

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Comparator
128 KB    16 KB        256 B        9 bits     (9 * 64) bits        64
32 GB     32 KB        1 KB         25 bits    (25 * 32) bits       32
128 MB    512 KB       1 KB         17 bits    (17 * 512) bits      512
16 GB     —            4 KB         22 bits    Cannot get           —
64 MB     —            64 KB        10 bits    Cannot get           —
—         512 KB       —            7 bits     —                    —

MM = Physical Address = Block No | Block Offset
For MM = 17 bits: Block No = tag = 9 bits, Block Offset = 8 bits.
Associative Mapping Example :

MM Size = 128 KB, Cache Size = 16 KB, Block Size = 256 B
→ Tag Size = 9 bits, Tag directory = (9 * 64) bits, Comparators = 64

MM = 128 KB → Physical address = 17 bits = Block No | Block Offset
Block Offset = log2(256 B) = 8 bits
Tag size = Block No = 17 - 8 = 9 bits

Lines in cache = Cache size / Block size = 2^14 / 2^8 = 2^6 = 64
Associative Mapping Example :

MM Size = 32 GB, Cache Size = 32 KB, Block Size = 1 KB
→ Tag Size = 25 bits, Tag directory = (25 * 32) bits, Comparators = 32

MM = 32 GB → Physical address = 35 bits = Block No | Block Offset
Block Offset = log2(1 KB) = 10 bits
Tag size = 35 - 10 = 25 bits

Lines in cache = 2^15 / 2^10 = 2^5 = 32
Associative Mapping Example :

MM Size = 128 MB, Cache Size = 512 KB, Block Size = 1 KB
→ Tag Size = 17 bits, Tag directory = (17 * 512) bits, Comparators = 512

Block Offset = log2(1 KB) = 10 bits
MM = 17 + 10 = 27 bits → 2^27 B = 128 MB

Lines in cache = 2^19 / 2^10 = 2^9 = 512
Associative Mapping Example :

MM Size = 16 GB, Block Size = 4 KB, Tag Size = 22 bits
→ Cache Size and Comparator count cannot be obtained

MM = 16 GB → 2^34 B → Physical address = 34 bits
Block Offset = log2(4 KB) = 12 bits
Tag size = 34 - 12 = 22 bits
(The tag tells us nothing about how many lines the cache has, so the cache
size cannot be derived.)
Associative Mapping Example :

MM Size = 64 MB, Block Size = 64 KB, Tag Size = 10 bits
→ Cache Size and Comparator count cannot be obtained

MM = 64 MB → 2^26 B → Physical address = 26 bits
Block Offset = log2(64 KB) = 16 bits
Tag size = 26 - 16 = 10 bits
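The arithmetic used in these associative-mapping rows can be sketched as a helper (assuming byte-addressable memory and power-of-two sizes, as in the examples; the function name is ours):

```python
import math

def associative_params(mm_size, cache_size, block_size):
    """Return (tag_bits, lines) for associative mapping; sizes in bytes."""
    pa_bits = int(math.log2(mm_size))          # physical address width
    offset_bits = int(math.log2(block_size))   # block offset field
    tag_bits = pa_bits - offset_bits           # whole block number is the tag
    lines = cache_size // block_size           # also the comparator count
    return tag_bits, lines

# Row 1: MM = 128 KB, cache = 16 KB, block = 256 B
print(associative_params(128 * 2**10, 16 * 2**10, 256))  # (9, 64)
```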
Direct mapping :
• The CPU address of 15 bits is divided into two fields.
• The nine least significant bits constitute the index field and the remaining six bits
form the tag field.
• The figure shows that main memory needs an address that includes both the tag
and the index.
• The number of bits in the index field is equal to the number of address bits
required to access the cache memory.
• The internal organization of the words in the cache memory is as shown in the figure.
Figure Direct mapping cache organization


• Each word in cache consists of the data word and its associated tag.
• When a new word is first brought into the cache, the tag bits are stored alongside the
data bits.
• When the CPU generates a memory request the index field is used for the address to
access the cache.
• The tag field of the CPU address is compared with the tag in the word read from the
cache.
• If the two tags match, there is a hit and the desired data word is in cache.
• If there is no match, there is a miss and the required word is read from main memory.
• It is then stored in the cache together with the new tag, replacing the previous value.
• The word at address zero is presently stored in the cache (index = 000, tag = 00, data
= 1220).
• Suppose that the CPU now wants to access the word at address 02000.
• The index address is 000, so it is used to access the cache. The two tags are then
compared.
• The cache tag is 00 but the address tag is 02, which does not produce a match.
• Therefore, the main memory is accessed and the data word 5670 is transferred to the
CPU.
• The cache word at index address 000 is then replaced with a tag of 02 and data of
5670.
• The disadvantage of direct mapping is that two words with the same index in their
address but with different tag values cannot reside in cache memory at the same
time.

• This direct-mapping example just described uses a block size of one word.
• The same organization but using a block size of 8 words shown in next figure.
Cache size = 512 words

1 block size = 8 words

No. of blocks = 512 / 8 = 64

• The index field is divided into 2 parts: (1) Block Field = 6-bit field
(2) Word Field = 3-bit field

• The tag field stored within the cache is common to all 8 words of the same block.

• Every time a miss occurs, an entire block of 8 words must be transferred from main
memory to cache memory.

• Although this takes extra time, the hit ratio will most likely improve with a larger block
size because of the sequential nature of computer programs.
Discuss direct cache mapping technique.
Complete missing parameter in below table using direct cache mapping technique, assume memory is byte addressable.

Direct cache Mapping Example :

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Comparator
128 KB    16 KB        256 B        -----      -----                -----
32 GB     32 KB        1 KB         -----      -----                -----
-----     512 KB       1 KB         17 bits    -----                -----
16 GB     -----        4 KB         -----      -----                -----
64 MB     -----        64 KB        -----      -----                -----
-----     512 KB       -----        7 bits     -----                -----

MM = Physical Address = Tag | Line No | Block Offset



Direct cache Mapping Example :

MM Size      Cache Size   Block Size   Tag Size     Tag directory Size   Comparator
128 KB       16 KB        256 B        3 bits       (3 * 64) bits        1
32 GB        32 KB        1 KB         20 bits      (20 * 32) bits       1
64 GB        512 KB       1 KB         17 bits      (17 * 512) bits      1
16 GB        Cannot get   4 KB         Cannot get   Cannot get           1
64 MB        Cannot get   64 KB        Cannot get   Cannot get           1
Cannot get   512 KB       Cannot get   7 bits       Cannot get           1

MM = Physical Address = Tag | Line No | Block Offset
In direct mapping, Tag bits = PA bits - Line bits - Block Offset bits, and only
one comparator (of tag width) is needed. Rows with two unknowns cannot be
completed from the given data.
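The direct-mapping field split can be sketched the same way (byte-addressable, power-of-two sizes assumed; the function name is ours). Note that in direct mapping the line-number bits come out of the tag, so for the same sizes the tag is narrower than the associative-mapping tag:

```python
import math

def direct_params(mm_size, cache_size, block_size):
    """Return (tag_bits, lines) for direct mapping; sizes in bytes."""
    pa_bits = int(math.log2(mm_size))
    offset_bits = int(math.log2(block_size))
    lines = cache_size // block_size
    line_bits = int(math.log2(lines))          # index (line number) field
    tag_bits = pa_bits - line_bits - offset_bits
    return tag_bits, lines

# MM = 128 KB, cache = 16 KB, block = 256 B → tag = 17 - 6 - 8 = 3 bits
print(direct_params(128 * 2**10, 16 * 2**10, 256))  # (3, 64)
```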


Set - Associative mapping :
• A third type of cache organization, called set-associative mapping, in which each word of
cache can store two or more words of memory under the same index address.
• Each data word is stored together with its tag, and the number of tag-data items in one
word of cache is said to form a set.
• An example of one set-associative cache organization, for a set size of two, is shown in the figure.

Figure : Two-way set-associative mapping cache


• Each index address refers to two data words and their associated tags.
• Each tag requires six bits and each data word has 12 bits, so the word length is 2 *
(6 + 12) = 36 bits.
• An index address of nine bits can accommodate 512 words.
• Thus the size of cache memory is 512 × 36. It can accommodate 1024 words of
main memory since each word of cache contains two data words.
• In general, a set-associative cache of set size k will accommodate k words of
main memory in each word of cache.
• The octal numbers listed in the figure are with reference to the main memory
contents.
• The words stored at addresses 01000 and 02000 of main memory are stored in
cache memory at index address 000.
• Similarly, the words at addresses 02777 and 00777 are stored in cache at index
address 777.
• When the CPU generates a memory request, the index value of the address is
used to access the cache.

• The tag field of the CPU address is then compared with both tags in the cache to
determine if a match occurs.

• The comparison logic is done by an associative search of the tags in the set, similar
to an associative memory search: thus the name "set-associative".

• When a miss occurs in a set-associative cache and the set is full, it is necessary to
replace one of the tag-data items with a new value.

• The most common replacement algorithms used are: random replacement, first-in
first-out (FIFO), and least recently used (LRU).
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
128 KB    16 KB        256 B        4 bits     (4 * 64) bits        2-way                 2

MM address = Tag | Set No | Block Offset
A k-way set-associative memory means the number of lines in each set is k
(e.g. 4-way → 4 lines in each set).
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
128 KB    16 KB        256 B        4 bits     (4 * 64) bits        2-way                 2 * 4 bits

MM = PA = 17 bits; 1 set contains 2 lines.
Block Offset = log2(256 B) = 8 bits
Lines = Cache size / Block size = 2^14 / 2^8 = 2^6 = 64
Sets = 2^6 / 2 = 2^5 = 32 → Set No = 5 bits
Tag = 17 - 5 - 8 = 4 bits
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
32 GB     32 KB        1 KB         22 bits    (22 * 32) bits       4-way                 4 * 22 bits

MM = PA = 35 bits; 1 set contains 4 lines.
Block Offset = log2(1 KB) = 10 bits
Lines = 2^15 / 2^10 = 2^5 = 32
Sets = 2^5 / 4 = 2^3 = 8 → Set No = 3 bits
Tag = 35 - 3 - 10 = 22 bits
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
8 MB      512 KB       1 KB         7 bits     (7 * 512) bits       8-way                 8 * 7 bits

1 set contains 8 lines.
Block Offset = log2(1 KB) = 10 bits
Lines = 2^19 / 2^10 = 2^9 = 512
Sets = 2^9 / 8 = 2^6 = 64 → Set No = 6 bits
MM = PA = 7 + 6 + 10 = 23 bits → 2^23 B = 8 MB
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
16 GB     64 MB        4 KB         10 bits    (10 * 4K) bits       4-way                 4 * 10 bits

MM = PA = 34 bits
Block Offset = log2(4 KB) = 12 bits
Set No = 34 - 10 - 12 = 12 bits

Cache size = No. of sets * Lines per set * Size of line
           = 2^12 * 4 * 2^12 B (4 KB)
           = 2^26 B = 64 MB
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
64 MB     x            x            10 bits    x                    4-way                 4 * 10 bits

MM = PA = 26 bits
Address = Tag (10 bits) | Set No | Block Offset
Set – Associative Mapping Example:

MM Size   Cache Size   Block Size   Tag Size   Tag directory Size   Set-Associative way   Comparator
x         512 KB       x            7 bits     x                    8-way                 8 * 7 bits

MM =
Address = Tag (7 bits) | Set No | Block Offset
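The k-way split used in the completed rows above can be sketched as a helper (byte-addressable, power-of-two sizes assumed; the function name is ours):

```python
import math

def set_assoc_params(mm_size, cache_size, block_size, k):
    """Return (tag_bits, sets) for k-way set-associative mapping; sizes in bytes."""
    pa_bits = int(math.log2(mm_size))
    offset_bits = int(math.log2(block_size))
    lines = cache_size // block_size
    sets = lines // k                          # k lines share each set
    set_bits = int(math.log2(sets))
    tag_bits = pa_bits - set_bits - offset_bits
    return tag_bits, sets

# MM = 128 KB, cache = 16 KB, block = 256 B, 2-way → tag = 17 - 5 - 8 = 4 bits
print(set_assoc_params(128 * 2**10, 16 * 2**10, 256, 2))  # (4, 32)
```

With k = 1 this reduces to direct mapping, and with k equal to the number of lines it reduces to fully associative mapping.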
Writing into Cache :

(1) Write Through

(2) Write-Back (Copy-Back)


Writing into Cache :
• 2 Types → (1) Write-Through (2) Write-Back (Copy-Back)

• The simplest and most commonly used procedure is to update main memory with
every memory write operation.
• The cache memory is updated in parallel if it contains the word at the specified
address. This is called the write-through method.
• This method has the advantage that main memory always contains the same data as
the cache.
• This characteristic is important in systems with direct memory access (DMA) transfers.
• It ensures that the data residing in main memory are valid at all times, so that an I/O
device communicating through DMA would receive the most recently updated data.
Writing into Cache :
• 2 Types → (1) Write-Through (2) Write-Back (Copy-Back)

• The second procedure is called the write-back method.

• In this method only the cache location is updated during a write operation.
• The location is then marked by a flag so that later, when the word is removed
from the cache, it is copied into main memory.
• The reason for the write-back method is that during the time a word resides in
the cache, it may be updated several times.
• However, as long as the word remains in the cache, it does not matter whether
the copy in main memory is out of date, since requests for the word are filled
from the cache.
• It is only when the word is displaced from the cache that an accurate copy need
be rewritten into main memory.
Cache Initialization : (Problem of Initialization)
• The cache is initialized when power is turned on or when main memory is
loaded from auxiliary memory.
• After initialization the cache is considered to be empty, but in effect it
contains some non-valid data.
• A valid bit is included with each word in cache, to indicate whether or
not the word contains valid data.
• The cache is initialized by clearing all the valid bits to 0.
• The valid bit of a cache word is set to 1 the first time the word is loaded from
main memory & stays set unless the cache has to be initialized again.
• Valid bit = 1 means valid data.
• Valid bit = 0 means invalid data – so the word is replaced.
LRU Page Replacement Algorithm :

This algorithm stands for "Least Recently Used"; it helps the OS find the
pages that have been used only within a short, recent time frame.
• The page that has not been used for the longest time in main memory will
be selected for replacement.
• This algorithm is easy to implement.
• This algorithm makes use of a counter along with each page.

Advantages of LRU
• It is an efficient technique.
• With this algorithm, it becomes easy to identify the pages that have not
been needed for a long time.
• It helps in full analysis.

Disadvantages of LRU
• It is expensive and has more complexity.
• There is a need for an additional data structure.
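A minimal simulation of the LRU policy described above (a sketch; the reference string in the example is illustrative, not from the slides). The list is kept ordered by recency, so the front is always the least recently used page:

```python
def lru_faults(refs, frames):
    """Count page faults for reference string refs with the given frame count."""
    memory = []                       # front = least recently used
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)       # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict the least recently used page
        memory.append(page)           # most recently used goes to the back
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))
```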
Optimal Page Replacement Algorithm
This algorithm replaces the page that will not be used for the longest time in the
future. A practical implementation of this algorithm is not possible.
• Practical implementation is not possible because we cannot predict in advance which
pages will not be used for the longest time in the future.
• This algorithm leads to the fewest page faults and is thus the best-known algorithm;
it can also be used to measure the performance of other algorithms.

Advantages of OPR
• This algorithm is easy to use.
• This algorithm provides excellent efficiency and is less complex.
• For the best result, the implementation of data structures is very easy.

Disadvantages of OPR
• This algorithm needs future awareness of the program.
• Practical implementation is not possible because the operating system is unable to
track future requests.
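Although OPT cannot be implemented in a real OS, it can be simulated offline when the whole reference string is known, which is how it is used as a yardstick. A minimal sketch (illustrative reference string, not from the slides):

```python
def opt_faults(refs, frames):
    """Count page faults under the Optimal policy (full future known)."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                   # hit
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        # Evict the resident page whose next use is farthest away
        # (infinitely far if it is never referenced again).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(memory, key=next_use)
        memory[memory.index(victim)] = page
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))
```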
Memory Interleaving : Memory Module

• A single memory module causes sequentialization of accesses.

E.g. only one memory access can be performed at a time.
Hence, throughput may drop.

• Memory interleaving is a technique to increase the throughput.

• Here, the memory is divided into a number of independent modules, which answer read
or write requests independently, in parallel.

• How is it done?

• By spreading memory addresses evenly across memory modules.

• By interleaving the address space (memory addresses) such that consecutive words in the
address space are assigned to different memory modules.
Example : 4-way interleaved memory (memory divided into 4 modules)

Linear memory (4-bit address → data):
0000→10    0001→20    0010→30    0011→40
0100→50    0101→60    0110→70    0111→80
1000→90    1001→100   1010→110   1011→120
1100→130   1101→140   1110→150   1111→160

MM : 00      MM : 01      MM : 10      MM : 11
00 → 10      00 → 50      00 → 90      00 → 130
01 → 20      01 → 60      01 → 100     01 → 140
10 → 30      10 → 70      10 → 110     10 → 150
11 → 40      11 → 80      11 → 120     11 → 160

(Here the high-order 2 address bits select the module and the low-order 2 bits
select the word within the module.)
• A memory module is a memory array together with its address & data registers.
• Instead of using two memory buses for simultaneous access, the memory can be
partitioned into a number of modules connected to common memory address and
data buses.

Fig – Multiple module memory organization (each module has its own MAR & MDR)

• The modular system permits one module to initiate a memory access while other
modules are in the process of reading or writing a word, and each module can honor a
memory request independent of the state of the other modules.

• In an interleaved memory, different sets of addresses are assigned to different
memory modules.

Advantage :
System performance is enhanced because read & write activity occurs
simultaneously across the multiple modules.
• There are 2 address formats for interleaving the memory address space :

1. High-order Interleaving :
It uses the high-order bits as the module address & the low-order bits as the word
address within each module.

Address = H (MSB: module) | L (LSB: word)
Module MM : 00 holds words 00→10, 01→20, 10→30, 11→40

2. Low-order Interleaving :
It uses the low-order bits as the module address & the high-order bits as the word
address within each module.
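The two splits can be sketched for a 4-module system with 4-bit addresses (2 module-select bits, 2 word bits), matching the figures above; the function names are ours:

```python
def high_order_split(addr, word_bits=2):
    """High-order interleaving: module = high bits, word = low bits."""
    return addr >> word_bits, addr & ((1 << word_bits) - 1)

def low_order_split(addr, module_bits=2):
    """Low-order interleaving: module = low bits, word = high bits."""
    return addr & ((1 << module_bits) - 1), addr >> module_bits

addr = 0b0111  # address 7
print(high_order_split(addr))  # (module 1, word 3)
print(low_order_split(addr))   # (module 3, word 1)
```

With the low-order split, consecutive addresses 0, 1, 2, 3 land in modules 0, 1, 2, 3, which is what lets sequential accesses proceed in parallel.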
