Memory Organisation in Computer Architecture
Memory is organized as a collection of cells, each of which can be identified by a unique number called its address. Each cell responds to control signals such as “read” and “write”, generated by the CPU when it wants to read from or write to an address. Whenever the CPU executes a program, instructions must be transferred from memory to the CPU, because the program resides in memory. To access an instruction, the CPU generates a memory request.
Memory Request:
A memory request contains the address along with the control signals. For example, when data is pushed onto a stack, each block consumes memory (RAM), and the number of available memory cells is determined by the capacity of the memory chip.
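The request path can be sketched with a toy model. The `Memory` class and the signal names below are invented for illustration, not any real bus protocol:

```python
# Toy model of a memory request: an address plus a control signal
# ("read" or "write"). All names here are illustrative.

class Memory:
    def __init__(self, size):
        self.cells = [0] * size  # each cell is identified by its address

    def request(self, address, control, data=None):
        """Handle a memory request carrying an address and a control signal."""
        if control == "read":
            return self.cells[address]
        elif control == "write":
            self.cells[address] = data
        else:
            raise ValueError("unknown control signal")

mem = Memory(16)
mem.request(5, "write", 42)    # CPU writes value 42 to address 5
print(mem.request(5, "read"))  # CPU reads it back: 42
```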
What is Computer Memory?
Memory is required to save data and instructions. It is divided into cells, which occupy the storage space present in the computer, and every cell has its own unique location/address. Memory is essential for a computer, as it is what makes the machine somewhat similar to a human brain.
The human brain keeps memories in different ways, such as short-term memory, long-term memory, and implicit memory. Likewise, computers have different types of memory, or different ways of storing data: cache memory, primary memory/main memory, and secondary memory.
Types of Computer Memory
There are three types of memory. Cache memory is a high-speed memory that helps speed up the CPU; it reduces access time but is very expensive. The next type is main memory, or primary memory, which is used to hold the data currently in use. It consists of RAM and ROM: RAM is volatile, while ROM is non-volatile. The third type is secondary memory, which is non-volatile and is used to store data permanently in a computer.
Types of Memory
Random Access Memory (RAM)
Random Access Memory (RAM) is a type of computer memory that stores data temporarily while a computer is running. It is called “random access” because the computer can access any part of the memory directly and quickly. RAM is often compared to memory in the human brain: just as memory lets people remember things and recall their past, RAM lets a computer hold the information it is actively working with.
RAM helps your computer run programs and process information quickly.
When you turn off your computer, the data in RAM is lost, unlike data on
your hard drive which is stored permanently.
What is RAM (Random Access Memory)?
It is one of the parts of main memory, also commonly known as read-write memory. RAM is present on the motherboard, and the computer’s data is temporarily stored in it. As the name says, RAM supports both read and write operations. RAM is a volatile memory: its contents are retained only as long as the computer is in the ON state, and as soon as the computer turns OFF, the memory is erased.
History of RAM
In 1947, the Williams tube marked the debut of the first type of RAM. It stored data as electrically charged dots on the face of a cathode-ray tube. Magnetic-core memory, the second type of RAM, was also developed around 1947. It was made of small magnetic rings (cores), each threaded with wires; a ring stored one bit of data, which could be accessed at any time.
RAM as solid-state memory was invented by Robert Dennard in 1968 at the IBM Thomas J. Watson Research Center. It is generally known as dynamic random access memory (DRAM) and uses a transistor and capacitor pair to store each bit of data. A constant power supply, with periodic refreshing, is necessary to maintain the stored state.
In October 1970, Intel launched its first DRAM, the Intel 1103. In 1993, Samsung launched the KM48SL2000 synchronous DRAM (SDRAM). Similarly, in 1996, DDR SDRAM became commercially available. In 1999, RDRAM became available for computers. In 2003, DDR2 SDRAM went on sale, followed by DDR3 SDRAM in June 2007 and DDR4 in September 2014.
How Does RAM Work?
RAM is constructed of small transistors and capacitors, much like CPUs and other computer components, which store an electric charge corresponding to data bits. The charge must be regularly maintained with electrical power; if not, the capacitors lose their charge and the data is removed from RAM.
Saving any modified data to the hard disk or SSD is crucial because data can be lost so rapidly when power is gone. This also explains why so many programs include autosave options or cache unfinished work in case of an unplanned shutdown. In some situations, forensic experts can retrieve data from RAM; however, most of the time, after you finish with a file or your computer shuts down, the information in RAM is gone.
Types of RAM
RAM is mainly of two types:
SRAM (Static RAM)
DRAM (Dynamic RAM)
What is SRAM?
The SRAM memories consist of circuits capable of retaining the stored
information as long as the power is applied. That means this type of
memory requires constant power. SRAM memories are used to build Cache
Memory.
SRAM Memory Cell
Static memories (SRAM) consist of circuits capable of retaining their state as long as power is on; thus this type of memory is called volatile memory. The figure below shows the cell diagram of an SRAM.
A latch is formed by two inverters connected as shown in the figure. Two
transistors T1 and T2 are used for connecting the latch with two-bit lines.
The purpose of these transistors is to act as switches that can be opened
or closed under the control of the word line, which is controlled by the
address decoder. When the word line is at 0-level, the transistors are turned off and the latch retains its information. SRAM does not require refresh time. For example, the cell is in state 1 if the logic value at point A
is 1 and at point B is 0. This state is retained as long as the word line is not activated.
For the Read operation, the word line is activated by the address input
to the address decoder. The activated word line closes both the transistors
(switches) T1 and T2. Then the bit values at points A and B can transmit
to their respective bit lines. The sense/write circuit at the end of the bit
lines sends the output to the processor.
For the Write operation, the address provided to the decoder activates
the word line to close both switches. Then the bit value that is to be
written into the cell is provided through the sense/write circuit and the
signals in bit lines are then stored in the cell.
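The read/write behaviour described above can be sketched as a toy model. The class and names below are invented for illustration; a real SRAM cell is an analog latch circuit, not code:

```python
# Illustrative model of an SRAM cell: a latch holds complementary values
# at points A and B; the word line acts as the switch for read/write.

class SRAMCell:
    def __init__(self):
        self.a = 0          # logic value at point A (point B is its complement)

    def read(self, word_line):
        # With the word line active, transistors T1/T2 connect the latch
        # to the bit lines; otherwise the cell just retains its state.
        if word_line:
            return self.a, 1 - self.a   # values seen on the two bit lines
        return None

    def write(self, word_line, bit):
        if word_line:
            self.a = bit    # sense/write circuit forces the latch state

cell = SRAMCell()
cell.write(word_line=1, bit=1)
print(cell.read(word_line=1))  # (1, 0): state 1 means A=1, B=0
```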
What is DRAM?
DRAM stores binary information in the form of electric charges on capacitors. The stored charge tends to leak away over a period of time, and thus the capacitors must be periodically recharged to retain their data; in other words, DRAM requires refresh time. Main memory is generally made up of DRAM chips.
DRAM Memory Cell
Though SRAM is very fast, it is expensive because each of its cells requires several transistors. DRAM is relatively less expensive due to the use of one transistor and one capacitor in each cell, as shown in the figure below, where C is the capacitor and T is the transistor. Information is stored in a DRAM cell as a charge on the capacitor, and this charge needs to be periodically refreshed.
For storing information in this cell, transistor T is turned on and an
appropriate voltage is applied to the bit line. This causes a known amount
of charge to be stored in the capacitor. After the transistor is turned off,
due to the property of the capacitor, it starts to discharge. Hence, the
information stored in the cell can be read correctly only if it is read before
the charge on the capacitors drops below some threshold value.
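The charge-and-refresh cycle can be sketched with a toy model. The leak rate and threshold below are made-up numbers for demonstration, not properties of any real chip:

```python
# Illustrative DRAM cell: a capacitor stores charge that leaks over time,
# so the value must be refreshed before it drops below a threshold.

class DRAMCell:
    THRESHOLD = 0.5          # invented read threshold

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, steps):
        for _ in range(steps):
            self.charge *= 0.9   # capacitor gradually discharges

    def read(self):
        # Correct only while the charge is still above the threshold.
        return 1 if self.charge > self.THRESHOLD else 0

    def refresh(self):
        self.write(self.read())  # periodic recharge restores the full level

cell = DRAMCell()
cell.write(1)
cell.leak(3)        # charge is now about 0.729, still reads as 1
cell.refresh()      # charge restored to 1.0
print(cell.read())  # 1
```

If the cell leaked for too long before a refresh, the charge would fall below the threshold and the stored 1 would be lost, which is exactly why DRAM needs refresh circuitry.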
Understanding the different types of RAM is crucial for grasping how
memory works in computers. RAM comes in various forms, including SRAM
and DRAM, each serving different purposes within a computer system.
Types of DRAM
There are mainly five types of DRAM:
Asynchronous DRAM (ADRAM): The DRAM described above is the
asynchronous type of DRAM. The timing of the memory device is
controlled asynchronously. A specialized memory controller circuit
generates the necessary control signals to control the timing.
The CPU must take into account the delay in the response of the
memory.
Synchronous DRAM (SDRAM): These RAM chips’ access speed is
directly synchronized with the CPU’s clock. For this, the memory
chips remain ready for operation when the CPU expects them to be
ready. These memories operate at the CPU-memory bus without
imposing wait states. SDRAM is commercially available as modules
incorporating multiple SDRAM chips and forming the required
capacity for the modules.
Double-Data-Rate SDRAM (DDR SDRAM): This faster version
of SDRAM performs its operations on both edges of the clock signal;
whereas a standard SDRAM performs its operations on the rising
edge of the clock signal. Since they transfer data on both edges of
the clock, the data transfer rate is doubled. To access the data at a
high rate, the memory cells are organized into two groups. Each
group is accessed separately.
Rambus DRAM (RDRAM): The RDRAM provides a very high data
transfer rate over a narrow CPU-memory bus. It uses various
speedup mechanisms, like synchronous memory interface, caching
inside the DRAM chips and very fast signal timing. The Rambus data
bus width is 8 or 9 bits.
Cache DRAM (CDRAM): This memory is a special type of DRAM
memory with an on-chip cache memory (SRAM) that acts as a high-
speed buffer for the main DRAM.
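The doubling described for DDR SDRAM above can be illustrated numerically. The clock frequency and bus width below are example figures chosen for the arithmetic, not values from any specific standard:

```python
# Back-of-the-envelope sketch: transferring on both clock edges doubles
# the data rate compared with a single-edge (SDR) transfer.

clock_mhz = 100          # example bus clock
bus_width_bytes = 8      # example 64-bit memory bus

sdr_rate = clock_mhz * 1_000_000 * bus_width_bytes       # one transfer per cycle
ddr_rate = clock_mhz * 1_000_000 * bus_width_bytes * 2   # transfers on both edges

print(sdr_rate // 10**6, "MB/s")   # 800 MB/s
print(ddr_rate // 10**6, "MB/s")   # 1600 MB/s
```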
Difference Between SRAM and DRAM
The following list summarizes some of the differences between SRAM and DRAM.
1. SRAM stands for Static Random Access Memory; DRAM stands for Dynamic Random Access Memory.
2. SRAM uses a flip-flop circuit to store data; DRAM uses a capacitor and a transistor.
3. SRAM has a lower access time, so it is faster; DRAM has a higher access time, so it is slower than SRAM.
4. SRAM has a long data life; DRAM has a short data life.
5. SRAM has a storage capacity of 1 MB to 16 MB in most cases; DRAM, which is often found in tablets and smartphones, has a capacity of 1 GB to 2 GB.
6. SRAM is costlier than DRAM; DRAM costs less compared to SRAM.
7. SRAM provides a faster speed of data read/write; DRAM provides a slower speed of data read/write.
8. SRAM requires a constant power supply, which means it consumes more power; DRAM offers reduced power consumption because the information is stored in a capacitor.
9. SRAM is a good choice for applications that may be exposed to extreme temperatures; DRAM is not suitable for such applications.
10. Due to its complex internal circuitry, SRAM offers less storage for the same physical chip size; due to the small internal circuitry of its one-bit memory cell, DRAM offers a large storage capacity.
11. SRAM has low packaging density; DRAM has high packaging density.
12. SRAM does not require refresh time; DRAM requires periodic refresh time.
13. SRAMs are used as cache memory in computers and other computing devices; DRAMs are used as main memory in computer systems.
Read Only Memory (ROM)
In a computer system, memory is an essential part used to store information for instant or permanent use. Based on how it works, memory is divided into two types: volatile and non-volatile. Before discussing ROM, we will first clarify what volatile and non-volatile memory are. Non-volatile memory is a type of computer memory that retains stored information even when power is removed. It is less expensive than volatile memory and has a large storage capacity. ROM (read-only memory) and flash memory are examples of non-volatile memory. Volatile memory, by contrast, is temporary: data is stored only while the system is powered on, and once the power is turned off, the data is deleted automatically. RAM is an example of volatile memory.
What is Read-Only Memory (ROM)?
ROM stands for Read-Only Memory. It is a non-volatile memory used to store important information needed to operate the system. As the name suggests, we can only read the programs and data stored on it. It is also a primary memory unit of the computer system. It contains electronic fuses that can be programmed with specific information, which is stored in the ROM in binary format. It is also known as permanent memory.
Block Diagram of ROM
As shown in the diagram below, a ROM has k input lines and n output lines. The k input lines carry the address from which we wish to retrieve the ROM contents. Since each of the k input lines can have a value of 0 or 1, a total of 2^k addresses can be referred to by these input lines, and each of these addresses contains n bits of information that are output from the ROM. A ROM of this type is designated as a 2^k x n ROM.
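A 2^k x n ROM behaves like a fixed lookup table: k address lines select one of 2^k words, each n bits wide. The sketch below uses an invented 8 x 4 example with arbitrary contents:

```python
# Sketch of a 2^k x n ROM as a fixed lookup table.
# k = 3 address lines, n = 4-bit words -> an 8 x 4 ROM.

k, n = 3, 4
contents = [0b1010, 0b0011, 0b1111, 0b0000,   # arbitrary example values
            0b0110, 0b1001, 0b0101, 0b1100]

def rom_read(address):
    """Return the n-bit word selected by the k-bit address."""
    assert 0 <= address < 2**k
    return contents[address] & (2**n - 1)     # value on the n output lines

print(format(rom_read(2), "04b"))  # 1111
```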
Internal Structure of ROM
The internal structure of ROM has two basic components:
Decoder
OR gates
A circuit known as a decoder converts an encoded form, such as binary-coded decimal (BCD), into decimal form; the output is thus the decimal equivalent of the binary input. The decoder’s outputs drive the OR gates of the ROM. Let us use a 64 x 4 ROM as an example. This read-only memory has 64 words, each 4 bits long, so there are four output lines. Since 2^6 = 64, six input lines are enough to specify 64 addresses or minterms, each selecting one of the 64 words available on the output lines. Each address entered selects a unique word.
Working of ROM
A small, long-lasting battery within the computer powers the ROM, which
is made up of two primary components: the OR logic gates and the
decoder. In ROM, the decoder receives binary input and produces decimal
output. The decoder’s decimal output serves as the input for ROM’s OR
gates. ROM chips have a grid of columns and rows that may be switched
on and off. If they are turned on, the value is 1, and the lines are
connected by a diode. When the value is 0, the lines are not connected.
Each element in the grid represents one storage element on the memory chip. The diodes allow current to flow in only one direction, with a threshold known as the forward breakover voltage, which determines how much voltage is required before the diode conducts. Silicon-based circuitry typically has a forward breakover voltage of about 0.6 V. To read a cell, the ROM chip sends a charge above the forward breakover voltage to the cell’s column while grounding its row. If a diode is present in the cell, current flows and the cell is “on”, representing a value of 1.
Features of ROM
ROM is a non-volatile memory.
Information stored in ROM is permanent.
Information and programs stored on it can only be read; they cannot be modified.
Information and programs are stored on ROM in binary format.
It is used in the start-up process of the computer.
Types of Read-Only Memory (ROM)
Now we will discuss the types of ROM one by one:
1. MROM (Masked read-only memory): ROM is as old as semiconductor technology itself, and MROM was the very first ROM. It consists of a grid of word lines and bit lines joined together by transistor switches. In this type of ROM, the data is physically encoded in the circuit and can only be programmed during fabrication. It was not very expensive.
2. PROM (Programmable read-only memory): PROM is a form of digital memory in which each bit is locked by a fuse or anti-fuse. The data stored in it is permanent and cannot be changed or erased. It is used for low-level programs such as firmware or microcode.
3. EPROM (Erasable programmable read-only memory): EPROM, also called EROM, is a type of PROM that can be reprogrammed. The data stored in an EPROM can be erased with ultraviolet light and then reprogrammed, though only a limited number of times. Before the era of EEPROM and flash memory, EPROM was used in microcontrollers.
4. EEPROM (Electrically erasable programmable read-only memory): As its name suggests, it can be programmed and erased electrically. The data and programs in this ROM can be erased and reprogrammed about ten thousand times, and erasing and programming take roughly 4 ms to 10 ms. It is used in microcontrollers and remote keyless systems.
Advantages of ROM
It is cheaper than RAM and it is non-volatile memory.
It is more reliable as compared to RAM.
Its circuit is simple as compared to RAM.
It doesn’t need refreshing time because it is static.
It is easy to test.
Disadvantages of ROM
It is a read-only memory, so it cannot be modified.
It is slower as compared to RAM.
Difference Between RAM and ROM
1. RAM stands for Random Access Memory; ROM stands for Read Only Memory.
2. You can modify, edit, or erase data in RAM; data in ROM cannot be modified or erased, it can only be read.
3. RAM is a volatile memory that stores data only as long as power is supplied; ROM is a non-volatile memory that retains data even after the power is turned off.
4. RAM is faster than ROM; ROM is slower than RAM.
5. RAM is costly compared to ROM; ROM is cheap compared to RAM.
6. A RAM chip can store only a few gigabytes (GB) of data; a ROM chip can store multiple megabytes (MB) of data.
7. The CPU can easily access data stored in RAM; it cannot easily access data stored in ROM.
8. RAM is used for the temporary storage of data currently being processed by the CPU; ROM is used to store firmware, BIOS, and other data that needs to be retained.
Introduction of Secondary Memory (Auxiliary Memory)
Primary memory has limited storage capacity and is volatile. Secondary
memory overcomes this limitation by providing permanent storage of data
and in bulk quantity. Secondary memory is also termed external memory
and refers to the various storage media on which a computer can store
data and programs. The Secondary storage media can be fixed or
removable. Fixed Storage media is an internal storage medium like a hard
disk that is fixed inside the computer. A storage medium that is portable
and can be taken outside the computer is termed removable storage
media.
Secondary memory is a type of computer memory that is used for long-
term storage of data and programs. It is also known as auxiliary memory
or external memory, and is distinct from primary memory, which is used
for short-term storage of data and instructions that are currently being
processed by the CPU.
Secondary memory devices are typically larger and slower than primary
memory, but offer a much larger storage capacity. This makes them ideal
for storing large files such as documents, images, videos, and other
multimedia content.
Some examples of secondary memory devices include hard disk drives
(HDDs), solid-state drives (SSDs), magnetic tapes, optical discs such as
CDs and DVDs, and flash memory such as USB drives and memory cards.
Each of these devices uses different technologies to store data, but they
all share the common feature of being non-volatile, meaning that they can
store data even when the computer is turned off.
Secondary memory devices are accessed by the CPU via input/output (I/O)
operations, which involve transferring data between the device and
primary memory. The speed of these operations is affected by factors
such as the type of device, the size of the file being accessed, and the
type of connection between the device and the computer.
Overall, secondary memory is an essential component of modern
computing systems and plays a critical role in the storage and retrieval of
data and programs.
Difference between Primary Memory and Secondary Memory:
1. Primary memory is directly accessed by the Central Processing Unit (CPU). Secondary memory is not accessed directly by the CPU; instead, data accessed from secondary memory is first loaded into Random Access Memory (RAM) and then sent to the processing unit.
2. RAM provides a much faster access speed to data than secondary memory; typically primary memory is six times faster than secondary memory. By loading software programs and required files into primary memory (RAM), computers can process data much more quickly.
3. Primary memory, i.e. Random Access Memory (RAM), is volatile and is completely erased when a computer is shut down. Secondary memory is non-volatile, meaning it can hold on to its data with or without an electrical power supply.
Uses of Secondary Media:
Permanent Storage: Primary Memory (RAM) is volatile, i.e. it loses
all information when the electricity is turned off, so in order to
secure the data permanently in the device, Secondary storage
devices are needed.
Portability: Storage mediums, like CDs, flash drives can be used to
transfer the data from one device to another.
Fixed and Removable Storage
Fixed Storage
Fixed storage is an internal media device that is used by a computer
system to store data, and usually, these are referred to as the Fixed disk
drives or Hard Drives.
Fixed storage devices are not literally fixed; they can, of course, be removed from the system for repair work, maintenance, or an upgrade. In general, though, this cannot be done without a proper toolkit to open up the computer system and provide physical access, and it needs to be done by an engineer.
Technically, almost all of the data being processed on a computer system is stored on some type of built-in fixed storage device.
Types of fixed storage:
Internal flash memory (rare)
SSD (solid-state disk) units
Hard disk drives (HDD)
Removable Storage
Removable storage is an external media device that is used by a
computer system to store data, and usually, these are referred to as the
Removable Disks drives or the External Drives.
Removable storage is any type of storage device that can be
removed/ejected from a computer system while the system is running.
Examples of external devices include CDs, DVDs, and Blu-ray disk drives,
as well as diskettes and USB drives. Removable storage makes it easier
for a user to transfer data from one computer system to another.
In terms of storage, the main benefit of removable disks is that they can provide the fast data transfer rates associated with storage area networks (SANs).
Types of Removable Storage:
Optical discs (CDs, DVDs, Blu-ray discs)
Memory cards
Floppy disks
Magnetic tapes
Disk packs
Paper storage (punched tapes, punched cards)
Secondary Storage Media
There are the following main types of storage media:
1. Magnetic storage media:
Magnetic media are coated with a magnetic layer that is magnetized in clockwise or anticlockwise directions. When the disk moves, the read/write head interprets the data stored at a specific location as binary 1s and 0s.
Examples: hard disks, floppy disks, and magnetic tapes.
Floppy Disk: A floppy disk is a flexible disk with a magnetic coating, packaged inside a protective plastic envelope. Floppies are among the oldest types of portable storage devices. They could store up to 1.44 MB of data but are no longer used because of their very small storage capacity.
Hard disk: A hard disk consists of one or more circular disks called
platters which are mounted on a common spindle. Each surface of a
platter is coated with magnetic material. Both surfaces of each disk
are capable of storing data except the top and bottom disks where
only the inner surface is used. The information is recorded on the
surface of the rotating disk by magnetic read/write heads. These
heads are joined to a common arm known as the access arm.
Hard disk drive components:
Most basic types of hard drives contain a number of disk platters mounted on a spindle inside a sealed chamber. The chamber also includes read/write heads and motors. Data is stored on each of these disks in an arrangement of concentric circles called tracks, which are divided further into sectors. Though internal hard drives are not very portable and are used inside a computer system, external hard disks can be used as portable storage. Hard disks can store up to several terabytes of data.
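The platters/tracks/sectors layout implies some simple capacity arithmetic, sketched below. All figures are invented for illustration; real drives have far more tracks and use zoned recording:

```python
# Rough capacity arithmetic for a disk organized into platters,
# tracks, and sectors. All numbers are invented example figures.

platters = 4
surfaces = platters * 2          # both sides of each platter used
tracks_per_surface = 1000
sectors_per_track = 64
bytes_per_sector = 512

capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(capacity // 2**20, "MiB")  # 250 MiB
```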
2. Optical storage media
In optical storage media, information is stored and read using a laser beam. The data is stored as a spiral pattern of pits and lands denoting binary 0 and binary 1.
Examples: CDs and DVDs
Compact Disk: A Compact Disc drive (CDD) is a device that a computer uses to read data that is encoded digitally on a compact disc (CD). A CD drive can be installed inside a computer’s compartment, provided with an opening for easier disc tray access, or it can be connected as a peripheral device to one of the ports provided on the computer system. A compact disk or CD can store approximately 650 to 700 megabytes of data, and a computer needs a CD drive to read CDs. There are three types of CDs:
CD-ROM: It stands for Compact Disk - Read Only Memory. Data is written on these disks at the time of manufacture. This data cannot be changed once it is written by the manufacturer; it can only be read. CD-ROMs are used for text, audio, and video distribution, such as games, encyclopedias, and application software.
CD-R: It stands for Compact Disk - Recordable. Data can be recorded on these disks, but only once. Once the data is written to a CD-R, it cannot be erased or modified.
CD-RW: It stands for Compact Disk - Rewritable. It can be read or written multiple times, but a CD-RW drive needs to be installed on your computer before editing a CD-RW.
DVD:
It stands for Digital Versatile Disk or Digital Video Disk. It looks just
like a CD and uses similar technology as that of the CDs but allows
tracks to be spaced closely enough to store data that is more than
six times the CD’s capacity. It is a significant advancement in
portable storage technology. A DVD holds 4.7 GB to 17 GB of data.
Blu-ray Disk:
This is the latest optical storage medium for high-definition audio and video. It is similar to a CD or DVD but can store up to 27 GB of data on a single-layer disc and up to 54 GB on a dual-layer disc. While CDs and DVDs use a red laser beam, the Blu-ray disc uses a blue laser to read/write data on the disk.
3. Solid State Memories
Solid-state storage devices are based on electronic circuits with no moving parts such as reels of tape or spinning discs. They use special memories called flash memory to store data. Flash memory is used mainly in solid-state drives, digital cameras, pen drives, and USB flash drives.
Pen Drives:
Pen drives, also called thumb drives or flash drives, are a more recently emerged portable storage medium. A pen drive is an EEPROM-based flash memory that can be repeatedly erased and written using electric signals, accompanied by a USB connector that enables it to connect to a computer. Pen drives have a capacity smaller than a hard disk but greater than a CD, and offer the following advantages:
Transfer Files:
A pen drive plugged into a USB port of the system can be used to transfer files, documents, and photos to a PC and vice versa. Similarly, selected files can be transferred between a pen drive and any type of workstation.
Portability:
The lightweight nature and smaller size of a pen drive make it
possible to carry it from place to place which makes data
transportation an easier task.
Backup Storage:
Most pen drives now come with password encryption, so important information related to family, medical records, and photos can be stored on them as a backup.
Transport Data:
Professionals/Students can now easily transport large data files and
video/audio lectures on a pen drive and gain access to them from
anywhere. Independent PC technicians can store work-related utility
tools, various programs, and files on a high-speed 64 GB pen drive
and move from one site to another.
Advantages:
1. Large storage capacity: Secondary memory devices typically have a
much larger storage capacity than primary memory, allowing users
to store large amounts of data and programs.
2. Non-volatile storage: Data stored on secondary memory devices is
typically non-volatile, meaning it can be retained even when the
computer is turned off.
3. Portability: Many secondary memory devices are portable, making it
easy to transfer data between computers or devices.
4. Cost-effective: Secondary memory devices are generally more cost-
effective than primary memory.
Disadvantages:
1. Slower access times: Accessing data from secondary memory
devices typically takes longer than accessing data from primary
memory.
2. Mechanical failures: Some types of secondary memory devices, such
as hard disk drives, are prone to mechanical failures that can result
in data loss.
3. Limited lifespan: Secondary memory devices have a limited lifespan,
and can only withstand a certain number of read and write cycles
before they fail.
4. Data corruption: Data stored on secondary memory devices can
become corrupted due to factors such as electromagnetic
interference, viruses, or physical damage.
Overall, secondary memory is an essential component of modern computing systems, but it also has its limitations and drawbacks. The choice of a particular secondary memory device depends on the user’s specific needs and requirements.
Memory Hierarchy Design and its Characteristics
In the Computer System Design, Memory Hierarchy is an enhancement to
organize the memory such that it can minimize the access time. The
Memory Hierarchy was developed based on a program behavior known as
locality of references. The figure below clearly demonstrates the different
levels of the memory hierarchy.
Why Memory Hierarchy is Required in the System?
A memory hierarchy is one of the most important aspects of computer memory, as it helps optimize use of the memory available in the computer. There are multiple levels of memory, each with a different size, cost, and speed. Some types of memory, like cache and main memory, are faster than others, but they have smaller capacity and are also costlier, whereas other memories offer higher storage capacity but are slower. Data access speed likewise differs across the types: some offer faster access, others slower access.
Types of Memory Hierarchy
This Memory Hierarchy Design is divided into 2 main types:
External Memory or Secondary Memory: Comprising magnetic disk, optical disk, and magnetic tape, i.e. peripheral storage devices which are accessible by the processor via an I/O module.
Internal Memory or Primary Memory: Comprising main memory, cache memory, and CPU registers. This is directly accessible by the processor.
Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU. They are
used to store the most frequently used data and instructions. Registers
have the fastest access time and the smallest storage capacity, typically
ranging from 16 to 64 bits.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It
stores frequently used data and instructions that have been recently
accessed from the main memory. Cache memory is designed to minimize
the time it takes to access data by providing the CPU with quick access to
frequently used data.
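The benefit of keeping recently used data close to the CPU can be sketched with a toy cache. The sizes and the FIFO eviction policy below are arbitrary choices for illustration, not any real CPU's design:

```python
# Minimal sketch of why a cache helps: recently used addresses are kept
# in a small fast store in front of slower main memory.

CACHE_SIZE = 4

main_memory = {addr: addr * 10 for addr in range(16)}  # stand-in for slow RAM
cache = {}                                             # small fast store

def load(address):
    if address in cache:
        return cache[address], "hit"      # fast path: served from cache
    value = main_memory[address]          # slow main-memory access
    if len(cache) >= CACHE_SIZE:
        cache.pop(next(iter(cache)))      # evict the oldest entry (FIFO)
    cache[address] = value
    return value, "miss"

print(load(3))  # (30, 'miss') -- first access goes to main memory
print(load(3))  # (30, 'hit')  -- second access is served from the cache
```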
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer system. It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions that are currently in use by the CPU.
Types of Main Memory
Static RAM: Static RAM stores the binary information in flip-flops,
and the information remains valid as long as power is supplied. It has
a faster access time and is used in implementing cache memory.
Dynamic RAM: It stores the binary information as a charge on the
capacitor. It requires refreshing circuitry to maintain the charge on
the capacitors after a few milliseconds. It contains more memory
cells per unit area as compared to SRAM.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives
(SSD) , is a non-volatile memory unit that has a larger storage capacity
than main memory. It is used to store data and instructions that are not
currently in use by the CPU. Secondary storage has the slowest access
time and is typically the least expensive type of memory in the memory
hierarchy.
5. Magnetic Disk
Magnetic disks are circular plates fabricated from metal or plastic and
coated with a magnetizable material. They operate at high speed inside
the computer and are frequently used for storage.
6. Magnetic Tape
Magnetic tape is a magnetic recording medium consisting of a thin
plastic film coated with a magnetizable material. It is generally used
for the backup of data. The access time of magnetic tape is slower,
since the tape must be wound to the required position before data can
be accessed.
Characteristics of Memory Hierarchy
Capacity: It is the global volume of information the memory can
store. As we move from top to bottom in the Hierarchy, the capacity
increases.
Access Time: It is the time interval between the read/write request
and the availability of the data. As we move from top to bottom in
the Hierarchy, the access time increases.
Performance: In earlier computer systems designed without a memory
hierarchy, the speed gap between CPU registers and main memory grew
because of the large difference in access time, which lowered overall
system performance. The memory hierarchy design was introduced as an
enhancement, and it increases the performance of the system. One of
the most significant ways to increase system performance is minimizing
how far down the memory hierarchy one has to go to manipulate data.
Cost Per Bit: As we move from bottom to top in the Hierarchy, the
cost per bit increases i.e. Internal Memory is costlier than External
Memory.
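The access-time and cost trade-off described above can be illustrated numerically. The sketch below (a hedged example; the hit ratio and timings are assumed values, not figures from the text) computes the average access time of a simple two-level hierarchy:

```python
# Illustrative sketch: average access time in a two-level hierarchy
# (cache + main memory). The hit ratio and access times below are
# assumed example values.

def average_access_time(hit_ratio, cache_time_ns, memory_time_ns):
    """Average time = accesses served by the fast level plus
    accesses that must go down to the slower level."""
    return hit_ratio * cache_time_ns + (1 - hit_ratio) * memory_time_ns

# Example: 95% of accesses hit a 2 ns cache; misses fall through
# to a 100 ns main memory.
print(average_access_time(0.95, 2, 100))  # 6.9 ns on average
```

Even with a slow lower level, a high hit ratio at the fast level keeps the average access time close to the fast level's time, which is the point of the hierarchy.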
Advantages of Memory Hierarchy
It helps reduce memory access time and manage the memory in a
better way.
It helps distribute data appropriately across the storage levels of
the computer system.
It saves the user's cost and time.
Associative Memory
Associative memory is also known as content-addressable memory
(CAM), associative storage, or an associative array. It is a special type
of memory that is optimized for performing searches through data, as
opposed to providing simple direct access to the data based on its
address.
It can store a set of patterns as memories. When the associative memory
is presented with a key pattern, it responds by producing whichever
stored pattern most closely resembles or relates to the key pattern.
This can be viewed as data correlation: the input data is correlated
with the data stored in the CAM.
It comes in two types:
1. Auto-associative memory network: An auto-associative memory
network, also known as a recurrent neural network, is a type of
associative memory that is used to recall a pattern from partial or
degraded inputs. In an auto-associative network, the output of the
network is fed back into the input, allowing the network to learn and
remember the patterns it has been trained on. This type of memory
network is commonly used in applications such as speech and
image recognition, where the input data may be incomplete or
noisy.
2. Hetero-associative memory network: A hetero-associative memory
network is a type of associative memory that is used to associate
one set of patterns with another. In a hetero-associative network,
the input pattern is associated with a different output pattern,
allowing the network to learn and remember the associations
between the two sets of patterns. This type of memory network is
commonly used in applications such as data compression and data
retrieval.
Associative memory consists of conventional semiconductor memory
(usually RAM) with added comparison circuitry that enables a search
operation to complete in a single clock cycle. It is effectively a
hardware search engine: a special type of computer memory used in
certain very high-speed searching applications.
Associative memory, or content-addressable memory, allows data to be
accessed based on content rather than location. It’s particularly useful in
high-speed searching applications.
How Does Associative Memory Work?
In conventional memory, data is stored in specific locations, called
addresses, and retrieved by referencing those addresses. In associative
memory, data is stored together with additional tags or metadata that
describe its content. When a search is performed, the associative memory
compares the search query with the tags of all stored data, and retrieves
the data that matches the query.
Associative memory is designed to quickly find matching data, even when
the search query is incomplete or imprecise. This is achieved by using
parallel processing techniques, where multiple search queries can be
performed simultaneously. The search is also performed in a single step,
as opposed to conventional memory where multiple steps are required to
locate the data.
Hardware Organization of Associative Memory
Block Diagram of associative memory
Argument Register: It contains the word to be searched. It is n
bits wide.
Match Register: It has m bits, one bit corresponding to each word in
the memory array. After the matching process, the bits corresponding
to matching words in the match register are set to '1'.
Key Register: It provides a mask for choosing a particular field/key
in the argument register. It specifies which part of the argument word
needs to be compared with the words in memory.
Associative Memory Array: It contains the words that are to be
compared with the argument word in parallel. It contains m words
with n bits per word.
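The interaction of these registers can be modeled with a short sketch. The word values and key mask below are illustrative assumptions, not part of the original text; the masked comparison against every word at once is what the hardware performs in parallel:

```python
# Sketch of an associative (content-addressable) memory search.
# The argument register holds the search word, the key register
# masks which bits are compared, and the match register records
# which stored words matched.

def cam_search(words, argument, key_mask):
    """Return the match register: bit i is 1 if word i matches the
    argument in every bit position selected by the key mask."""
    match_register = []
    for word in words:
        # Compare only the masked bits, as the key register specifies.
        match_register.append(1 if (word & key_mask) == (argument & key_mask) else 0)
    return match_register

memory_array = [0b1010, 0b1100, 0b1011, 0b0010]
# Key mask 0b1100 selects only the two high bits for comparison.
print(cam_search(memory_array, argument=0b1000, key_mask=0b1100))  # [1, 0, 1, 0]
```

In real CAM hardware the loop does not exist: every word has its own comparison logic, so all m comparisons happen in a single clock cycle.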
Applications of Associative Memory
1. Address translation: Associative memory is used for fast memory
allocation and address translation, for example in the Translation
Lookaside Buffer (TLB) used with paging.
2. Networking: Associative memory is used in network routing tables to
quickly find the path to a destination network based on its address.
3. Image processing: Associative memory is used in image processing
applications to search for specific features or patterns within an
image.
4. Artificial intelligence: Associative memory is used in artificial
intelligence applications such as expert systems and pattern
recognition.
5. Database management: Associative memory can be used in
database management systems to quickly retrieve data based on its
content.
Advantages of Associative Memory
1. It is used where the search time needs to be very short.
2. It is suitable for parallel searches.
3. It is often used to speed up databases.
4. It is used in the page tables of virtual memory systems and in
neural networks.
Disadvantages of Associative Memory
1. It is more expensive than RAM.
2. Each cell must have storage capability and logic circuits for
matching its content with an external argument.
Virtual Memory in Operating System
Virtual Memory is a storage allocation scheme in which secondary
memory can be addressed as though it were part of the main memory.
The addresses a program may use to reference memory are distinguished
from the addresses the memory system uses to identify physical storage
sites and program-generated addresses are translated automatically to
the corresponding machine addresses.
What is Virtual Memory?
Virtual memory is a memory management technique used by operating
systems to give the appearance of a large, continuous block of memory to
applications, even if the physical memory (RAM) is limited. It allows the
system to compensate for physical memory shortages, enabling larger
applications to run on systems with less RAM.
A memory hierarchy, consisting of a computer system’s memory and a
disk, enables a process to operate with only some portions of its address
space in memory. A virtual memory is what its name indicates- it is an
illusion of a memory that is larger than the real memory. We refer to the
software component of virtual memory as a virtual memory manager. The
basis of virtual memory is the non-contiguous memory allocation model.
The virtual memory manager removes some components from memory to
make room for other components.
The size of virtual storage is limited by the addressing scheme of the
computer system and the amount of secondary memory available not by
the actual number of main storage locations.
How Virtual Memory Works?
Virtual Memory is a technique that is implemented using both hardware
and software. It maps memory addresses used by a program, called
virtual addresses, into physical addresses in computer memory.
All memory references within a process are logical addresses that
are dynamically translated into physical addresses at run time. This
means that a process can be swapped in and out of the main
memory such that it occupies different places in the main memory
at different times during the course of execution.
A process may be broken into a number of pieces and these pieces
need not be continuously located in the main memory during
execution. The combination of dynamic run-time address translation
and the use of a page or segment table permits this.
If these characteristics are present then, it is not necessary that all the
pages or segments are present in the main memory during execution. This
means that the required pages need to be loaded into memory whenever
required. Virtual memory is implemented using Demand Paging or
Demand Segmentation.
Types of Virtual Memory
In a computer, virtual memory is managed by the Memory Management
Unit (MMU), which is often built into the CPU. The CPU generates virtual
addresses that the MMU translates into physical addresses.
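The MMU's translation step can be sketched in a few lines. The page table contents and page size below are hypothetical example values; a real MMU performs this lookup in hardware:

```python
# Minimal sketch of MMU-style virtual-to-physical address translation.
# The page table contents here are assumed example values.

PAGE_SIZE = 4096  # 4 KiB pages

# Maps virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    """Split a virtual address into page number and offset, then
    rebase the offset onto the mapped physical frame."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        # In a real system this would trigger a page fault.
        raise LookupError("page fault: page %d not in memory" % page_number)
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Note that the offset within the page is carried over unchanged; only the page number is remapped, which is what makes fixed-size paging translation so cheap.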
There are two main types of virtual memory:
Paging
Segmentation
Paging
Paging divides memory into small fixed-size blocks called pages. When the
computer runs out of RAM, pages that aren’t currently in use are moved to
the hard drive, into an area called a swap file. The swap file acts as an
extension of RAM. When a page is needed again, it is swapped back into
RAM, a process known as page swapping. This ensures that the operating
system (OS) and applications have enough memory to run.
Demand Paging: The process of loading a page into memory on
demand (whenever a page fault occurs) is known as demand paging. It
includes the following steps:
If the CPU tries to refer to a page that is currently not available in
the main memory, it generates an interrupt indicating a memory
access fault.
The OS puts the interrupted process in a blocking state. For the
execution to proceed the OS must bring the required page into the
memory.
The OS will search for the required page in the logical address
space.
The required page will be brought from logical address space to
physical address space. The page replacement algorithms are used
for the decision-making of replacing the page in physical address
space.
The page table will be updated accordingly.
The signal will be sent to the CPU to continue the program execution
and it will place the process back into the ready state.
Hence whenever a page fault occurs these steps are followed by the
operating system and the required page is brought into memory.
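The steps above can be sketched as a small simulation. FIFO is used here as the page replacement algorithm purely for illustration (the text leaves the choice of algorithm open), and the reference string is an assumed example:

```python
from collections import deque

def demand_paging(reference_string, num_frames):
    """Count page faults for a reference string using FIFO
    replacement (one of several possible replacement algorithms)."""
    frames = deque()          # pages currently resident in main memory
    faults = 0
    for page in reference_string:
        if page in frames:
            continue          # page already resident: no fault
        faults += 1           # memory access fault: OS must load the page
        if len(frames) == num_frames:
            frames.popleft()  # replace the oldest resident page
        frames.append(page)   # bring the required page into memory
    return faults

print(demand_paging([1, 2, 3, 1, 4, 2], num_frames=3))  # 4 page faults
```

With 3 frames, pages 1, 2, and 3 each fault on first use, page 1 then hits, page 4 faults and evicts page 1, and page 2 hits: 4 faults in total.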
What is Page Fault Service Time?
The time taken to service the page fault is called page fault service time.
The page fault service time includes the time taken to perform all the
above six steps.
Let the main memory access time be m, the page fault service time be s,
and the page fault rate be p. Then:
Effective memory access time = (p * s) + (1 - p) * m
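Evaluating the formula with assumed example values (not figures from the text) shows why the page fault rate dominates the result:

```python
def effective_access_time(p, s, m):
    """Effective memory access time = p*s + (1 - p)*m, where p is the
    page fault rate, s the page fault service time, and m the main
    memory access time (all times in the same unit)."""
    return p * s + (1 - p) * m

# Assumed values: m = 200 ns, s = 8 ms (8,000,000 ns),
# and one fault per 1,000 accesses.
print(effective_access_time(p=0.001, s=8_000_000, m=200))  # 8199.8 ns
```

Even a fault rate of only 0.1% raises the effective access time from 200 ns to roughly 8,200 ns, because the service time s is so much larger than m. This is why keeping the page fault rate low is critical.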
Segmentation
Segmentation divides virtual memory into segments of different sizes.
Segments that aren’t currently needed can be moved to the hard drive.
The system uses a segment table to keep track of each segment’s status,
including whether it’s in memory, if it’s been modified, and its physical
address. Segments are mapped into a process’s address space only when
needed.
Combining Paging and Segmentation
Sometimes, both paging and segmentation are used together. In this case,
memory is divided into pages, and segments are made up of multiple
pages. The virtual address includes both a segment number and a page
number.
Virtual Memory vs Physical Memory
Feature     | Virtual Memory                          | Physical Memory (RAM)
Definition  | An abstraction that extends the available memory by using disk storage | The actual hardware (RAM) that stores data and instructions currently being used by the CPU
Location    | On the hard drive or SSD                | On the computer's motherboard
Speed       | Slower (due to disk I/O operations)     | Faster (accessed directly by the CPU)
Capacity    | Larger, limited by disk space           | Smaller, limited by the amount of RAM installed
Cost        | Lower (cost of additional disk storage) | Higher (cost of RAM modules)
Data Access | Indirect (via paging and swapping)      | Direct (CPU can access data directly)
Volatility  | Non-volatile (data persists on disk)    | Volatile (data is lost when power is off)
What is Swapping?
Swapping a process out means removing all of its pages from memory,
or marking them so that they will be removed by the normal page
replacement process. Suspending a process ensures that it is not
runnable while it is swapped out. At some later time, the system swaps
the process back from secondary storage into main memory. When a
process is busy swapping pages in and out, the situation is called
thrashing.
What is Thrashing?
At any given time, only a few pages of any process are in the main
memory, and therefore more processes can be maintained in memory.
Furthermore, time is saved because unused pages are not swapped in and
out of memory. However, the OS must be clever about how it manages
this scheme. In the steady state, practically all of the main memory
will be occupied with process pages, so that the processor and OS have
direct access to as many processes as possible. Thus, when the OS brings one
page in, it must throw another out. If it throws out a page just before it is
used, then it will just have to get that page again almost immediately. Too
much of this leads to a condition called Thrashing. The system spends
most of its time swapping pages rather than executing instructions. So a
good page replacement algorithm is required.
In the given diagram, up to some initial degree of multiprogramming
(the point lambda), CPU utilization is very high and the system
resources are utilized up to 100%. But if we increase the degree of
multiprogramming further, CPU utilization falls drastically: the system
spends more time on page replacement alone, and the time taken to
complete the execution of each process increases. This situation in the
system is called thrashing.
Causes of Thrashing
Thrashing occurs in a computer system when the CPU spends more time
swapping pages in and out of memory than executing actual processes.
This happens when there is insufficient physical memory, causing frequent
page faults and excessive paging activity. Thrashing reduces system
performance and makes processes run very slowly. There are many causes
of thrashing, as discussed below.
1. High Degree of Multiprogramming: If the number of processes
keeps on increasing in the memory then the number of frames allocated
to each process will be decreased. So, fewer frames will be available for
each process. Due to this, a page fault will occur more frequently and
more CPU time will be wasted in just swapping in and out of pages and the
utilization will keep on decreasing.
For example:
Let free frames = 400
Case 1: Number of processes = 100
Then, each process will get 4 frames.
Case 2: Number of processes = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing, as the number of processes is
increased, frames per process are decreased. Hence CPU time will be
consumed just by swapping pages.
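The two cases above can be checked with a trivial calculation, here assuming an equal split of frames among processes:

```python
def frames_per_process(free_frames, num_processes):
    """Frames each process receives under equal allocation."""
    return free_frames // num_processes

# The two cases from the example above, with 400 free frames:
print(frames_per_process(400, 100))  # Case 1: 4 frames per process
print(frames_per_process(400, 400))  # Case 2: 1 frame per process
```

With only one frame per process (Case 2), almost every memory reference by a process evicts its only resident page, so the CPU spends its time swapping rather than executing: the thrashing condition.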
2. Lack of Frames: If a process has fewer frames, then fewer pages of
that process will be able to reside in memory and hence more frequent
swapping in and out will be required. This may lead to thrashing. Hence a
sufficient amount of frames must be allocated to each process in order to
prevent thrashing.
Recovery of Thrashing
Do not allow the system to go into thrashing by instructing the long-
term scheduler not to bring the processes into memory after the
threshold.
If the system is already thrashing then instruct the mid-term
scheduler to suspend some of the processes so that we can recover
the system from thrashing.
Performance in Virtual Memory
Let p be the page fault rate (0 <= p <= 1).
If p = 0, there are no page faults.
If p = 1, every reference is a fault.
Effective Access Time (EAT) = (1 - p) * memory access time + p * page
fault time
Page fault time = page fault overhead + swap out + swap in + restart
overhead
The performance of a virtual memory management system depends on
the total number of page faults, which in turn depends on the "paging
policies" and "frame allocation" used.
Frame Allocation
The number of frames allocated to each process is either static or
dynamic.
Static Allocation: The number of frame allocations to a process is
fixed.
Dynamic Allocation: The number of frames allocated to a process
changes.
Paging Policies
Fetch Policy: It decides when a page should be loaded into
memory.
Replacement Policy: It decides which page in memory should be
replaced.
Placement Policy: It decides where in memory should a page be
loaded.
What are the Applications of Virtual Memory?
Virtual memory has the following important characteristics that increase
the capabilities of the computer system.
Increased Effective Memory: One major practical application of
virtual memory is that it enables a computer to have more memory
than its physical memory by using disk space. This allows larger
applications and numerous programs to run at one time without
necessarily needing an equivalent amount of DRAM.
Memory Isolation: Virtual memory allocates a unique address
space to each process, which also keeps processes separated from one
another. This isolation increases safety and reliability, because one
process cannot read or modify another process's memory space, whether
by mistake or by a deliberate act of vandalism.
Efficient Memory Management: Virtual memory also helps in
better utilization of the physical memories through methods that
include paging and segmentation. It can transfer some of the
memory pages that are not frequently used to disk allowing RAM to
be used by active processes when required in a way that assists in
efficient use of memory as well as system performance.
Simplified Program Development: With virtual memory,
programmers do not have to consider the physical memory available in
a system. They can program as if there were one big block of memory,
which makes programming easier and makes it more efficient to deliver
complex applications.
How to Manage Virtual Memory?
Here are 5 key points on how to manage virtual memory:
1. Adjust the Page File Size
Automatic Management: All contemporary operating systems,
including Windows, can configure the size of the page file
automatically based on the amount of RAM, although the user can
manually adjust the page file size if required.
Manual Configuration: For advanced users, setting a custom size
can sometimes improve system performance. A common guideline is to
set the initial size of the page file to 1.5 times the amount of
physical RAM and the maximum size to 3 times the physical RAM.
2. Place the Page File on a Fast Drive
SSD Placement: If feasible, the page file should be stored on an
SSD instead of an HDD. An SSD has better read and write times, so
virtual memory performs noticeably better when the page file resides
on one.
Separate Drive: On systems with multiple drives, placing the page
file on a different drive than the OS can further improve
performance.
3. Monitor and Optimize Usage
Performance Monitoring: Use system monitoring tools to track
virtual memory usage. High page file usage may indicate a lack of
physical RAM, and may call for a change of virtual memory settings or
the addition of physical RAM.
Regular Maintenance: Close or uninstall unneeded background
applications and toolbars to free virtual memory.
4. Disable Virtual Memory for SSD
Sufficient RAM: If your system has a large amount of physical
memory, for example 16 GB or more, you may consider disabling the
page file to minimize SSD wear. However, this should be done
carefully, and only if your applications are unlikely to use all of
the available RAM.
5. Optimize System Settings
System Configuration: Adjust the system settings that affect
virtual memory efficiency. On Windows this involves the advanced
system settings options, while other operating systems such as Linux
provide their own tools and commands for adjusting how virtual
memory is utilized.
Regular Updates: Keep your operating system and drivers up to
date, because new releases often contain enhancements and fixes
related to memory management.
What are the Benefits of Using Virtual Memory?
More processes can be maintained in the main memory.
A process larger than the main memory can be executed because of
demand paging. The OS itself loads pages of a process in the main
memory as required.
It allows greater multiprogramming levels by using less of the
available (primary) memory for each process.
The virtual address space can be much larger than the physical main
memory.
It makes it possible to run more applications at once.
Users are spared from having to add memory modules when RAM
space runs out, and applications are liberated from shared memory
management.
Execution speed improves because only the required portion of a
program needs to be loaded into memory.
Memory isolation has increased security.
It makes it possible for several larger applications to run at once.
Memory allocation is comparatively cheap.
It does not suffer from external fragmentation.
It is efficient to manage logical partition workloads using the CPU.
Automatic data movement is possible.
What are the Limitation of Virtual Memory?
It can slow down the system performance, as data needs to be
constantly transferred between the physical memory and the hard
disk.
It can increase the risk of data loss or corruption, as data can be lost
if the hard disk fails or if there is a power outage while data is being
transferred to or from the hard disk.
It can increase the complexity of the memory management system,
as the operating system needs to manage both physical and virtual
memory.
Cache Memory in Computer Organization
Cache memory is a small, high-speed storage area in a computer. The
cache is a smaller and faster memory that stores copies of the data from
frequently used main memory locations. There are various independent
caches in a CPU, which store instructions and data. The most important
use of cache memory is that it is used to reduce the average time to
access data from the main memory.
By storing this information closer to the CPU, cache memory helps speed
up the overall processing time. Cache memory is much faster than the
main memory (RAM). When the CPU needs data, it first checks the cache.
If the data is there, the CPU can access it quickly. If not, it must fetch the
data from the slower main memory.
Characteristics of Cache Memory
Cache memory is an extremely fast memory type that acts as a
buffer between RAM and the CPU.
Cache Memory holds frequently requested data and instructions so
that they are immediately available to the CPU when needed.
Cache memory is costlier than main memory or disk memory but
more economical than CPU registers.
Cache Memory is used to speed up and synchronize with a high-
speed CPU.
Cache Memory
Levels of Memory
Level 1 or Registers: Memory in which data is stored and
immediately accepted by the CPU. The most commonly used registers
are the accumulator, program counter, address register, etc.
Level 2 or Cache Memory: Fast memory with a shorter access time
than main memory, where data is temporarily stored for quicker
access.
Level 3 or Main Memory: The memory on which the computer
currently works. It is comparatively small in size and volatile:
once power is off, data no longer stays in this memory.
Level 4 or Secondary Memory: External memory that is not as fast
as main memory, but in which data stays permanently.
Cache Performance
When the processor needs to read or write a location in the main memory,
it first checks for a corresponding entry in the cache.
If the processor finds that the memory location is in the cache,
a Cache Hit has occurred and data is read from the cache.
If the processor does not find the memory location in the cache,
a cache miss has occurred. For a cache miss, the cache allocates a
new entry and copies in data from the main memory, then the
request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a
quantity called Hit ratio.
Hit Ratio (H) = hits / (hits + misses) = number of hits / total accesses
Miss Ratio = misses / (hits + misses) = number of misses / total
accesses = 1 - Hit Ratio (H)
Cache performance can be improved by using a larger cache block size
and higher associativity, and by reducing the miss rate, the miss
penalty, and the time to hit in the cache.
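The hit ratio formula can be illustrated with a short sketch; the set of cached addresses and the access sequence below are illustrative assumptions:

```python
def hit_and_miss_ratio(accesses, cache):
    """Compute Hit Ratio H = hits / (hits + misses) and the
    corresponding Miss Ratio = 1 - H for a sequence of memory
    accesses against a set of cached addresses."""
    hits = sum(1 for address in accesses if address in cache)
    misses = len(accesses) - hits
    total = len(accesses)
    return hits / total, misses / total

cached = {0x10, 0x20, 0x30}
h, m = hit_and_miss_ratio([0x10, 0x40, 0x20, 0x10, 0x50], cached)
print(h, m)  # 0.6 0.4 (3 hits and 2 misses out of 5 accesses)
```

This simplified model treats the cache contents as fixed; a real cache would also load each missed address, which is what the allocate-on-miss behaviour described above does.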
Cache Mapping
There are three different types of mapping used for the purpose of cache
memory which is as follows:
Direct Mapping
Associative Mapping
Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of
main memory into only one possible cache line. In direct mapping, each
memory block is assigned to a specific line in the cache. If a line is
already occupied by a memory block when a new block needs to be
loaded, the old block is discarded. The address is split into two
parts: an index field and a tag field. The cache stores the tag field,
while the index selects the cache line. Direct mapping's performance
is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
Direct Mapping
For purposes of cache access, each main memory address can be viewed
as consisting of three fields. The least significant w bits identify a
unique word or byte within a block of main memory. In most contemporary
machines, the address is at the byte level. The remaining s bits specify
one of the 2^s blocks of main memory. The cache logic interprets these s
bits as a tag of s - r bits (the most significant portion) and a line
field of r bits. This latter field identifies one of the m = 2^r lines
of the cache. The line offset consists of the index bits in direct
mapping.
Direct Mapping – Structure
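The three address fields can be extracted with simple bit operations. The field widths below (w = 2 word bits, r = 4 line bits) are assumed example values chosen for illustration:

```python
# Sketch of direct-mapped address decomposition into tag, line, and
# word fields. Field widths are assumed example values.

W_BITS = 2   # w: 4 words per block
R_BITS = 4   # r: 16 cache lines (m = 2^r)

def split_address(address):
    """Return (tag, line, word) fields of a main memory address."""
    word = address & ((1 << W_BITS) - 1)              # low w bits
    line = (address >> W_BITS) & ((1 << R_BITS) - 1)  # next r bits
    tag = address >> (W_BITS + R_BITS)                # remaining s - r bits
    return tag, line, word

print(split_address(0b1011011001))  # (11, 6, 1): tag 1011, line 0110, word 01
```

The line field plays the role of i = j modulo m: any block whose address has line bits 0110 competes for the same cache line, and the tag distinguishes which of those blocks currently occupies it.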
2. Associative Mapping
In this type of mapping, associative memory is used to store the content
and addresses of the memory word. Any block can go into any line of the
cache. This means that the word id bits are used to identify which word in
the block is needed, but the tag becomes all of the remaining bits. This
enables the placement of any word at any place in the cache memory. It is
considered to be the fastest and most flexible mapping form. In
associative mapping, the index bits are zero.
Associative Mapping – Structure
3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping where the
drawbacks of direct mapping are removed. Set associative addresses the
problem of possible thrashing in the direct mapping method. It does this
by saying that instead of having exactly one line that a block can map to
in the cache, we will group a few lines together creating a set . Then a
block in memory can map to any one of the lines of a specific set.
Set-associative mapping allows two or more words in main memory that
share the same index address to be present in the cache at the same
time. Set-associative cache mapping combines the best of the direct
and associative cache mapping techniques. In set-associative mapping,
the index bits are given by the set offset bits. In this case, the
cache consists of a number of sets, each of which consists of a number
of lines.
Set-Associative Mapping
Relationships in the Set-Associative Mapping can be defined as:
m=v*k
i= j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
Set-Associative Mapping – Structure
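The relationships m = v * k and i = j mod v can be demonstrated with assumed example sizes (a 16-line, 2-way set-associative cache):

```python
# Sketch of the set-associative relationships m = v * k and
# i = j mod v, with assumed example cache dimensions.

def number_of_sets(num_lines_m, lines_per_set_k):
    """v = m / k: the cache's m lines grouped into k-way sets."""
    return num_lines_m // lines_per_set_k

def set_for_block(block_j, num_sets_v):
    """i = j mod v: the set a main memory block maps to."""
    return block_j % num_sets_v

m, k = 16, 2                   # 16 lines, 2-way set associative
v = number_of_sets(m, k)
print(v)                       # 8 sets
print(set_for_block(27, v))    # block 27 maps to set 3 (27 mod 8)
```

Within set 3, block 27 may occupy either of the set's k = 2 lines, which is exactly the flexibility that distinguishes this scheme from direct mapping (k = 1) and from fully associative mapping (v = 1).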
Application of Cache Memory
Here are some of the applications of Cache Memory.
Primary Cache: A primary cache is always located on the
processor chip. This cache is small and its access time is
comparable to that of processor registers.
Secondary Cache: Secondary cache is placed between the primary
cache and the rest of the memory. It is referred to as the level 2 (L2)
cache. Often, the Level 2 cache is also housed on the processor
chip.
Spatial Locality of Reference: Spatial locality says that there is
a good chance that elements close to a referenced location will be
accessed next: if a word is referenced, its neighbours are likely to
be referenced soon after.
Temporal Locality of Reference: Temporal locality says that a
recently accessed item is likely to be accessed again soon;
replacement policies such as Least Recently Used (LRU) exploit this.
Whenever a page fault occurs on a word, not just that word but the
complete block containing it is loaded into main memory, because the
spatial locality rule says that after referring to any word, the
following words are likely to be referred to next.
Advantages
Cache Memory is faster in comparison to main memory and
secondary memory.
Programs stored by Cache Memory can be executed in less time.
The data access time of Cache Memory is less than that of the main
memory.
Cache Memory stored data and instructions that are regularly used
by the CPU, therefore it increases the performance of the CPU.
Disadvantages
Cache Memory is costlier than primary memory and secondary
memory .
Data is stored on a temporary basis in Cache Memory.
Whenever the system is turned off, data and instructions stored in
cache memory get destroyed.
The high cost of cache memory increases the price of the Computer
System.
Memory Management Hardware
A memory management system is a collection of hardware and software
procedures for
managing the various programs residing in memory. The memory
management software is part of an overall operating system available in
many computers.
The basic components of a memory management unit are:
1. A facility for dynamic storage relocation that maps logical memory
references into physical memory addresses.
2. A provision for sharing common programs stored in memory by
different users.
3. Protection of information against unauthorized access between
users and preventing users from changing operating system functions.
The fixed page size used in the virtual memory system causes certain
difficulties with respect to program size and the logical structure of
programs. It is more convenient to divide programs and data into
logical parts called segments. A segment is a set of logically related
instructions or data elements associated with a given name. Segments
may be generated by the programmer or by the operating system.
Examples of segments are a subroutine, an array of data, a table
of symbols, or a user's program. The address generated by a
segmented program is called a logical address. The logical address
space may be larger than the physical address space, as in virtual
memory, but it may also be equal to, or sometimes even smaller than,
the physical address space.