Slide 7 OS Memory Management 2025
Memory Management
[Figure: memory hierarchy. The CPU's registers and cache exchange instructions and data with main memory, which holds the operating system and each process's program image; the disk backs main memory.]
Memory Management
• Memory management is the process of controlling and coordinating
computer memory
– assigning blocks to various running programs and
– managing the deallocation of memory when it is no longer needed
Memory Management
• Hardware (Physical Memory):
• RAM (Random Access Memory): Primary memory where active
processes and data reside.
• Cache Memory: Small, high-speed memory that stores frequently
used data for quick access.
• Registers: Ultra-fast storage inside the CPU for immediate processing.
• Hard Disk (Virtual Memory): Acts as an extension of RAM when
physical memory is full (swap space).
Memory Management
• Operating System (Memory Manager):
• Allocation & Deallocation: Assigns memory to processes and
reclaims it when no longer needed.
• Memory Protection: Prevents processes from accessing each other's
memory space.
• Paging & Segmentation: Techniques to efficiently manage memory.
• Garbage Collection: Frees unused memory.
Memory Management
• In order to manage memory effectively the OS must have
– Memory allocation policies
– Methods to track the status of memory locations (free or
allocated)
– Policies for preempting memory from one process to allocate
to another
Memory Management Requirements
• Memory management is intended to satisfy the following
requirements:
− Relocation
− Protection
− Sharing
− Logical organization
− Physical organization
Requirements: Relocation
• When programs are loaded into memory, they might not always be
placed in the same location.
• Available memory is generally shared among a number of processes
• Programmer does not know where the program will be placed in
memory when it is executed
• Active processes need to be swapped in and out of memory
in order to maximize processor utilization
• The OS ensures that a process can be moved (relocated) in
memory without affecting execution.
• Uses base and limit registers or dynamic address translation (e.g.,
page tables).
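The base/limit mechanism above can be sketched in a few lines (a minimal sketch with hypothetical register values; real hardware performs this check on every memory access):

```python
# Minimal sketch of dynamic relocation with base and limit registers.
# The register values below are hypothetical.

def translate(logical_addr: int, base: int, limit: int) -> int:
    """Map a logical address to a physical address, trapping on overflow."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("protection fault: address outside process space")
    return base + logical_addr          # relocation: add the base register

# Relocating the process only means reloading the base register;
# the program's logical addresses are unchanged.
print(translate(100, base=14000, limit=12000))   # 14100
```

Note that moving the process to a new region only requires reloading the base register, which is why relocation does not affect execution.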
Requirements: Protection
Requirements: Sharing
Requirements: Logical Organization
Requirements: Physical Organization
• Manages how memory is physically stored (RAM, cache, disk).
• Uses hierarchical memory management:
– registers → cache → RAM → disk.
• Virtual memory allows processes to use more memory than
physically available by swapping data between RAM and disk.
Simple Memory Management
Memory Management
• Memory Management
– responsible for allocating and managing computer’s main memory
– keeps track of the status of each memory location, either allocated or
free to ensure effective and efficient use of Primary Memory
• There are two Memory Management Techniques:
– Contiguous, and
– Non-Contiguous
Memory Management
Fixed Partitions
Fixed Partitions
• Two possibilities:
– Equal size partitioning
– Unequal size Partition
Fixed Partitions
Advantages:
• Simple to Implement: Easy to design and manage since partitions
are predefined at system startup.
• Little OS Overhead: The operating system requires minimal effort
to track and allocate partitions.
• Fast Process Allocation: The OS quickly assigns a process to a free
partition without complex algorithms.
• Efficient for Small, Fixed Workloads: Works well in systems
where the number and size of processes are predictable.
• Supports Multiprogramming: Multiple processes can run
simultaneously, improving CPU utilization.
Fixed Partitions
Disadvantages:
• Process Size Limitation:
– A process larger than the partition cannot be loaded.
– Solution: Overlays (splitting programs into smaller sections loaded
when needed).
• Inefficient Memory Utilization (Internal Fragmentation):
– A small process still occupies an entire partition, wasting memory.
• External Fragmentation:
– If partitions are unequal-sized, small processes may fill large partitions,
leaving gaps.
• Fixed Degree of Multiprogramming:
– The number of partitions limits how many processes can run concurrently.
Unequal Size Fixed Partitions
Unequal Size Fixed Partitions
Unequal Size Partitions: Multiple queues
Partition-Specific Queues.
• Each partition has its own queue, where processes wait for a
partition of matching size.
• Advantage: Minimizes wasted memory by ensuring each process
is assigned to the best-fitting partition.
• Disadvantage: If a partition is full, smaller processes in other
queues may still have to wait, even if a larger partition is free.
Unequal Size Partitions: Single queue
Global Queue.
• All processes wait in a single queue, and the first available partition
is assigned.
• Advantage: Ensures better load balancing because all partitions are
utilized.
• Disadvantage: Small processes may be placed in large partitions,
causing internal fragmentation.
Dynamic Partitioning
[Figure: dynamic partitioning. Process 2 (224 K) is loaded into memory, with free regions of 896 K, 576 K, and 352 K shown over successive allocations.]
Example: Dynamic Partitioning
Example: Dynamic Partitioning
Dynamic Partitioning
• Advantages
– Efficient Memory Utilization: Each process gets exactly the
memory it needs, reducing internal fragmentation.
– Flexible Partitioning: The number and size of partitions adjust
dynamically based on process requirements.
– Better Multiprogramming: More processes can fit in memory
compared to fixed partitioning.
– No Wasted Space Due to Fixed Partitions: Unlike fixed
partitioning, there is no pre-defined partition size that might be
too large or too small.
Dynamic Partitioning
• Disadvantages
– External Fragmentation: Over time, holes (gaps) appear in
memory as processes terminate, making it harder to allocate
large processes.
– Compaction Overhead: To eliminate fragmentation, the OS must
move processes to merge free spaces (compaction), which is
CPU-intensive and time-consuming.
– Increased OS Overhead: Requires dynamic memory management
techniques like allocation tracking and relocation.
Placement Algorithm
• Used to decide which free block to allocate a process
• Goal: to reduce usage of compaction (time consuming)
• Possible algorithms:
– Best-fit
– First-fit
– Worst-fit
Best-fit Algorithm
• The Best-Fit algorithm finds the smallest available memory block
that is large enough to accommodate a process.
Best-fit Algorithm
Advantages:
• Minimizes wasted memory (reduces fragmentation) → Assigns
the tightest-fitting block to avoid large unused gaps.
• More efficient memory usage than First-Fit or Worst-Fit in some
cases.
Disadvantages:
• Can cause excessive external fragmentation → Small leftover
blocks may be too small for future processes.
• Slower allocation → Requires scanning the entire memory to find
the smallest suitable block.
First-fit Algorithm
• The First-Fit algorithm scans the memory from the beginning and
assigns the first available block that is large enough to
accommodate a process.
First-fit Algorithm
Advantages:
• Faster allocation → Stops searching as soon as it finds a suitable
block (better than Best-Fit in speed).
• Less CPU overhead → Does not scan the entire memory, unlike
Best-Fit.
Disadvantages:
• Leads to external fragmentation → Large free spaces are broken
into smaller unusable fragments.
• Can cause inefficient memory usage → Leaves larger free blocks
unused while creating many small holes.
Worst-fit Algorithm
• The Worst-Fit algorithm assigns a process to the largest available
memory block to leave the biggest possible free space for future
allocations.
Worst-fit Algorithm
Advantages:
• Reduces external fragmentation → Leaves larger free blocks that
may accommodate future processes.
• Good for large processes → Ensures larger blocks remain
available.
Disadvantages:
• Can waste large memory chunks → Large blocks get broken into
smaller, inefficient pieces.
• Slower allocation → Requires scanning the entire memory to find
the largest block.
Example: Placement Algorithm
• Consider six memory partitions of size 200 KB, 400 KB, 600 KB,
500 KB, 300 KB, and 250 KB.
– How would the first-fit, best-fit, and worst-fit algorithms use
these partitions to allot four processes of sizes 350 KB, 220 KB,
450 KB and 480 KB in that order.
– Which algorithm makes the most efficient use of memory?
Solution: Placement Algorithm
First fit Algorithm (fixed free partitions):
• 400 KB is the first free partition, in which the process P1 (350 KB) can be
stored
• (400-350) KB = 50 KB is the internal fragmentation
Solution: Placement Algorithm
First fit Algorithm:
• 600 KB is the first free partition, in which the process P2 (220 KB) can be
stored
• (600-220) KB = 380 KB is the internal fragmentation
Solution: Placement Algorithm
First fit Algorithm:
• 500 KB is the first free partition, in which the process P3 (450 KB) can be
stored
• (500-450) KB = 50 KB is the internal fragmentation
Solution: Placement Algorithm
Best fit Algorithm:
• Process P1 (350 KB) can be stored in 400 KB, 600 KB & 500 KB
• But out of these three partitions, 400 KB is the smallest partition which can
accommodate the process P1.
• So, (400-350) KB = 50 KB is the internal fragmentation
Solution: Placement Algorithm
Best fit Algorithm:
• Process P2 (220 KB) can be stored in 600 KB, 500 KB, 300 KB & 250 KB.
• But out of these four partitions, 250 KB is the smallest partition which can
accommodate the process P2.
• So, (250-220) KB = 30 KB is the internal fragmentation.
Solution: Placement Algorithm
Best fit Algorithm:
Solution: Placement Algorithm
Worst fit Algorithm:
• Process P1 (350 KB) can be stored in 400 KB, 600 KB & 500 KB.
• But out of these three partitions, 600 KB is the largest partition which can
accommodate the process P1.
• So, (600-350) KB = 250 KB is the internal fragmentation.
Solution: Placement Algorithm
Worst fit Algorithm:
• Process P2 (220 KB) can be stored in 400 KB, 500 KB, 300 KB & 250 KB.
• But out of these four partitions, 500 KB is the largest partition which can
accommodate the process P2.
• So, (500-220) KB = 200 KB is the internal fragmentation.
Solution: Placement Algorithm
Worst fit Algorithm:
• Here, for any process, it chooses the largest partition which can
accommodate the process.
• So, the remaining large free space is wasted.
• This leads to large internal fragmentation.
• It also generally performs worst among the three algorithms.
Example: Placement Algorithm
• Consider the following snapshot containing 150 KB and 350 KB free
memory partitions (dynamic partitions).
– How would the first-fit, best-fit, and worst-fit algorithms use these partitions to
allot four processes of sizes 300 KB, 25 KB, 125 KB and 50 KB in that order.
– Which algorithm makes the most efficient use of memory?
Solution: Placement Algorithm
First fit Algorithm:
• 350 KB is the first free partition, in which the process P1 (300 KB) can be
stored
• After allocation of P1, remaining 50 KB free partition is available which
can be used by other processes.
Solution: Placement Algorithm
First fit Algorithm:
• 150 KB is the first free partition, in which the process P2 (25 KB) can be
stored
• After allocation of P2, remaining 125 KB free partition is available which
can be used by other processes.
Solution: Placement Algorithm
First fit Algorithm:
Solution: Placement Algorithm
Best fit Algorithm:
Solution: Placement Algorithm
Best fit Algorithm:
Solution: Placement Algorithm
Best fit Algorithm:
Solution: Placement Algorithm
Worst fit Algorithm:
• 350 KB is the only free partition, in which the process P1 (300 KB) can be
stored
• After allocation of P1, remaining 50 KB free partition is available which
can be used by other processes.
Solution: Placement Algorithm
Worst fit Algorithm:
Solution: Placement Algorithm
Worst fit Algorithm:
Example: Placement Algorithm
• Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB,
600 KB (in order).
– How would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)?
– Which algorithm makes the most efficient use of memory?
Solution – Placement Algorithm
First-fit: 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
212 KB is put in 500 KB partition (100 KB, 500-212=288 KB, 200 KB,
300 KB, 600 KB)
417 KB is put in 600 KB partition (100 KB, 288 KB, 200 KB, 300 KB,
600-417=183 KB)
112 KB is put in 288 KB partition (100 KB, 288-112=176 KB, 200 KB,
300 KB, 183 KB)
426 KB must wait
Solution – Placement Algorithm
Best-fit: 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
212 KB is put in 300 KB partition (100 KB, 500 KB, 200 KB,
300-212=88 KB, 600 KB)
417 KB is put in 500 KB partition (100 KB, 500-417=83 KB, 200 KB,
88 KB, 600 KB)
112 KB is put in 200 KB partition (100 KB, 83 KB, 200-112=88 KB,
88 KB, 600 KB)
426 KB is put in 600 KB partition (100 KB, 83 KB, 88 KB, 88 KB,
600-426=174 KB)
Solution – Placement Algorithm
Worst-fit: 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
212 KB is put in 600 KB partition (100 KB, 500 KB, 200 KB, 300 KB,
600-212=388 KB)
417 KB is put in 500 KB partition (100 KB, 500-417=83 KB, 200 KB,
300 KB, 388 KB)
112 KB is put in 388 KB partition (100 KB, 83 KB, 200 KB, 300 KB,
388-112=276 KB)
426 KB must wait
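The three placement strategies can be sketched as a scan over a list of free holes (a minimal sketch; hole splitting and coalescing are simplified, and the helper names are hypothetical). The demo reproduces the 100/500/200/300/600 KB example above:

```python
# Sketch of first-fit, best-fit, and worst-fit over a list of free holes
# (sizes in KB). Allocating from a hole shrinks it in place; a request
# that fits nowhere must wait (returns None instead of a hole index).

def allocate(holes, size, strategy):
    candidates = [i for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None                                    # process must wait
    if strategy == "first":
        i = candidates[0]                              # first hole large enough
    elif strategy == "best":
        i = min(candidates, key=lambda i: holes[i])    # tightest fit
    else:                                              # "worst": largest hole
        i = max(candidates, key=lambda i: holes[i])
    holes[i] -= size
    return i

def place_all(partitions, requests, strategy):
    holes = list(partitions)
    return [allocate(holes, r, strategy) for r in requests], holes

# Partitions 100/500/200/300/600 KB, requests 212/417/112/426 KB in order.
reqs = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, place_all([100, 500, 200, 300, 600], reqs, s))
```

Best-fit is the only strategy that places all four requests here, which is why it makes the most efficient use of memory in this example.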
Address Types
• Logical address
• Physical address
• Relative address
Logical Address Space
Logical Address (Virtual Address)
• The address generated by the CPU when a program runs.
• It is independent of physical memory and needs to be translated
before accessing RAM.
• Used by the user programs and managed by the OS.
• Generated by: The CPU.
• Converted to: Physical address by the Memory Management Unit
(MMU).
• Example: If a program requests memory at address 0x0020, the
CPU generates this as a logical address, which must be translated to
a physical address.
Physical Address Space
Physical Address
• The actual address in RAM (main memory) where data is stored.
• Used by the hardware (memory unit) to fetch and store data.
• Generated after address translation from a logical address.
• Example: A logical address 0x0020 might be mapped to a physical
address 0xA020 in RAM.
• Physical Address Space: set of all physical addresses
corresponding to the logical addresses
Mapping of Logical Address to Physical Address
• Set of logical addresses used by the program is called the logical address space
– For example, [0, max_address]
• The logical address space has to be mapped somewhere in physical memory.
[Figure: the program's logical address space [0, logic_max] is placed in main memory at the base address, occupying [base, base + limit] within physical memory [0, phy_max].]
Memory-Management Unit (MMU)
Comparison
Base and Limit Registers
Base and Limit Registers
Hardware Address Protection with Base
and Limit Registers
Address Binding
• Address Binding is the process of mapping logical addresses to
physical addresses.
• This can happen at different stages:
– Compile-Time Binding
– Load-Time Binding
– Execution-Time Binding (Dynamic Binding)
Binding of Instructions and Data to
Memory
Compile-Time Binding
• Happens during compilation if the memory location is known in
advance.
• The absolute physical address is assigned to variables and
instructions.
• No relocation is possible after compilation.
• Used in embedded systems or simple programs with fixed memory
locations.
• Example: If a variable is assigned to memory address 1000 during
compilation, it will always use that address.
• Limitation: The program must always load into the same memory
location.
Binding of Instructions and Data to
Memory
Load-Time Binding
• Occurs when the program is loaded into memory.
• The logical addresses are converted to physical addresses by the
loader.
• Relocation is possible → The OS can load the program into any
available memory block.
• Example: If a program is compiled with relative addresses, the
loader assigns actual physical addresses when the program is
loaded into RAM.
• Advantage: Allows more flexibility than compile-time binding.
• Limitation: Once loaded, the program cannot change its memory
location.
Binding of Instructions and Data to
Memory
Execution-Time Binding
• Happens during program execution.
• Logical addresses are converted to physical addresses dynamically
using the Memory Management Unit (MMU).
• Used in modern operating systems with virtual memory (paging,
segmentation).
• Example: In a system with paging, a logical address like 0x0020 is
translated dynamically to a physical address in RAM.
• Advantage:
– Supports dynamic relocation (processes can move in memory).
– Enables efficient memory utilization (swapping, virtual memory).
• Limitation:
– Requires hardware support (MMU, page tables).
– Slower than compile-time and load-time binding due to run-time
address translation.
Multistep Processing of a User
Program
• User programs go through several
steps before being run.
• Program components do not
necessarily know where they will
be loaded in memory.
• Memory deals with absolute
addresses.
• Logical addresses need to be
bound to physical addresses at
some point.
Dynamic relocation using a relocation
register
Dynamic Loading
§ Dynamic Loading is a memory management technique where
program modules are loaded into memory only when needed
instead of loading the entire program at once.
§ This helps in efficient memory utilization and reduces initial load
time.
Hardware Support for Relocation
and Limit Registers
• Relocation registers used to protect user processes from each other, and
from changing operating-system code and data
• Relocation register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical
address must be less than the limit register
• On a context switch, the relocation and limit registers are reloaded with the new process's values
• MMU maps logical address dynamically
Swapping
Swapping
Swapping
Advantages of Swapping
• Increases CPU Utilization – Keeps the CPU busy by ensuring that
ready processes are always in memory.
• Supports Multiprogramming – More processes can be managed even
with limited RAM.
• Allows Execution of Large Programs – Programs larger than physical
memory can still run.
Disadvantages of Swapping
• High Disk I/O Overhead – Frequent swapping increases read/write
operations, slowing performance.
• Increased Context Switching Time – Moving processes in and out
takes time.
• Thrashing – If swapping occurs too frequently, system performance
drops significantly.
Schematic View of Swapping
Swapping: Example
Swapping: Example
Step 1: Initial State (Before Swapping)
• RAM (256 MB): P1 (Running, 100 MB), P2 (Waiting, 120 MB), Free Space (36 MB)
• Disk (Swap Area): P3 (Waiting, 140 MB)
• P3 cannot fit into memory because only 36 MB is free.
• The OS swaps out P1 or P2 to make room for P3.
Swapping: Example
Step 2: Swapping Out P1 to Disk
• RAM (256 MB): P2 (Running, 120 MB), P3 (Loaded from Disk, Running, 140 MB)
• Disk (Swap Area): P1 (Swapped Out, 100 MB)
• P1 is moved to disk (swap space) to free 100 MB of RAM.
• P3 is loaded into memory and starts running.
Swapping: Example
Step 3: Swapping Back P1 When Needed
• If P1 needs to run again, the OS:
– swaps out P2 or P3 to disk, and
– brings P1 back into memory.
• RAM (256 MB): P1 (Running, 100 MB), P3 (Waiting, 140 MB)
• Disk (Swap Area): P2 (Swapped Out, 120 MB)
• P2 is now swapped out, and P1 is brought back into RAM.
Paging
• Paging is a memory management technique that allows a process to
be stored in non-contiguous memory locations, avoiding
fragmentation.
• The main memory (RAM) is divided into fixed-size blocks called
frames, and the process is divided into same-size blocks called
pages.
Paging
• Logical Address or Virtual Address (represented in bits):
– An address generated by the CPU.
Paging
Example:
• If Logical Address = 31 bits
– Logical Address Space = 2^31 bytes = 2 GB (1 GB = 2^30 bytes)
Paging
How Paging Works?
• The process is divided into fixed-size pages (e.g., 4 KB each).
Paging
Paging
[Figure: the program (logical address space) is a set of pages 0, 1, 2, …; main memory is a set of fixed-size frames, each of size 2^x; pages are loaded into frames.]
• Program: set of pages
• Physical memory: set of fixed-size frames
• Page size = Frame size
Paging
[Figure: when the program is loaded, its pages are placed in arbitrary free frames of memory; the page table records which frame holds each page.]
Page Table (P# → F#): 0 → 1, 1 → 4, 2 → 2, 3 → 6, 4 → 9, 5 → 7
Paging
Address generated by CPU is divided into
• Page number (p):
– specifies a page of the process from which data is to be read.
– represented by # of bits required for Logical Address Space.
– Used as an index to the page table, which contains address of frame in main
memory.
• Page offset (d):
– specifies the word/byte on the page that is to be accessed.
– Represented by # of bits required for particular word/byte in a page.
• If Logical Address is represented by m bits, then the logical address space
is 2^m bytes. Assume page size is 2^n bytes.
page number (p): (m – n) bits | page offset (d): n bits — m bits in total
Paging
Physical Address is divided into
• Frame number(f):
– specifies the frame where the required page is stored.
– Represented by # of bits required for Physical Address Space.
– Stored as value in page table
• Frame offset(d):
– specifies the word that has to be accessed from that frame.
– Represented by # of bits required to represent particular word in a frame
• Similarly, if Physical Address is represented by m bits, then the physical
address space is 2^m bytes. Assume frame size is 2^n bytes.
frame number (f): (m – n) bits | frame offset (d): n bits — m bits in total
Problem: Paging
• Assuming a 1 KB page size, what are the page numbers and offsets for the
following address references (provided as decimal numbers):
a) 2375
b) 19366
c) 30000
d) 256
e) 16385
Paging
Solution:
– Page size = 2^n = 1024 B = 2^10 B
– So, # of bits in offset part = 10
Solution steps:
1. Convert logical address: Decimal → Binary
2. Split the binary address into 2 parts (page #, offset); the offset takes n bits
3. Convert page # & offset: Binary → Decimal
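Because the page size is a power of two, the split above is just a divide and a remainder (equivalently a shift and a mask). A minimal sketch:

```python
# Page size 1 KB = 2^10, so offset = low 10 bits, page # = remaining bits.
PAGE_SIZE = 1024          # 2**10

def split(logical_addr):
    return logical_addr // PAGE_SIZE, logical_addr % PAGE_SIZE
    # equivalently: (logical_addr >> 10, logical_addr & 0x3FF)

for addr in (2375, 19366, 30000, 256, 16385):
    page, offset = split(addr)
    print(f"{addr}: page {page}, offset {offset}")
```

This yields the page/offset pairs (2, 327), (18, 934), (29, 304), (0, 256), and (16, 1) for the five references above.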
Solution: Paging
Problem: Paging
Consider a logical address space of 64 pages of 1024 words each, mapped onto
a physical memory of 32 frames.
a) How many bits are in the logical address?
b) How many bits are in the physical address?
Solution: Paging
Method 1:
a)
Let m be the # of bits to represent the logical address.
There are 64 pages of 1024 words each in the logical address space:
2^m = 64 × 1024 = 2^6 × 2^10 = 2^16, so the logical address has 16 bits.
Solution: Paging
b)
Let m be the # of bits in the physical address.
There are 32 frames of 1024 words each in the physical address space:
2^m = 32 × 1024 = 2^5 × 2^10 = 2^15, so the physical address has 15 bits.
Example: Address Translation
Page size = 4 bytes = 2^2 (2-bit offset); 4-bit logical address; 32-byte memory (5-bit physical address). A logical address splits into page number | offset.
• LA = 5 = 0101 → page # = 01, offset = 01; page 1 maps to frame 110, so PA = 110 01 = 11001
• LA = 11 = 1011 → page # = 10, offset = 11; page 2 maps to frame 001, so PA = 001 11 = 00111
• LA = 13 = 1101 → page # = 11, offset = 01; page 3 maps to frame 010, so PA = 010 01 = 01001
Example: Address Translation
m = 3, so there are 2^3 = 8 logical addresses; n = 2, so page size = 2^2 = 4 → 2 bits for the offset and 1 bit for the page #.
Logical memory holds A–H; page table: page 0 → frame 11, page 1 → frame 10 (2 bits for the frame #; each entry maps 4 addresses, the page size).
[Figure: in physical memory, E–H (page 1) occupy frame 10 at addresses 1000–1011, and A–D (page 0) occupy frame 11 at addresses 1100–1111.]
Numerical: Paging
Given: (Assumption memory is Byte Addressable)
MM=64 MB
LA 32 bit
Page size = 4 KB
Compute the total space wasted in maintaining the page table.
Numerical: Paging
Solution:
MM size = 64 MB
Since it is byte addressable, there are 64 MB/1 B = 64 M addresses
64 M = 2^6 × 2^20 = 2^26 addresses → 26 bits are used to represent a physical address.
Page size = 4 KB = 2^12 B → 12 bits for the offset, so the frame # takes (26 − 12) = 14 bits ≈ 2 B per page-table entry.
LA = 32 bits → # of pages = 2^32 B / 2^12 B = 2^20 = 1 M entries in the page table.
Table size = 1 M × 2 B = 2 MB
Numerical: Paging
Given: (Assumption memory is Byte Addressable)
MM=256 MB
LA 40 bit
Page size = 4 KB
Process size = 4 MB
Compute the total space wasted in maintaining the page table for the process.
Numerical: Paging
Solution:
MM size = 256 MB
Since it is byte addressable, there are 256 MB/1 B = 256 M addresses
256 M = 2^8 × 2^20 = 2^28 addresses → 28 bits are used to represent a physical address.
Page size = 4 KB = 2^2 × 2^10 B = 2^12 B
i.e. 12 bits are used to represent an offset value.
Physical address = 28 bits: f (16 bits) | d (12 bits); 16 bits = 2 B
Logical address = 40 bits: p (28 bits) | d (12 bits)
# of pages in process = 4 MB/4 KB = 1 K
So, there will be 1 K entries in the page table
Each entry in the page table requires 16 bits = 2 B (frame #)
Table size for process = 1 K × 2 B = 2 KB
Numerical: Paging
Given: Size of page Table, compute size of the process (Assumption memory is
Byte Addressable)
SM = 256 GB
MM=512 KB
Page size = 2 KB
Page Table size = 8 KB
Compute size of the process.
11
6
Numerical: Paging
Solution:
Page size = 2 KB = 2 × 2^10 B = 2^11 B, i.e. 11 bits are used to represent an offset value.
Secondary memory size = 256 GB
Since it is byte addressable, there are 256 GB/1 B = 256 G addresses
256 G = 2^8 × 2^30 = 2^38, so 38 bits are used to represent a logical address.
Logical address = 38 bits: p (27 bits) | d (11 bits)
MM size = 512 KB, so # of addresses = 512 K
512 K = 2^9 × 2^10 = 2^19 (19 bits used to represent a PA)
Physical address = 19 bits: f (8 bits) | d (11 bits); the frame # takes (19 − 11) = 8 bits = 1 B
One entry in the page table is of 8 bits = 1 B
Let # of entries in the page table = x,
so x × 1 B = 8 KB (table size, given)
x = 8 KB/1 B = 8 K (# of entries in the page table)
The process has 8 K pages, each of size 2 KB, so size of process = 8 K × 2 KB = 16 MB
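The page-table arithmetic in these numericals can be sketched generically (a sketch for byte-addressable memory; the function name is hypothetical, and entry sizes are rounded up to whole bytes as the slides do):

```python
# Sketch of the page-table size calculation used in the numericals above.
import math

def page_table_size(mm_bytes, page_bytes, process_bytes):
    offset_bits = int(math.log2(page_bytes))       # bits for d
    pa_bits = int(math.log2(mm_bytes))             # bits in a physical address
    frame_bits = pa_bits - offset_bits             # bits for the frame #
    entry_bytes = math.ceil(frame_bits / 8)        # one PTE stores a frame #
    entries = process_bytes // page_bytes          # one entry per page
    return entries * entry_bytes

# 256 MB MM, 4 KB pages, 4 MB process → 2 KB table (second numerical)
print(page_table_size(256 * 2**20, 4 * 2**10, 4 * 2**20))
# 64 MB MM, 4 KB pages, full 32-bit address space → 2 MB (first numerical)
print(page_table_size(64 * 2**20, 4 * 2**10, 2**32))
```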
Implementation of Page Table
• Page table is kept in main memory
– Page-table base register (PTBR) points to the page table
– Page-table length register (PTLR) indicates size of the page table
Implementation of Page Table
[Figure: main memory holds programs P1 and P2 together with their PCBs; the currently running process is P1, so the CPU's PC and page-table registers are loaded from PCB1.]
Disadvantage of Paging
• In this scheme every data/instruction access requires two memory
accesses.
– One for the page table (to get the frame #)
– one for the data/instruction (i.e. word from the page)
• Increases the effective access time due to increased number of memory
accesses.
• Two memory access problem can be solved by the use of a special fast-
lookup hardware cache called
– associative memory or
– translation look-aside buffers (TLBs)
Translation Lookaside Buffer (TLB)
Translation Lookaside Buffer (TLB) is a solution that tries to reduce the
effective access time.
• Being implemented in hardware, the access time of the TLB is much lower than
that of main memory.
Translation Lookaside Buffer (TLB)
• H/w implementation of the page table can be done using dedicated registers.
– Using registers for the page table is satisfactory only if the page table is small.
• If the page table contains a large number of entries, a TLB (a special,
small, fast look-up hardware cache) can be used instead.
Translating Logical Address into
Physical Address
In a paging scheme using TLB,
• The logical address generated by the CPU is translated into the physical
address using following three steps
– Step-01: CPU generates a logical address consisting of two parts:
• Page Number,
• Page Offset
Translating Logical Address into
Physical Address
– Step-02: TLB is checked to see if it contains an entry for the referenced page
number.
– The referenced page number is compared with the TLB entries all at once.
– Now, two cases are possible:
• Case-01: If there is a TLB hit
• If TLB contains an entry for the referenced page number, a TLB hit occurs.
• In this case, TLB entry is used to get the corresponding frame number for the
referenced page number.
• Case-02: If there is a TLB miss
• If TLB does not contain an entry for the referenced page number, a TLB miss
occurs.
• In this case, page table is used to get the corresponding frame number for the
referenced page number.
• Then, TLB is updated with the page number and frame number for future
references.
Translating Logical Address into
Physical Address
– Step-03:
• After the frame number is obtained, it is
combined with the page offset to
generate the physical address.
• Then, physical address is used to read
the required word from the main
memory.
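The three steps can be sketched with a dictionary standing in for the TLB (a minimal sketch with a hypothetical page table; a real TLB is hardware with a bounded size and a replacement policy):

```python
PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 2}      # hypothetical page → frame mapping
tlb = {}                             # page → frame cache

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)   # Step 1: split the LA
    if page in tlb:                  # Step 2, case 1: TLB hit
        frame = tlb[page]
    else:                            # Step 2, case 2: TLB miss, walk page table
        frame = page_table[page]
        tlb[page] = frame            # update TLB for future references
    return frame * PAGE_SIZE + offset   # Step 3: combine frame # and offset

print(translate(4100))   # page 1: miss, then cached
print(translate(4200))   # page 1 again: TLB hit
```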
TLB: Example
TLB
Important Points:
• Point-01:
– Unlike the page table, there exists only one TLB in the system.
– Whenever a context switch occurs, the entire content of the TLB is flushed.
– The TLB is then updated with entries for the currently running process.
• Point-02:
– Time taken to update the TLB after getting the frame number from the page table is
negligible.
– The TLB is updated in parallel while fetching the word from main memory.
TLB
Advantages of using a TLB:
• The TLB reduces the effective access time.
• Only one memory access is required when a TLB hit occurs.
Disadvantages:
• After a process has been running for some time, the TLB hit ratio increases and the
process runs smoothly.
• But when a context switch occurs, the entire content of the TLB is flushed.
• The TLB is then updated again for the currently running process.
• This happens again and again.
Other disadvantages:
• The TLB can hold the data of only one process at a time.
• When context switches occur frequently, TLB performance degrades due to a
low hit ratio.
• As it is special hardware, it involves additional cost.
Effective Access Time (EAT)
• Hit ratio: percentage of times that a page number is found in the
associative registers (TLB)
• Let hit ratio = H, so miss ratio = (1 − H)
• Effective Access Time (EAT)
EAT = H × (Access time of TLB + Access time of MM) +
(1 − H) × (Access time of TLB + 2 × Access time of MM)
EAT
How the average access time has improved?
Let
MM access time = 400 ms
TLB access time = 50 ms
H (hit ratio) = 90%
i.e. 90 times out of 100 the data is found in the TLB
EAT: Solution
Without using TLB (only paging)
• Every access to information/data will access main memory (MM) twice
• Once for page table, which is in MM
• Then actual information/data from MM
• Access time 2 × 400 = 800 ms
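Plugging the same numbers into the EAT formula above completes the comparison (H = 0.9, TLB = 50 ms, MM = 400 ms, versus the 800 ms computed without a TLB):

```python
# EAT with a TLB: a hit costs one memory access, a miss costs two.
def eat(hit_ratio, tlb_time, mm_time):
    hit = tlb_time + mm_time             # TLB hit: TLB lookup + data access
    miss = tlb_time + 2 * mm_time        # TLB miss: TLB + page table + data
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(eat(0.9, 50, 400), 1))   # 490.0 ms, down from 800 ms
```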
EAT: Numerical
Solution
Given
TLB access time = 10 ns
Main memory access time = 50 ns
TLB Hit ratio = 90% = 0.9
EAT = 0.9 × (10 + 50) + 0.1 × (10 + 2 × 50) = 54 + 11 = 65 ns
EAT: Numerical
Solution
Given
Effective access time = 160 ns
Main memory access time = 100 ns
TLB Hit ratio = 60% = 0.6
Let TLB access time = t:
160 = 0.6 × (t + 100) + 0.4 × (t + 2 × 100) = t + 140, so t = 20 ns
EAT: Solution
Answer:
a)
memory reference time= 200+200= 400 ns
( 200 ns to access the page table in RAM and 200 ns to access the
word in memory)
b)
Case (1) : page entry found in associative registers (part1)
Memory access time = 0+200=200 ns
( 0 ns to access the page table in associative registers and 200 ns to
access the word in memory)
EAT: Solution
Answer:
b)
Case (2) : page entry NOT found in associative registers (part1) but
found in page table in MM
Memory access time = 0+200+200=400 ns
( 0 ns to access the page table in associative registers (part1) , 200 ns
to access the page table (part2) in RAM and 200 ns to access the
word in memory)
EAT: Solution
Answer:
• In the case that the page is found in the TLB (TLB hit) the total time
would be the time of search in the TLB plus the time to access memory
• TLB_hit_time = TLB_search_time + memory_access_time
• In the case that the page is not found in the TLB (TLB miss) the total
time would be the time to search the TLB plus the time to access
memory to get the page table and frame, plus the time to access memory
to get the data.
• TLB_miss_time := TLB_search_time + memory_access_time +
memory_access_time
EAT: Solution
Answer:
Effective Access Time
EAT := TLB_miss_time × (1- hit_ratio) + TLB_hit_time × hit_ratio.
Multilevel Paging
Multilevel Paging
Need:
The need for multilevel paging arises when
• Size of page table > frame size
• As a result, the page table cannot be stored in a single frame in main memory
Multilevel Paging
Working:
In multilevel paging
• If page table size > frame size, then
– PT is further divided into several parts.
• Size of each part is same as frame size except possibly the last part.
• Pages of page table stored in different frames of MM.
• To keep track of frames storing the pages of the divided page table
– another page table is maintained.
• As a result, hierarchy of page tables get generated.
• Multilevel paging is done till the level is reached where the entire page
table can be stored in a single frame.
Two-Level Page-Table Scheme
Two-Level Paging: Example
• A 32-bit logical address
• Page size 4 KB
• PTE = 4 B
Two-Level Paging: Example
LA = 32 bits
Process size = 2^32 B
Page size = 4 KB = 2^2 × 2^10 B = 2^12 B
# of pages = 2^32 B / 2^12 B = 2^20 = 1 M
Two-Level Paging: Example
LA = 32 bits; Process size = 2^32 B
Page size = 4 KB = 2^12 B; PTE size = 4 B
# of pages = 2^32 B / 2^12 B = 2^20 = 1 M, so the PT has 1 M entries
PT size = 1 M × 4 B = 4 MB
Size(PT) > page size, so the PT itself requires several pages
Two-Level Paging: Example
# of pages needed for the PT = 2^22 B / 2^12 B = 2^10
(so the PT alone requires 1024 frames in MM, which is very large)
To keep the PT in MM, another (outer) PT is maintained.
# of entries in the outer PT = 2^10
Size of the outer PT = 2^10 × 4 B = 4 KB = page size
The outer PT can now be stored in one frame, so paging of page tables stops here.
Two-Level Paging: Example
• The CPU generates a logical address (LA).
• The LA has two parts: page number and offset.
• The page number is further divided into p1 and p2:

page number | page offset
p1 (10 bits) | p2 (10 bits) | d (12 bits)

• p1 indexes the outer PT (whose base is held in the PT Base Register),
p2 indexes the selected inner PT page, and d is the offset within the frame.
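The 10/10/12 address split described above can be sketched with a few shifts and masks (the sample address is arbitrary):

```python
# Split a 32-bit logical address into p1 (10 bits), p2 (10 bits), d (12 bits).
def split_la(la):
    d  = la & 0xFFF           # low 12 bits: offset within the page
    p2 = (la >> 12) & 0x3FF   # next 10 bits: index into the inner page table
    p1 = (la >> 22) & 0x3FF   # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_la(0x00403004))   # (1, 3, 4)
```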
Two-Level Paging Scheme
[Figure: a single-level page table (16 entries, 4-bit page numbers) vs. the
equivalent two-level page table (a 4-entry outer table pointing to 4-entry
inner tables)]
Illustration of Multilevel Paging
Illustration of Multilevel Paging
# of Frames in MM
= Size(MM) / Frame size = 16 TB / 4 KB = 2^44 B / 2^12 B = 2^32 frames
# of bits to represent a frame number = 32 bits → PTE size = 4 B
Illustration of Multilevel Paging
# of Pages of the Process
= Process size / Page size = 4 GB / 4 KB = (2^2 × 2^30) / (2^2 × 2^10) = 2^20 pages
The inner page table keeps track of the frames storing the pages of the process.
Observations:
• Size of inner page table (2^20 × 4 B = 4 MB) > frame size (4 KB).
• Thus, the inner page table cannot be stored in a single frame.
• So, the inner page table has to be divided into pages.
Illustration of Multilevel Paging
# of Pages of the Inner Page Table
= Inner page table size / Page size = 4 MB / 4 KB = (2^2 × 2^20) / (2^2 × 2^10) = 2^10 pages
• The 2^10 pages of the inner page table are stored in different frames of the main memory.
• The outer page table keeps track of the frames storing the pages of the inner page table.
Illustration of Multilevel Paging
Observations:
• Size of outer page table (2^10 × 4 B = 4 KB) is the same as the frame size.
• Thus, the outer page table can be stored in a single frame.
• So, for the given system, there are two levels of page tables.
• Page Table Base Register (PTBR) will store the base address of the outer page
table.
Two level page table: Example
Given:
• Logical address = 32 bits
• The process uses 8 MB of its address space
• Page size = 4096 B
• Logical address division: 10, 10, 12
Two level page table: Example
10 | 10 | 12
• Each entry of a second-level page table translates a page # to a frame #;
i.e., each entry maps a page of 4096 bytes (the page size).
• There are 2^10 = 1024 entries in a second-level page table.
• A second-level page table can therefore map
2^10 × 2^12 = 2^22 B = 4 MB of logical address space.
• The top-level page table likewise has 2^10 entries, each pointing to one
second-level page table.
Two level page table: Example
Two level page table: Example
[Figure: the 2^32 B = 4 GB address space with an 8 MB used region, a 12 MB
used region, and the rest unused; the top-level page table (2^10 entries)
points to five second-level page tables (2^10 entries each) covering the
used regions]
Space needed to hold the page tables of the process:
4 KB (outer) + 5 × 4 KB (inner) = 24 KB
Two level page table: Example
Solution: Two level page table
The base addresses of these tables are stored in the page table [second-last level].
Size of page table [second-last level]
= 2^22 × 2^2 B
= 2^24 B
Solution: Two level page table
The base addresses of these tables are stored in the page table [third-last level].
Size of page table [third-last level]
= 2^11 × 2^2 B
= 2^13 B
= page size
3 levels are required.
Three-level Paging Scheme
Hashed Page Tables
• A hashed page table is a page table structure that employs a hash table
to map virtual addresses to physical addresses.
• This structure is particularly useful in systems with large address spaces,
such as 64-bit architectures (> 32 bits), where traditional multi-level page
tables can be inefficient due to their size.
Hashed Page Tables
• Hashing Virtual Page Number (VPN)
• The virtual page number (VPN) is extracted from the virtual address.
• This VPN is passed through a hash function to determine an index in the hash
table.
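A minimal sketch of the lookup just described, using chaining to resolve collisions (the table size, hash function, and sample VPN→frame mappings are assumptions for illustration):

```python
# Hashed page table: buckets of (vpn, frame) pairs, chained on collision.
TABLE_SIZE = 8

def hpt_insert(table, vpn, frame):
    table[vpn % TABLE_SIZE].append((vpn, frame))   # simple modulo hash

def hpt_lookup(table, vpn):
    for v, frame in table[vpn % TABLE_SIZE]:       # walk the chain
        if v == vpn:
            return frame
    return None                                    # no mapping: page fault

hpt = [[] for _ in range(TABLE_SIZE)]
hpt_insert(hpt, 5, 42)
hpt_insert(hpt, 13, 7)       # 13 % 8 == 5: collides with VPN 5, chained
print(hpt_lookup(hpt, 13))   # 7
print(hpt_lookup(hpt, 99))   # None
```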
Hashed Page Table
Hashed Page Tables
Advantages:
• Efficient for large address spaces: Works well for 64-bit systems where multi-level
page tables become too large.
• Reduces memory overhead: Instead of keeping a large page table, only entries for
mapped pages exist in the hash table.
• Handles sparse address spaces well: Ideal for workloads where only a small
fraction of virtual memory is actively used.
Disadvantages:
• Hash collisions cause extra lookups: If multiple VPNs hash to the same index, a
linked list traversal is required, slowing down lookup speed.
• Slower than direct indexing: Compared to hierarchical page tables, searching a
linked list is less efficient.
• Complex implementation: Requires maintaining a hashing function and linked
lists, increasing management overhead.
Inverted Page Table
• An Inverted Page Table (IPT) is a space-efficient page table structure
used in memory management.
• Unlike traditional page tables, which store an entry for each virtual page,
an IPT contains only one entry per physical page frame.
• This significantly reduces the memory required for page tables,
especially in large virtual address spaces (e.g., 64-bit systems).
Inverted Page Table
• Single Entry per Physical Frame:
• Each entry in the IPT corresponds to a single physical page frame.
• Instead of mapping VPN → PFN (Virtual Page Number → Physical Frame
Number) like traditional page tables, IPT stores:
• Virtual Page Number (VPN)
• Process ID (PID) (to differentiate between processes)
• Control bits (valid, dirty, protection, etc.)
• Translation Process:
• The virtual page number (VPN) and process ID (PID) are used to search the
IPT.
• A linear or hashed search is used to locate the matching entry.
• Once found, the corresponding physical frame number (PFN) is used to
generate the physical address.
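The translation process above can be sketched as follows (the page size and the table contents are illustrative; a real IPT would also hold control bits and use hashing instead of a linear scan):

```python
# Inverted page table: one entry per physical frame, searched by (pid, vpn).
PAGE_SIZE = 4096

# index = frame number; entry = (pid, vpn), or None if the frame is free
ipt = [(1, 7), (2, 7), (1, 3), None]

def translate(pid, vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    for frame, entry in enumerate(ipt):    # linear search; hashing speeds this up
        if entry == (pid, vpn):
            return frame * PAGE_SIZE + offset
    raise LookupError("page fault")

# The same VPN maps to different frames for different processes (PID matters):
print(translate(1, 7 * PAGE_SIZE + 100))   # frame 0 → 100
print(translate(2, 7 * PAGE_SIZE + 100))   # frame 1 → 4196
```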
Inverted Page Table
• Page Lookups using Hashing:
• To avoid slow linear searches, hashing techniques are often used for quick
lookup.
• The VPN + PID is hashed to locate the IPT entry quickly.
§ Example: A process of size 2 GB with page size = 512 B and page table
entry size = 4 B:
# of pages in the process = 2 GB / 512 B = 2^31 B / 2^9 B = 2^22
Page table size = 2^22 × 2^2 B = 2^24 B
Inverted Page Table Architecture
Example: Inverted Page Table
If the virtual address space supported is 2^64 bits,
• page size is 1K = 2^10 B
• size of the MM is 64K = 2^6 × 2^10 = 2^16 B
• size of a PTE is 2 B, and
• addressing is at the byte level
Calculate the size of the page table required for both standard and inverted
page tables.
Example: Inverted Page Table
Standard page table:
• Address space = 2^64 bits = 2^61 B
– 61 bits in byte addressing
• # of pages = 2^61 / 1K = 2^51
• Page Table Size = 2^51 × (PTE size) = 2^51 × 2 B = 2^52 B
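The slide works only the standard case; the inverted-table size follows from one entry per physical frame. A sketch computing both under the same parameters:

```python
# Standard vs. inverted page table size for the example above.
VA_BYTES  = 2 ** 61    # 2^64 bits = 2^61 B of virtual address space
MM_BYTES  = 2 ** 16    # 64 KB of main memory
PAGE_SIZE = 2 ** 10    # 1 KB pages/frames
PTE_SIZE  = 2          # bytes per entry

standard = (VA_BYTES // PAGE_SIZE) * PTE_SIZE   # one entry per virtual page
inverted = (MM_BYTES // PAGE_SIZE) * PTE_SIZE   # one entry per physical frame

print(standard == 2 ** 52)   # True  (2^51 pages × 2 B)
print(inverted)              # 128   (2^6 frames × 2 B)
```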
Inverted Page Table
Advantages:
• Memory Efficient: Instead of storing entries for all virtual pages, it stores only
one entry per physical frame.
• Good for Large Address Spaces: Ideal for 64-bit architectures, where traditional
page tables become too large.
• Improved Security: Since entries are mapped per physical frame and include a
PID, process isolation is enhanced.
Disadvantages:
• Slow Address Translation: Searching for a VPN-PID pair is slower than direct
indexing in a hierarchical page table.
• Complex Lookup Mechanism: Requires hashing or searching since the table
isn’t indexed by VPN.
• Difficult to Implement Page Sharing: Traditional page tables allow shared
memory between processes more easily.
Segmentation
• Another non-contiguous memory allocation technique.
• Segmentation is a memory management technique that divides a
process's address space into multiple variable-sized segments based on
logical divisions such as code, data, stack, heap, etc.
• Unlike paging, which divides memory into fixed-size blocks (pages),
segmentation is based on logical units that vary in size.
Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
User’s View of a Program
Logical View of Segmentation
[Figure: numbered segments of a process placed noncontiguously in physical
memory]
Segmentation
Key Concepts of Segmentation:
• Logical Division of Memory:
– A program is divided into multiple segments (e.g., code segment, data
segment, stack segment).
– Each segment has a name (or number) and a length.
• Segment Table:
• The OS maintains a segment table for each process, containing:
– Segment Number (ID): Unique identifier for each segment.
– Base Address: Starting physical address of the segment.
– Limit (Size): Length of the segment (to prevent accessing beyond allocated
memory).
Segmentation
Key Concepts of Segmentation
• Address Translation in Segmentation:
• A logical address consists of:
– Segment Number (S)
– Offset (D) within the segment
• The segment number (S) is used to look up the base address (B) from
the segment table.
• The physical address (PA) is computed as:
– PA = Base Address + D
• The OS ensures that D < Limit, preventing out-of-bounds access.
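This translation can be sketched in a few lines, with the D < Limit check raising a trap; the table values match the example segment table shown on a later slide:

```python
# Segment table: segment number → (base address, limit)
segment_table = {0: (5000, 1000), 1: (8000, 500), 2: (9000, 300)}

def translate(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:                         # OS-enforced bounds check
        raise MemoryError("trap: addressing error")
    return base + offset                        # PA = Base Address + D

print(translate(1, 100))   # 8100 (data segment, offset 100)
```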
Segmentation Hardware
Segmentation
Example of Segmentation
Segment No. Base Address Limit (Size)
0 (Code) 5000 1000
1 (Data) 8000 500
2 (Stack) 9000 300
Logical-to-Physical Address Translation in Segmentation
Example of Segmentation
Segment Table
• Segment table: stores the information about each segment of the process.
– It has two columns.
– The first column stores the base address of the segment in the main memory.
– The second column stores the length of the segment.
– Segment table is stored as a separate segment in the main memory.
– Segment table base register (STBR) stores the base address of the segment
table.
Segment Table
Illustration: Segmentation
Problem: Consider the following segment table
Segment No. Base Length
0 1219 700
1 2300 14
2 90 100
3 1327 580
4 1952 96
Which of the following logical address will produce trap addressing error?
A. 0, 430
B. 1, 11
C. 2, 100
D. 3, 425
E. 4, 95
Calculate the physical address if no trap is produced.
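The check for each option can be sketched as follows (values copied from the segment table above; an offset ≥ limit triggers the trap, otherwise PA = base + offset):

```python
# Segment table: segment number → (base, limit)
table = {0: (1219, 700), 1: (2300, 14), 2: (90, 100), 3: (1327, 580), 4: (1952, 96)}
queries = {"A": (0, 430), "B": (1, 11), "C": (2, 100), "D": (3, 425), "E": (4, 95)}

results = {}
for label, (seg, off) in queries.items():
    base, limit = table[seg]
    # offset must lie in [0, limit-1]; otherwise a trap addressing error
    results[label] = "trap" if off >= limit else base + off

print(results)   # only C traps (offset 100 equals limit 100)
```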
Illustration: Segmentation
In a segmentation scheme, the generated logical address consists of two parts
• Segment Number
• Segment Offset
We know
• Segment Offset must always lie in the range [0, limit-1].
• If the segment offset is greater than or equal to the limit of the segment,
– then a trap addressing error is produced.
Illustration: Segmentation
Option-A: 0, 430
Here,
Segment Number = 0
Segment Offset = 430
We have,
In the segment table, limit of segment 0 is 700.
Thus, segment offset must always lie in the range = [0, 700-1] = [0, 699]
Now, since the generated segment offset (430) lies in the range, the request
is valid.
Physical Address = 1219 + 430 = 1649

Option-B: 1, 11
We have,
In the segment table, limit of segment-1 is 14.
Thus, segment offset must always lie in the range = [0, 14-1] = [0, 13]
Now, since the generated segment offset (11) lies in the range, the request
is valid.
Physical Address = 2300 + 11 = 2311
Option-C: 2, 100
We have,
In the segment table, limit of segment-2 is 100.
Thus, segment offset must always lie in the range = [0, 100-1] = [0, 99]
Now, since the generated segment offset (100) does not lie in the range,
the request is invalid.
Therefore, a trap addressing error is produced.
Option-D: 3, 425
We have,
In the segment table, limit of segment-3 is 580.
Thus, segment offset must always lie in the range = [0, 580-1] = [0, 579]
Now, since the generated segment offset (425) lies in the range, the request
is valid.
Therefore, no trap will be produced.
Physical Address = 1327 + 425 = 1752
Illustration: Segmentation
Option-E: 4, 95
Here,
Segment Number = 4
Segment Offset = 95
We have,
In the segment table, limit of segment-4 is 96.
Thus, segment offset must always lie in the range = [0, 96-1] = [0, 95]
Now, since the generated segment offset (95) lies in the range, the request
is valid.
Physical Address = 1952 + 95 = 2047
Therefore, only option C produces a trap addressing error.
Pros and Cons
Disadvantages:
• External fragmentation: Over time, free memory is scattered, leading to
inefficient allocation.
• Variable-size allocation is complex: Requires compaction (memory
defragmentation) to optimize usage.
• Slower than paging: Due to variable sizes, memory lookup is more complex
compared to fixed-size pages.
Memory Management Techniques
Technique: Fixed Partitioning
Description: Main memory is divided into a number of static partitions at
system generation time. A process may be loaded into a partition of equal
or greater size.
Strengths: Simple to implement; little operating system overhead.
Weaknesses: Inefficient use of memory due to internal fragmentation;
maximum number of active processes is fixed.

Technique: Dynamic Partitioning
Description: Partitions are created dynamically, so that each process is
loaded into a partition of exactly the same size as that process.
Strengths: No internal fragmentation; more efficient use of main memory.
Weaknesses: Inefficient use of processor due to the need for compaction to
counter external fragmentation.

Technique: Simple Paging
Description: Main memory is divided into a number of equal-size frames.
Each process is divided into a number of equal-size pages of the same
length as frames. A process is loaded by loading all of its pages into
available, not necessarily contiguous, frames.
Strengths: No external fragmentation.
Weaknesses: A small amount of internal fragmentation.
Memory Management Techniques
Technique: Simple Segmentation
Description: Each process is divided into a number of segments. A process
is loaded by loading all of its segments into dynamic partitions that need
not be contiguous.
Strengths: No internal fragmentation; improved memory utilization and
reduced overhead compared to dynamic partitioning.
Weaknesses: External fragmentation.

Technique: Virtual-Memory Paging
Description: As with simple paging, except that it is not necessary to load
all of the pages of a process. Nonresident pages that are needed are
brought in later automatically.
Strengths: No external fragmentation; higher degree of multiprogramming;
large virtual address space.
Weaknesses: Overhead of complex memory management.

Technique: Virtual-Memory Segmentation
Description: As with simple segmentation, except that it is not necessary
to load all of the segments of a process. Nonresident segments that are
needed are brought in later automatically.
Strengths: No internal fragmentation; higher degree of multiprogramming;
large virtual address space; protection and sharing support.
Weaknesses: Overhead of complex memory management.
Memory Protection
• Memory protection in paging systems is implemented by associating
protection bits with each entry in the page table.
• These bits define the access permissions (e.g., read, write, execute) for
processes, ensuring that a process does not access memory that it is not
authorized to use.
Memory Protection
• Memory protection is implemented by associating protection bits with
each frame
Valid (v) or Invalid (i) Bit in a Page Table
Memory Protection
• Page Table Entries (PTEs) Contain Protection Information
• Each page table entry (PTE) includes:
– Frame Number (Physical Page Location)
– Protection Bits (Access Permissions)
– Valid/Invalid Bit (Indicates if page is allocated)
– Dirty Bit (Indicates if page was modified)
– Reference Bit (Used for page replacement algorithms)
• Types of Protection Bits
– Read (R) → If set, the process can read from the page.
– Write (W) → If set, the process can write to the page.
– Execute (X) → If set, the page contains executable instructions.
– Kernel/User Mode Bit → Restricts access based on privilege level:
• Kernel Mode: OS processes can access.
• User Mode: User processes can access only allowed pages.
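A sketch of how such protection bits might be checked on each access (the flag encoding and table contents are illustrative, not an actual OS implementation):

```python
# Protection bits as flags in a page table entry.
R, W, X = 0b001, 0b010, 0b100

# vpn → (frame, protection bits, valid bit)
page_table = {0: (5, R | X, True),   # code page: read + execute
              1: (9, R | W, True)}   # data page: read + write

def check_access(vpn, need):
    entry = page_table.get(vpn)
    if entry is None or not entry[2]:          # valid/invalid bit check
        raise MemoryError("invalid page")
    frame, prot, _ = entry
    if need & prot != need:                    # requested rights not granted
        raise PermissionError("protection fault")
    return frame

print(check_access(0, X))   # 5: executing the code page is allowed
```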
Shared Pages
• Shared code
– One copy of read-only (reentrant) code shared among processes
(e.g., text editors, compilers, window systems).
– Shared code must appear in same location in the logical address
space of all processes
Shared Pages Example