Slide 7 OS Memory Management 2025

Memory management is a critical function of operating systems that involves allocating and deallocating memory to running programs, ensuring efficient use and preventing issues like memory leaks and fragmentation. It consists of hardware components (like RAM and cache), operating system functions (such as allocation, protection, and garbage collection), and application-level memory management. Key techniques include fixed and dynamic partitioning, with various placement algorithms (best-fit, first-fit, worst-fit) to optimize memory allocation.

Operating Systems

Memory Management

Alok Kumar Jagadev


Background

[Figure: a process's program image resides in main memory alongside the
operating system; the CPU (with its registers and cache) fetches instructions
and data from the process image, and main memory is backed by disk.]

2
Memory Management
• Memory management is the process of controlling and coordinating
computer memory
– assigning blocks to various running programs and
– managing the deallocation of memory when it is no longer needed

• It ensures efficient use of system memory and prevents memory
leaks, fragmentation, and crashes.

• Memory management requires three key components:
• Hardware
• Operating System
• Programs & Applications

3
Memory Management
• Hardware (Physical Memory):
• RAM (Random Access Memory): Primary memory where active
processes and data reside.
• Cache Memory: Small, high-speed memory that stores frequently
used data for quick access.
• Registers: Ultra-fast storage inside the CPU for immediate processing.
• Hard Disk (Virtual Memory): Acts as an extension of RAM when
physical memory is full (swap space).

4
Memory Management
• Operating System (Memory Manager):
• Allocation & Deallocation: Assigns memory to processes and
reclaims it when no longer needed.
• Memory Protection: Prevents processes from accessing each other's
memory space.
• Paging & Segmentation: Techniques to efficiently manage memory.
• Garbage Collection: Frees unused memory.

• Programs & Applications:
• Efficient Usage: Programs request and release memory as needed.
• Dynamic Memory Management: Uses heap memory via malloc(),
calloc(), free() in C.
• Memory Leaks Prevention: Ensures allocated memory is properly
freed.
5
Memory Management
• In most memory management schemes,
– kernel occupies some fixed portion of main memory and
– rest is shared by multiple processes

6
Memory Management
• In order to manage memory effectively the OS must have
– Memory allocation policies
– Methods to track the status of memory locations (free or
allocated)
– Policies for preempting memory from one process to allocate
to another

7
Memory Management Requirements
• Memory management is intended to satisfy the following
requirements:
− Relocation
− Protection
− Sharing
− Logical organization
− Physical organization

8
Requirements: Relocation
• When programs are loaded into memory, they might not always be
placed in the same location.
• Available memory is generally shared among a number of processes
• Programmer does not know where the program will be placed in
memory when it is executed
• Active processes need to be swapped in and out of memory
in order to maximize processor utilization
• The OS ensures that a process can be moved (relocated) in
memory without affecting execution.
• Uses base and limit registers or dynamic address translation (e.g.,
page tables).

9
Requirements: Protection

• When multiple programs are in memory at the same time, there is a
chance that one program may write to or access the address space of
another program.
• Prevents unauthorized access to a process’s memory by another
process.
• Implemented using access control bits, memory segmentation, and
paging.
• Uses hardware mechanisms like base-limit registers and software
techniques like address translation.

10
Requirements: Sharing

• Allows several processes to access a common portion of main memory
without compromising protection.
• Achieved through shared memory segments and virtual memory
mapping.
• Example: Multiple programs sharing read-only code (e.g., shared
libraries).

11
Requirements: Logical Organization

• Refers to how memory is structured and managed for efficient
program execution.
• Since programs consist of various components (code, data, stack,
heap), the OS must organize memory logically to ensure smooth
execution.
• Divides memory into logical segments based on program
structure (e.g., code, data, stack).
• Each segment has a base address and a limit (size).
• Supports modular programming by allowing different segments
to be shared or protected separately.

12
Requirements: Physical Organization
• Manages how memory is physically stored (RAM, cache, disk).
• Uses hierarchical memory management:
– registers → cache → RAM → disk.
• Virtual memory allows processes to use more memory than
physically available by swapping data between RAM and disk.

13
Simple Memory Management

§ Simple memory management techniques:
§ fixed partitioning
§ dynamic partitioning
§ simple paging
§ simple segmentation
§ Simple memory management techniques are not used in modern
operating systems.
§ But they provide the foundation for understanding virtual memory
and advanced memory management techniques.

14
Memory Management

• Memory Management
– responsible for allocating and managing computer’s main memory
– keeps track of the status of each memory location, either allocated or
free to ensure effective and efficient use of Primary Memory
• There are two Memory Management Techniques:
– Contiguous, and
– Non-Contiguous

15
Memory Management

• In the Contiguous Technique, an executing process must be loaded
entirely in main memory.
• Contiguous Technique can be divided into:
• Fixed (or static) partitioning
• Variable (or dynamic) partitioning

16
Fixed Partitions

• Memory is divided into non-overlapping partitions at system
startup.
• Partitions can be
– equal-sized (simpler but inefficient) or
– unequal-sized (better memory utilization)
• Process Allocation Rules:
– A process can be loaded into a partition if its size ≤ partition size.
– If all partitions are occupied, the OS may use swapping to remove a
process and load a new one.
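The allocation rule above can be sketched as a small simulation (the function name `allocate_fixed` and the `(size, occupied)` representation are illustrative, not from the slides):

```python
def allocate_fixed(partitions, process_size):
    """Return the index of the first free partition that can hold the
    process, or None if no partition fits (the process must wait,
    or the OS must swap something out)."""
    for i, (size, occupied) in enumerate(partitions):
        if not occupied and process_size <= size:
            return i
    return None

# Partitions as (size_in_KB, occupied) pairs.
parts = [(100, False), (300, False), (500, False)]
idx = allocate_fixed(parts, 250)   # fits in the 300 KB partition
```

Note that a 250 KB process occupying a 300 KB partition wastes 50 KB inside the partition, which is exactly the internal fragmentation discussed on the next slides.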

17
Fixed Partitions

• Two possibilities:
– Equal-size partitioning
– Unequal-size partitioning

18
Fixed Partitions

Advantages:
• Simple to Implement: Easy to design and manage since partitions
are predefined at system startup.
• Little OS Overhead: The operating system requires minimal effort
to track and allocate partitions.
• Fast Process Allocation: The OS quickly assigns a process to a free
partition without complex algorithms.
• Efficient for Small, Fixed Workloads: Works well in systems
where the number and size of processes are predictable.
• Supports Multiprogramming: Multiple processes can run
simultaneously, improving CPU utilization.

19
Fixed Partitions

Disadvantages:
• Process Size Limitation:
– A process larger than the partition cannot be loaded.
– Solution: Overlays (splitting programs into smaller sections loaded
when needed).
• Inefficient Memory Utilization (Internal Fragmentation):
– A small process still occupies an entire partition, wasting memory.
• External Fragmentation:
– If partitions are unequal-sized, small processes may fill large partitions,
leaving gaps.
• Fixed Degree of Multiprogramming:
– The number of partitions limits how many processes can run
concurrently.
20
Unequal Size Fixed Partitions

• In unequal-size fixed partitioning, memory is divided into partitions
of different sizes to reduce wasted space and improve memory
utilization.
• Processes are assigned to the smallest available partition that can
accommodate them.
There are two ways to assign processes to partitions:
• Use multiple queues
• Use single queue

21
Unequal Size Fixed Partitions

22
Unequal Size Partitions: Multiple queues

Partition-Specific Queues.
• Each partition has its own queue, where processes wait for a
partition of matching size.
• Advantage: Minimizes wasted memory by ensuring each process
is assigned to the best-fitting partition.
• Disadvantage: If a partition is full, smaller processes in other
queues may still have to wait, even if a larger partition is free.

23
Unequal Size Partitions: Single queue

Global Queue.
• All processes wait in a single queue, and the first available partition
is assigned.
• Advantage: Ensures better load balancing because all partitions are
utilized.
• Disadvantage: Small processes may be placed in large partitions,
causing internal fragmentation.

24
Dynamic Partitioning

• To address the inefficiencies of fixed partitioning, dynamic
partitioning was developed.
• It provides variable-length partitions, allowing processes to use
only the memory they need.

Key Features of Dynamic Partitioning:
• Partitions are of variable length and number → Adjusted based on
process size.
• Each process is allocated exactly the memory it requires →
Reduces internal fragmentation.
• External Fragmentation occurs → As processes terminate, memory
gaps (holes) form.
• Requires Compaction → The OS periodically shifts processes to
combine free memory into a single large block.
25
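The compaction step can be sketched as follows (a minimal illustration; the `compact` helper and the `(name, size)` block representation are assumptions, not from the slides):

```python
def compact(blocks):
    """Slide all allocated blocks together and merge every hole into one.
    blocks: list of (name, size_kb); name is None for a hole."""
    allocated = [b for b in blocks if b[0] is not None]
    free = sum(size for name, size in blocks if name is None)
    # All allocated blocks keep their order; one big hole goes at the end.
    return allocated + ([(None, free)] if free else [])

# Two scattered holes (96 KB and 64 KB) become a single 160 KB block.
mem = [("P1", 320), (None, 96), ("P3", 288), (None, 64)]
compacted = compact(mem)
```

This is why compaction is CPU-intensive: in a real system, "sliding blocks together" means physically copying each process's memory and updating its relocation register.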
Example: Dynamic Partitioning

[Figure: with a 128 K operating system resident, 896 K is initially free.
Process 1 (320 K) is loaded, leaving 576 K free; then Process 2 (224 K)
is loaded, leaving 352 K free.]

26
Example: Dynamic Partitioning

[Figure: Process 3 (288 K) is loaded into the 352 K hole, leaving 64 K free.
Process 2 (224 K) is then swapped out, and Process 4 (128 K) is loaded into
the resulting 224 K hole, leaving 96 K free.]

27
Example: Dynamic Partitioning

[Figure: Process 1 (320 K) is swapped out, leaving a 320 K hole. Process 2
(224 K) is then swapped back into that hole, leaving 96 K free next to
Process 4 (128 K), with Process 3 (288 K) and the 64 K hole unchanged.]

28
Dynamic Partitioning

• Advantages
– Efficient Memory Utilization: Each process gets exactly the
memory it needs, reducing internal fragmentation.
– Flexible Partitioning: The number and size of partitions adjust
dynamically based on process requirements.
– Better Multiprogramming: More processes can fit in memory
compared to fixed partitioning.
– No Wasted Space Due to Fixed Partitions: Unlike fixed
partitioning, there is no pre-defined partition size that might be
too large or too small.

29
Dynamic Partitioning

• Disadvantages
– External Fragmentation: Over time, holes (gaps) appear in
memory as processes terminate, making it harder to allocate
large processes.
– Compaction Overhead: To eliminate fragmentation, the OS must
move processes to merge free spaces (compaction), which is
CPU-intensive and time-consuming.
– Increased OS Overhead: Requires dynamic memory management
techniques like allocation tracking and relocation.

30
Placement Algorithm
• Used to decide which free block to allocate a process
• Goal: to reduce usage of compaction (time consuming)
• Possible algorithms:
– Best-fit
– First-fit
– Worst-fit

31
Best-fit Algorithm
• The Best-Fit algorithm finds the smallest available memory block
that is large enough to accommodate a process.

How Best-Fit Works
• Scan the list of free memory blocks (holes).
• Find the smallest block that is big enough to fit the process.
• Allocate the process to that block.
• If extra space remains, create a new free block (hole).
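A minimal sketch of these steps (the `best_fit` helper and the list-of-hole-sizes representation are illustrative assumptions):

```python
def best_fit(holes, size):
    """Index of the smallest hole that can still hold `size`, or None."""
    best = None
    for i, h in enumerate(holes):
        if h >= size and (best is None or h < holes[best]):
            best = i
    return best

# Holes of 200, 400, 600, 500, 300, 250 KB; a 220 KB request
# picks the 250 KB hole, the tightest fit.
choice = best_fit([200, 400, 600, 500, 300, 250], 220)
```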

32
Best-fit Algorithm
Advantages:
• Minimizes wasted memory (reduces fragmentation) → Assigns
the tightest-fitting block to avoid large unused gaps.
• More efficient memory usage than First-Fit or Worst-Fit in some
cases.

Disadvantages:
• Can cause excessive external fragmentation → Small leftover
blocks may be too small for future processes.
• Slower allocation → Requires scanning the entire memory to find
the smallest suitable block.

33
First-fit Algorithm
• The First-Fit algorithm scans the memory from the beginning and
assigns the first available block that is large enough to
accommodate a process.

How First-Fit Works
• Scan the list of free memory blocks (from the beginning).
• Find the first block that is large enough to fit the process.
• Allocate the process to that block.
• If extra space remains, create a new free block (hole).
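The same sketch for first-fit shows why it is faster: the scan stops at the first workable hole instead of examining them all (`first_fit` is an illustrative name):

```python
def first_fit(holes, size):
    """Index of the first hole (in address order) that can hold `size`."""
    for i, h in enumerate(holes):
        if h >= size:
            return i          # stop immediately; no full scan needed
    return None

# A 350 KB request skips the 200 KB hole and takes the 400 KB one.
choice = first_fit([200, 400, 600, 500, 300, 250], 350)
```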

34
First-fit Algorithm
Advantages:
• Faster allocation → Stops searching as soon as it finds a suitable
block (better than Best-Fit in speed).
• Less CPU overhead → Does not scan the entire memory, unlike
Best-Fit.

Disadvantages:
• Leads to external fragmentation → Large free spaces are broken
into smaller unusable fragments.
• Can cause inefficient memory usage → Leaves larger free blocks
unused while creating many small holes.

35
Worst-fit Algorithm
• The Worst-Fit algorithm assigns a process to the largest available
memory block to leave the biggest possible free space for future
allocations.

How Worst-Fit Works
• Scan the list of free memory blocks (holes).
• Find the largest block available.
• Allocate the process to that block.
• If extra space remains, create a new free block (hole).
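And the worst-fit variant, which differs from best-fit only in the comparison direction (`worst_fit` is an illustrative name):

```python
def worst_fit(holes, size):
    """Index of the largest hole that can hold `size`, or None."""
    worst = None
    for i, h in enumerate(holes):
        if h >= size and (worst is None or h > holes[worst]):
            worst = i
    return worst

# A 350 KB request takes the 600 KB hole, leaving the biggest leftover.
choice = worst_fit([200, 400, 600, 500, 300, 250], 350)
```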

36
Worst-fit Algorithm
Advantages:
• Reduces external fragmentation → Leaves larger free blocks that
may accommodate future processes.
• Good for large processes → Ensures larger blocks remain
available.

Disadvantages:
• Can waste large memory chunks → Large blocks get broken into
smaller, inefficient pieces.
• Slower allocation → Requires scanning the entire memory to find
the largest block.

37
Example: Placement Algorithm

• Consider six memory partitions of size 200 KB, 400 KB, 600 KB,
500 KB, 300 KB, and 250 KB.
– How would the first-fit, best-fit, and worst-fit algorithms use
these partitions to allocate four processes of sizes 350 KB, 220 KB,
450 KB and 480 KB, in that order?
– Which algorithm makes the most efficient use of memory?
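One way to check the answers worked out on the following slides is to simulate all three policies over the fixed partitions (helper names are illustrative; in fixed partitioning a chosen partition is occupied whole, so a placed process simply removes its partition from the free list):

```python
def simulate(partitions, processes, pick):
    """Place processes into fixed partitions using policy `pick`.
    Returns {process_name: partition_index}; unplaced processes wait."""
    free = list(partitions)        # remaining sizes; None marks an occupied slot
    placement = {}
    for name, need in processes:
        choice = pick(free, need)
        if choice is not None:
            placement[name] = choice
            free[choice] = None    # fixed partitioning: partition taken whole
    return placement

def first_fit(free, need):
    return next((i for i, h in enumerate(free) if h is not None and h >= need), None)

def best_fit(free, need):
    cands = [i for i, h in enumerate(free) if h is not None and h >= need]
    return min(cands, key=lambda i: free[i], default=None)

def worst_fit(free, need):
    cands = [i for i, h in enumerate(free) if h is not None and h >= need]
    return max(cands, key=lambda i: free[i], default=None)

parts = [200, 400, 600, 500, 300, 250]
procs = [("P1", 350), ("P2", 220), ("P3", 450), ("P4", 480)]
```

Running `simulate(parts, procs, best_fit)` places all four processes, while first-fit strands P4 and worst-fit strands both P3 and P4, matching the slide-by-slide solutions below.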

38
Solution: Placement Algorithm
First fit Algorithm: Fixed free Partitions

350 KB, 220 KB, 450 KB and 480 KB (in order)

• 400 KB is the first free partition, in which the process P1 (350 KB) can be
stored
• (400-350) KB = 50 KB is the internal fragmentation
39
Solution: Placement Algorithm
First fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• 600 KB is the first free partition, in which the process P2 (220 KB) can be
stored
• (600-220) KB = 380 KB is the internal fragmentation
40
Solution: Placement Algorithm
First fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• 500 KB is the first free partition, in which the process P3 (450 KB) can be
stored
• (500-450) KB = 50 KB is the internal fragmentation
41
Solution: Placement Algorithm
First fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• There is no free partition that can accommodate the process P4 (480 KB).
• So P4 cannot be loaded.

• So the total internal fragmentation = 50 KB + 380 KB + 50 KB = 480 KB

• Total free space available = 200 KB + 300 KB + 250 KB = 750 KB, which
is more than the size of P4 (480 KB), but not available contiguously.
• So the external fragmentation = (200+300+250) KB = 750 KB
42
Solution: Placement Algorithm
Best fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Process P1 (350 KB) can be stored in 400 KB, 600 KB & 500 KB
• But out of these three partitions, 400 KB is the smallest partition which can
accommodate the process P1.
• So, (400-350) KB = 50 KB is the internal fragmentation
43
Solution: Placement Algorithm
Best fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Process P2 (220 KB) can be stored in 600 KB, 500 KB, 300 KB & 250 KB.
• But out of these four partitions, 250 KB is the smallest partition which can
accommodate the process P2.
• So, (250-220) KB = 30 KB is the internal fragmentation.
44
Solution: Placement Algorithm
Best fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Process P3 (450 KB) can be stored in 600 KB & 500 KB.


• But out of these two partitions, 500 KB is the smallest partition which can
accommodate the process P3.
• So, (500-450) KB = 50 KB is the internal fragmentation.
45
Solution: Placement Algorithm
Best fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Process P4 (480 KB) can be stored in 600 KB partition.


• So, (600-480) KB = 120 KB is the internal fragmentation.
• The total internal fragmentation = 50 KB + 120 KB + 50 KB + 30 KB =
250 KB.
• Since all processes have been accommodated, there is no external
fragmentation.
46
Solution: Placement Algorithm
Best fit Algorithm:

• The best-fit algorithm works best for the fixed partitioning method.
• Here, the smallest partition that is big enough to accommodate the
process is chosen.
• So, whichever partition is used, only a small amount of remaining
space is wasted and never used again.
• This leads to less internal fragmentation.

47
Solution: Placement Algorithm
Worst fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Process P1 (350 KB) can be stored in 400 KB, 600 KB & 500 KB.
• But out of these three partitions, 600 KB is the largest partition which can
accommodate the process P1.
• So, (600-350) KB = 250 KB is the internal fragmentation.
48
Solution: Placement Algorithm
Worst fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Process P2 (220 KB) can be stored in 400 KB, 500 KB, 300 KB & 250 KB.
• But out of these four partitions, 500 KB is the largest partition which can
accommodate the process P2.
• So, (500-220) KB = 280 KB is the internal fragmentation.
49
Solution: Placement Algorithm
Worst fit Algorithm:

350 KB, 220 KB, 450 KB and 480 KB (in order)

• Now, processes P3 (450 KB) & P4 (480 KB) cannot be accommodated in
any of the partitions.
• Total internal fragmentation = 250 KB + 280 KB = 530 KB.
• Total request = (450+480) KB = 930 KB.
• Total available free space = (200+400+300+250) KB = 1150 KB
• External fragmentation: the total free space (1150 KB) is more than the
required space (930 KB), but it is not available in a contiguous manner.
50
Solution: Placement Algorithm
Worst fit Algorithm:

• Here, for any process, it is going to choose the largest partition which can
accommodate the process.
• So, the large remaining space is wasted.
• This leads to large internal fragmentation.
• It also performs the worst of the three algorithms.

51
Example: Placement Algorithm
• Consider the following snap shot containing 150 KB and 350 KB free
memory partitions (dynamic partitions).
– How would the first-fit, best-fit, and worst-fit algorithms use these partitions to
allocate four processes of sizes 300 KB, 25 KB, 125 KB and 50 KB, in that order?
– Which algorithm makes the most efficient use of memory?
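Unlike the fixed-partition example above, a dynamic partition shrinks to exactly the leftover size when a process is placed in it. A minimal first-fit sketch over variable-length holes (`alloc_dynamic` is an illustrative name):

```python
def alloc_dynamic(holes, need):
    """First-fit over dynamic partitions: the chosen hole is split, so
    only the leftover remains free. Mutates `holes`; returns the index
    of the hole used, or None if nothing fits."""
    for i, h in enumerate(holes):
        if h >= need:
            holes[i] = h - need        # leftover becomes a smaller hole
            if holes[i] == 0:
                holes.pop(i)           # exact fit: the hole disappears
            return i
    return None

# The exercise's free partitions: 150 KB and 350 KB.
holes = [150, 350]
for p in (300, 25, 125, 50):
    alloc_dynamic(holes, p)
```

After the loop every hole has been consumed: first-fit places all four processes, matching the slide solution that follows.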

52
Solution: Placement Algorithm
First fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• 350 KB is the first free partition, in which the process P1 (300 KB) can be
stored
• After allocation of P1, remaining 50 KB free partition is available which
can be used by other processes.

53
Solution: Placement Algorithm
First fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• 150 KB is the first free partition, in which the process P2 (25 KB) can be
stored
• After allocation of P2, remaining 125 KB free partition is available which
can be used by other processes.
54
Solution: Placement Algorithm
First fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• 125 KB is the first free partition that can accommodate process P3
(125 KB).
• 50 KB is the first free partition that can accommodate process P4
(50 KB).
55
Solution: Placement Algorithm
Best fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• 350 KB is used by the process P1 (300 KB).
• After allocation of P1, the remaining 50 KB free partition is available which
can be used by other processes.

56
Solution: Placement Algorithm
Best fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• Among the 150 KB & 50 KB free partitions, 50 KB is the smallest partition
large enough to accommodate process P2 (25 KB).
• After allocation of P2, the remaining 25 KB free partition is available which
can be used by other processes.

57
Solution: Placement Algorithm
Best fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• Process P3 (125 KB) will be accommodated in the 150 KB free partition.
• After allocation of P3, the remaining 25 KB free partition is available which
can be used by other processes.

58
Solution: Placement Algorithm
Best fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)


• Now, process P4 (50 KB) cannot be accommodated in memory because
there is no contiguous 50 KB free partition available.
• This is external fragmentation: the total free space (25 KB + 25 KB =
50 KB) equals the requirement, but it is not contiguous.

59
Solution: Placement Algorithm
Worst fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• Of the 150 KB & 350 KB free partitions, 350 KB is the only one large
enough for the process P1 (300 KB), so P1 is stored there.
• After allocation of P1, remaining 50 KB free partition is available which
can be used by other processes.
60
Solution: Placement Algorithm
Worst fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• Out of the 150 KB & 50 KB free partitions, 150 KB is the largest free
partition, in which the process P2 (25 KB) can be stored.
• After allocation of P2, the remaining 125 KB free partition is available
which can be used by other processes.

61
Solution: Placement Algorithm
Worst fit Algorithm:

300 KB, 25 KB, 125 KB and 50 KB (in order)

• 125 KB can accommodate process P3 (125 KB)
• 50 KB can accommodate process P4 (50 KB).
• All processes can be loaded in main memory for execution.

62
Example: Placement Algorithm

• Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB,
600 KB (in order).
– How would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)?
– Which algorithm makes the most efficient use of memory?

63
Solution – Placement Algorithm
First-fit: 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
212 KB is put in 500 KB partition (100 KB, 500-212=288 KB, 200 KB,
300 KB, 600 KB)
417 KB is put in 600 KB partition (100 KB, 288 KB, 200 KB, 300 KB,
600-417=183 KB)
112 KB is put in 288 KB partition (100 KB, 288-112=176 KB, 200 KB,
300 KB, 183 KB)
426 KB must wait

64
Solution – Placement Algorithm
Best-fit: 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
212KB is put in 300KB partition (100 KB, 500 KB, 200 KB, 300-
212=88 KB, 600 KB)
417KB is put in 500KB partition (100 KB, 500-417=83 KB, 200 KB, 88
KB, 600 KB)
112KB is put in 200KB partition (100 KB, 83 KB, 200-112=88 KB, 88
KB, 600 KB)
426KB is put in 600KB partition (100 KB, 83 KB, 88 KB, 88 KB, 600-
426=174 KB)

65
Solution – Placement Algorithm
Worst-fit: 100 KB, 500 KB, 200 KB, 300 KB, 600 KB
212 KB is put in 600 KB partition (100 KB, 500 KB, 200 KB, 300 KB,
600-212=388 KB)
417 KB is put in 500 KB partition (100 KB, 500-417=83 KB, 200 KB,
300 KB, 388 KB)
112 KB is put in 388 KB partition (100 KB, 83 KB, 200 KB, 300 KB,
388-112=276 KB)
426 KB must wait

In this example, best-fit turns out to be the best.

66
Address Types

• Logical address
• Physical address
• Relative address

67
Logical Address Space
Logical Address (Virtual Address)
• The address generated by the CPU when a program runs.
• It is independent of physical memory and needs to be translated
before accessing RAM.
• Used by the user programs and managed by the OS.
• Generated by: The CPU.
• Converted to: Physical address by the Memory Management Unit
(MMU).
• Example: If a program requests memory at address 0x0020, the
CPU generates this as a logical address, which must be translated to
a physical address.

68
Physical Address Space
Physical Address
• The actual address in RAM (main memory) where data is stored.
• Used by the hardware (memory unit) to fetch and store data.
• Generated after address translation from a logical address.
• Example: A logical address 0x0020 might be mapped to a physical
address 0xA020 in RAM.
• Physical Address Space: set of all physical addresses
corresponding to the logical addresses

69
Mapping of Logical address to Physical
Address
• The set of logical addresses used by a program is called the logical
address space — for example, [0, max_address].
• The logical address space has to be mapped somewhere in
physical memory.

[Figure: the program's logical address space [0, logic_max] is mapped into
main memory [0, phy_max] starting at the base address, with the limit
register giving its extent.]
70
Memory-Management Unit (MMU)

• The MMU is usually integrated into the processor
• In some systems, it occupies a separate chip
• Maps logical address to physical address

71
Comparison

Parameter: Logical Address vs Physical Address
• Basic: a logical (virtual) address does not exist physically; a physical
address identifies an actual location of an instruction or data item in memory.
• Address space: the set of all logical addresses generated by a program,
versus the set of all physical addresses corresponding to those logical
addresses.
• Visibility: the user can view the logical address of a program, but never
views the physical address.
• Generation: a logical address is generated by the CPU; a physical address
is computed by the MMU.
• Access: the CPU uses logical addresses as references to access physical
memory locations; the user can access a physical address only indirectly.
72
Relative Address

• The address of a location relative to a known reference point
(such as the beginning of a program).
• Used when a program is relocatable in memory.
• Helps in dynamic loading and compaction.
• Example: If a program starts at 0x1000 and accesses 0x1020, the
relative address is 0x0020 (offset from the base address).

73
Base and Limit Registers

Base Register (Starting Address Register)
• Stores the starting (lowest) physical address of a process in memory.
• Used to relocate processes in dynamic memory allocation.
• Ensures that a process accesses memory only within its assigned
space.
• Example: If a process is loaded into memory starting at physical
address 4000, the Base Register = 4000.
Limit Register (Size of Allocated Memory)
• Stores the size (range) of a process's allocated memory.
• Prevents a process from accessing memory beyond its allocated
space.
• Used to detect out-of-bounds memory accesses (helps in protection).
• Example: If a process has been allocated 1200 bytes, the Limit Register =
1200.
74
Base and Limit Registers

75
Base and Limit Registers

How Base & Limit Registers Work Together
• When a process tries to access memory at a logical address (LA):
• The MMU adds the Base Register value to LA to get the Physical
Address (PA).
• The hardware checks whether PA is within the allocated range:
• Valid: Base Register ≤ PA < Base Register + Limit Register →
access allowed
• Invalid: PA is out of range → memory violation (trap/exception)

76
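The check-and-relocate logic above can be sketched as a simplified model (the `translate` helper and the chosen addresses are illustrative; checking the logical address against the limit before adding the base is equivalent to checking PA against the base–limit range):

```python
def translate(logical, base, limit):
    """MMU-style relocation with protection: the logical address must
    lie within [0, limit); otherwise the hardware traps to the OS."""
    if not (0 <= logical < limit):
        raise MemoryError("addressing violation: trap to OS")
    return base + logical

# A logical address 0x0020 in a process based at 0x1000 with a
# 0x0400-byte allocation maps to physical address 0x1020.
pa = translate(0x0020, base=0x1000, limit=0x0400)
```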
Hardware Address Protection with Base
and Limit Registers

77
Address Binding
• Address Binding is the process of mapping logical addresses to
physical addresses.
• This can happen at different stages:
– Compile-Time Binding
– Load-Time Binding
– Execution-Time Binding (Dynamic Binding)

78
Binding of Instructions and Data to
Memory
Compile-Time Binding
• Happens during compilation if the memory location is known in
advance.
• The absolute physical address is assigned to variables and
instructions.
• No relocation is possible after compilation.
• Used in embedded systems or simple programs with fixed memory
locations.
• Example: If a variable is assigned to memory address 1000 during
compilation, it will always use that address.
• Limitation: The program must always load into the same memory
location.

79
Binding of Instructions and Data to
Memory
Load time binding
• Occurs when the program is loaded into memory.
• The logical addresses are converted to physical addresses by the
loader.
• Relocation is possible → The OS can load the program into any
available memory block.
• Example: If a program is compiled with relative addresses, the
loader assigns actual physical addresses when the program is
loaded into RAM.
• Advantage: Allows more flexibility than compile-time binding.
• Limitation: Once loaded, the program cannot change its memory
location.
80
Binding of Instructions and Data to
Memory
Execution time
• Happens during program execution.
• Logical addresses are converted to physical addresses dynamically
using the Memory Management Unit (MMU).
• Used in modern operating systems with virtual memory (paging,
segmentation).
• Example: In a system with paging, a logical address like 0x0020 is
translated dynamically to a physical address in RAM.
• Advantage:
– Supports dynamic relocation (processes can move in memory).
– Enables efficient memory utilization (swapping, virtual memory).
• Limitation:
– Requires hardware support (MMU, page tables).
– Slower than compile-time and load-time binding due to real-time
address translation.
81
Multistep Processing of a User Program
• User programs go through several
steps before being run.
• Program components do not
necessarily know where they will
be loaded in memory.
• Memory deals with absolute
addresses.
• Logical addresses need to be
bound to physical addresses at
some point.

82
Dynamic Loading

Dynamic Loading
§ Dynamic Loading is a memory management technique where
program modules are loaded into memory only when needed
instead of loading the entire program at once.
§ This helps in efficient memory utilization and reduces initial load
time.

§ Used in large programs where loading all modules at once may
waste memory.
§ Helps in modular programming, where different parts of a program
are loaded dynamically.
§ Reduces RAM usage by keeping only the necessary parts of a
program in memory.
83
Dynamic Loading

How Dynamic Loading Works
§ Only the main module (core program) is loaded into memory
initially.
§ When the program needs a specific function or module, it requests
the OS to load it dynamically.
§ The OS loads the required module into memory, allowing
execution.
§ Once execution is complete, the module can be unloaded if no
longer needed.
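As a user-space analogy, Python's `importlib` loads a module only at the moment it is first needed, much like an OS loading a program module on demand (the `call_lazily` wrapper is an illustrative assumption, not part of any standard API):

```python
import importlib

def call_lazily(module_name, func_name, *args):
    """Load a module only when one of its functions is actually needed,
    mirroring dynamic loading: the core program starts without it."""
    mod = importlib.import_module(module_name)  # loaded on first use, cached after
    return getattr(mod, func_name)(*args)

# The math module is imported only when this call runs.
result = call_lazily("math", "sqrt", 9.0)
```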

84
Hardware Support for Relocation
and Limit Registers
• Relocation registers used to protect user processes from each other, and
from changing operating-system code and data
• Relocation register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical
address must be less than the limit register
• On a context switch, the OS reloads the relocation and limit registers
with the new process's values
• The MMU maps each logical address dynamically by adding the
relocation register

85
Swapping

• Swapping is a memory management technique where a process is
temporarily moved from main memory (RAM) to secondary
storage (disk) and later brought back into memory for execution.
• This allows the OS to execute more processes than the available
physical memory can hold at once.

Key Features of Swapping:
• Used in multiprogramming systems to maximize CPU utilization.
• Helps the OS manage memory constraints efficiently.
• Enables execution of larger programs than available RAM.

86
Swapping

How Swapping Works
• The OS selects a process from memory that is idle or low priority.
• The process is copied from RAM to disk (swap space), freeing up
memory.
• A new process is loaded into the freed memory space and executed.
• When the swapped-out process is needed again, it is brought back
into RAM.

87
Swapping

Advantages of Swapping
• Increases CPU Utilization – Keeps the CPU busy by ensuring that
ready processes are always in memory.
• Supports Multiprogramming – More processes can be managed even
with limited RAM.
• Allows Execution of Large Programs – Programs larger than physical
memory can still run.
Disadvantages of Swapping
• High Disk I/O Overhead – Frequent swapping increases read/write
operations, slowing performance.
• Increased Context Switching Time – Moving processes in and out
takes time.
• Thrashing – If swapping occurs too frequently, system performance
88 drops significantly.
Schematic View of Swapping

89
Swapping: Example

• Scenario: Running Multiple Processes with Limited RAM


• Assume a computer system has 256 MB of RAM, and it is
currently running two processes:
– Process P1 → Requires 100 MB
– Process P2 → Requires 120 MB
• Now, a new process P3 (140 MB) arrives, but there’s not enough
memory available (only 36 MB free).

90
Swapping: Example
Step 1: Initial State (Before Swapping)
• RAM (256 MB) Process
• 100 MB P1 (Running)
• 120 MB P2 (Waiting)
• 36 MB Free Space
• Disk (Swap Area) P3 (Waiting, 140 MB)
• P3 cannot fit into memory because only 36 MB is free.
• The OS swaps out P1 or P2 to make room for P3.

91
Swapping: Example
Step 2: Swapping Out P1 to Disk
• RAM (256 MB) Process
• 120 MB P2 (Running)
• 140 MB P3 (Loaded from Disk, Running)
• Disk (Swap Area) P1 (Swapped Out, 100 MB)
• P1 is moved to disk (swap space) to free 100 MB of RAM.
• P3 is loaded into memory and starts running.

92
Swapping: Example
Step 3: Swapping Back P1 When Needed
• If P1 needs to run again, the OS:
• Swaps out P2 or P3 to disk.
• Brings back P1 into memory.
• RAM (256 MB) Process
• 100 MB P1 (Running)
• 140 MB P3 (Waiting)
• Disk (Swap Area) P2 (Swapped Out, 120 MB)
• P2 is now swapped out, and P1 is brought back into RAM.

93
Paging
• Paging is a memory management technique that allows a process to
be stored in non-contiguous memory locations, avoiding
fragmentation.
• The main memory (RAM) is divided into fixed-size blocks called
frames, and the process is divided into same-size blocks called
pages.

• Solves the problem of external fragmentation


• Efficient memory utilization
• Allows large processes to fit into limited memory using virtual
memory

94
Paging
• Logical Address or Virtual Address (represented in bits):
– An address generated by the CPU.

• Logical Address Space or Virtual Address Space (represented in


words or bytes):
– set of all logical addresses generated for a program.

• Physical Address (represented in bits):


– An address actually available on memory unit.

• Physical Address Space (represented in words or bytes):


– set of all physical addresses corresponding to the logical addresses.

95
Paging
Example:
• If Logical Address = 31 bits
  – Logical Address Space = 2^31 Bytes = 2 GB (1 GB = 2^30 B)

• If Logical Address Space = 128 MB = 2^7 × 2^20 Bytes = 2^27 Bytes
  – Logical Address = log2(2^27) = 27 bits

• If Physical Address = 22 bits
  – Physical Address Space = 2^22 Bytes = 4 MB (1 MB = 2^20 B)

• If Physical Address Space = 16 MB = 2^4 × 2^20 Bytes = 2^24 Bytes
  – Physical Address = log2(2^24) = 24 bits

96
Paging
How Paging Works?
• The process is divided into fixed-size pages (e.g., 4 KB each).

• The RAM is divided into frames of the same size as pages.

• Pages of a process are loaded into available frames in memory


(not necessarily contiguous).

• A Page Table is maintained to keep track of which page is stored


in which frame.

• The Memory Management Unit (MMU) translates logical


addresses (page number + offset) to physical addresses (frame
number + offset).

97
Paging

98
Paging
Main Memory
Program
0
0
Logical Address Space
1 a frame
1 (size = 2x)
2
2
3
3
4 Physical memory:
4
5 set of fixed sized
5 frames
7
Program: set of pages
6
8
Page size = Frame size
9
99
Paging
Memory
Program
0
0
1 0
1
load 2 2
2
3
3 P# F#
4 1
4 0 1
1 4 5
5
2 2 6 3
3 6 7 5
4 9 8
5 7
10 9 4
Page Table
0
Paging
Address generated by CPU is divided into
• Page number (p):
– specifies a page of the process from which data is to be read.
– represented by # of bits required for Logical Address Space.
– Used as an index to the page table, which contains address of frame in main
memory.
• Page offset (d):
– specifies the word/byte on the page that is to be accessed.
– Represented by # of bits required for particular word/byte in a page.
• If the logical address is represented by m bits, then the logical address
  space is 2^m bytes. Assume the page size is 2^n bytes.

        page number    page offset
             p              d
        (m – n) bits      n bits
        |<------------ m bits ------------>|
101
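The page number/offset split above is plain bit slicing; a minimal sketch where `n` (the number of offset bits) is the only parameter:

```python
# Split an m-bit logical address into (page number, offset),
# assuming a page size of 2^n bytes.
def split_logical(addr, n):
    page = addr >> n                 # top (m - n) bits
    offset = addr & ((1 << n) - 1)   # low n bits
    return page, offset

# With 1 KB pages (n = 10): 2375 = 2 * 1024 + 327
print(split_logical(2375, 10))  # → (2, 327)
```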
Paging
Physical Address is divided into
• Frame number(f):
– specifies the frame where the required page is stored.
– Represented by # of bits required for Physical Address Space.
– Stored as value in page table
• Frame offset(d):
– specifies the word that has to be accessed from that frame.
– Represented by # of bits required to represent particular word in a frame
• Similarly, if the physical address is represented by m bits, then the
  physical address space is 2^m bytes. Assume the frame size is 2^n bytes.

        frame number   frame offset
             f              d
        (m – n) bits      n bits
        |<------------ m bits ------------>|
102
Paging
• Physical Address = 12 bits, then Physical Address Space = 2^12 B = 4 KB
• Logical Address = 13 bits, then Logical Address Space = 2^13 B = 8 KB
• Page size = frame size = 1 KB (assumption)

103
Problem: Paging
• Assuming a 1 KB page size, what are the page numbers and offsets for the
following address references (provided as decimal numbers):

a) 2375
b) 19366
c) 30000
d) 256
e) 16385

10
4
Paging
Solution:
– Page size = 2^n = 1024 B = 2^10 B
– So, # of bits in the offset part: n = 10

Solution steps:
1. Convert logical address: Decimal → Binary
2. Split binary address to 2 parts (page #, Offset), offset : n digits
3. Convert offset & page#: Binary → Decimal

10
5
Solution: Paging

Logical address   Logical address        Page #      Offset         Page #      Offset
(decimal)         (binary)               (6 bits,    (10 bits,      (decimal)   (decimal)
                                         binary)     binary)
2375              0000 10|01 0100 0111   0000 10     01 0100 0111   2           327
19366             0100 10|11 1010 0110   0100 10     11 1010 0110   18          934
30000             0111 01|01 0011 0000   0111 01     01 0011 0000   29          304
256               0000 00|01 0000 0000   0000 00     01 0000 0000   0           256
16385             0100 00|00 0000 0001   0100 00     00 0000 0001   16          1
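The table can be cross-checked mechanically: with a 1 KB page the page number is the address shifted right by 10 bits and the offset is the low 10 bits.

```python
# Recompute page number and offset for each address in the table,
# assuming a 1 KB (2^10 byte) page size.
PAGE_BITS = 10
for addr in (2375, 19366, 30000, 256, 16385):
    page, off = addr >> PAGE_BITS, addr & ((1 << PAGE_BITS) - 1)
    print(addr, page, off)
```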

10
6
Problem: Paging
Consider a logical address space of 64 pages of 1024 words each, mapped onto
a physical memory of 32 frames.

a) How many bits are there in the logical address?


b) How many bits are there in the physical address?

10
7
Solution: Paging
Method1:
a)
Let m be the # of bits to represent the logical address
There are 64 pages of 1024 words each in logical address space

Logical address space = 2^m = # of pages × page size
2^m = 64 × 1024
2^m = 2^6 × 2^10
2^m = 2^16
=> m = 16 bits
Thus, there are 16 bits in the logical address.

10
8
Solution: Paging
b)
Let m be the number of bits in the physical address.
There are 32 frames of 1024 words each in the physical address space.

Physical address space = 2^m = # of frames × frame size
2^m = 32 × 1024
2^m = 2^5 × 2^10
2^m = 2^15
=> m = 15 bits
Thus, there are 15 bits in the physical address.

10
9
Example: Address Translation
Page size = 4 bytes = 2^2, so the offset is 2 bits; the logical address is
4 bits and the memory is 32 bytes (5-bit physical address).

LA = 5  → 0101: page # = 01 (1), offset = 01
          page 1 maps to frame 6 (110)  → PA = 110|01 = 11001
LA = 11 → 1011: page # = 10 (2), offset = 11
          page 2 maps to frame 1 (001)  → PA = 001|11 = 00111
LA = 13 → 1101: page # = 11 (3), offset = 01
          page 3 maps to frame 2 (010)  → PA = 010|01 = 01001
110
Example: Address Translation
m = 3, so there are 2^3 = 8 logical addresses; n = 2, so page size = 2^2 = 4.
Thus 1 bit selects the page # and 2 bits the offset; frame #s need 2 bits.

Logical memory: 000 A, 001 B, 010 C, 011 D, 100 E, 101 F, 110 G, 111 H

Page table (each entry maps 4 addresses, the page size):
  page 0 → frame 11
  page 1 → frame 10

Physical memory:
  frame 10: 1000 E, 1001 F, 1010 G, 1011 H
  frame 11: 1100 A, 1101 B, 1110 C, 1111 D
111
Numerical: Paging
Given: (Assumption memory is Byte Addressable)
MM=64 MB
LA 32 bit
Page size = 4 KB
Compute the total space wasted in maintaining the page table.

11
2
Numerical: Paging
Solution:
MM size = 64 MB
Since it is byte addressable, there are 64 MB / 1 B = 64 M addresses.
64 M = 2^6 × 2^20 = 2^26 addresses → 26 bits represent a physical address.

Page size = 4 KB = 2^2 × 2^10 B = 2^12 B,
i.e. 12 bits are used to represent an offset value.

Physical address = 26 bits:  f (26 − 12 = 14 bits) | d (12 bits); 14 bits ≈ 2 B
Logical address = 32 bits:   p (32 − 12 = 20 bits) | d (12 bits)

# of pages = 2^20 = 1 M, so there will be 1 M entries in the page table.
Each entry in the page table requires 14 bits (the value of f) ≈ 2 B.
Table size = 1 M × 2 B = 2 MB
113
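The calculation above generalizes directly; a sketch for a byte-addressable machine, with the entry size rounded up from the frame-number bits to whole bytes:

```python
# Page-table size = (# of pages) x (entry size), where the entry holds
# the frame number and is rounded up to whole bytes.
def page_table_bytes(la_bits, pa_bits, page_bits):
    entries = 1 << (la_bits - page_bits)   # number of pages
    frame_bits = pa_bits - page_bits       # bits stored per entry
    entry_bytes = (frame_bits + 7) // 8    # round up to whole bytes
    return entries * entry_bytes

# 32-bit LA, 26-bit PA, 4 KB pages (the numerical above)
print(page_table_bytes(32, 26, 12))  # → 2097152 B = 2 MB
```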
Numerical: Paging
Given: (Assumption memory is Byte Addressable)
MM=256 MB
LA 40 bit
Page size = 4 KB
Process size = 4 MB
Compute the total space wasted in maintaining the page table for the process.

11
4
Numerical: Paging
Solution:
MM size = 256 MB
Since it is byte addressable, there are 256 MB / 1 B = 256 M addresses.
256 M = 2^8 × 2^20 = 2^28 addresses → 28 bits represent a physical address.
Page size = 4 KB = 2^2 × 2^10 B = 2^12 B,
i.e. 12 bits are used to represent an offset value.

Physical address = 28 bits:  f (16 bits) | d (12 bits); 16 bits = 2 B
Logical address = 40 bits:   p (28 bits) | d (12 bits)

# of pages in the process = 4 MB / 4 KB = 1 K,
so there will be 1 K entries in the page table.
Each entry in the page table requires 16 bits = 2 B (frame number).
Table size for the process = 1 K × 2 B = 2 KB
115
Numerical: Paging
Given: Size of page Table, compute size of the process (Assumption memory is
Byte Addressable)
SM = 256 GB
MM=512 KB
Page size = 2 KB
Page Table size = 8 KB
Compute size of the process.

11
6
Numerical: Paging
Solution:
Page size = 2 KB = 2 × 2^10 B = 2^11 B, i.e. 11 bits represent an offset value.
Secondary memory size = 256 GB
Since it is byte addressable, there are 256 GB / 1 B = 256 G addresses.
256 G = 2^8 × 2^30 = 2^38, so 38 bits represent a logical address.

Logical address = 38 bits:  p (27 bits) | d (11 bits)

MM size = 512 KB, so # of addresses = 512 K.
512 K = 2^9 × 2^10 = 2^19 (19 bits represent a physical address)
Frames are represented by (19 − 11) = 8 bits = 1 B:  f (8 bits) | d (11 bits)

One entry in the page table is 8 bits = 1 B.
Let # of entries in the page table = x.
Then x × 1 B = 8 KB (table size, given), so x = 8 K entries.
The process has 8 K pages, each of size 2 KB,
so process size = 8 K × 2 KB = 16 MB.
117
Implementation of Page Table
• Page table is kept in main memory
– Page-table base register (PTBR) points to the page table
– Page-table length register (PTLR) indicates size of the page table

11
8
Implementation of Page Table
Main Memory
CPU Program P1 Program P2

PC

PT1 PT2 Kernel


Memory
PTBR Page Page
Table Table
PTLR of of
P1 P2

Currently running
process is process 1 (P1) PCB1 PCB2

11
9
Disadvantage of Paging
• In this scheme every data/instruction access requires two memory
accesses.
– One for the page table (to get the frame #)
– one for the data/instruction (i.e. word from the page)
• Increases the effective access time due to increased number of memory
accesses.

• Two memory access problem can be solved by the use of a special fast-
lookup hardware cache called
– associative memory or
– translation look-aside buffers (TLBs)

12
0
Translation Lookaside Buffer (TLB)
Translation Lookaside Buffer (TLB) is a solution that tries to reduce the
effective access time.
• Being hardware, the access time of the TLB is very small compared to that of main memory.

Structure: Translation Lookaside Buffer (TLB) consists of two columns


– Page Number
– Frame Number

12
1
Translation Lookaside Buffer (TLB)
• H/w implementation of page table: done by using dedicated registers.
– usage of register for the page table is satisfactory only if page table is small.
• If page table contain large number of entries then can use TLB, a special,
small, fast look up hardware cache.

• TLB: associative, high speed memory.


• Each entry in TLB consists of two parts:
– a tag and
– a value.
• When this memory is used, then an item is compared with all tags
simultaneously.
– If the item is found, then corresponding value is returned.
12
2
Paging Hardware With TLB

12
3
Translating Logical Address into
Physical Address
In a paging scheme using TLB,
• The logical address generated by the CPU is translated into the physical
address using following three steps
– Step-01: CPU generates a logical address consisting of two parts:
• Page Number,
• Page Offset

12
4
Translating Logical Address into
Physical Address
– Step-02: TLB is checked to see if it contains an entry for the referenced page
number.
– The referenced page number is compared with the TLB entries all at once.
– Now, two cases are possible:
• Case-01: If there is a TLB hit
• If TLB contains an entry for the referenced page number, a TLB hit occurs.
• In this case, TLB entry is used to get the corresponding frame number for the
referenced page number.
• Case-02: If there is a TLB miss
• If TLB does not contain an entry for the referenced page number, a TLB miss
occurs.
• In this case, page table is used to get the corresponding frame number for the
referenced page number.
• Then, TLB is updated with the page number and frame number for future
references.
12
5
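The three steps above can be mimicked with two plain dictionaries standing in for the TLB and the page table; all mappings below are made-up numbers for illustration.

```python
page_table = {0: 5, 1: 9, 2: 3}   # page -> frame (illustrative values)
tlb = {}                          # starts empty, so first lookups miss

def translate(page, offset, page_bits=10):
    """Translate (page, offset) to a physical address via TLB, then page table."""
    if page in tlb:               # Case-01: TLB hit
        frame = tlb[page]
    else:                         # Case-02: TLB miss -> consult page table
        frame = page_table[page]
        tlb[page] = frame         # update TLB for future references
    return (frame << page_bits) | offset   # Step-03: frame # combined with offset

print(translate(1, 7))   # first access: TLB miss, page table consulted
print(translate(1, 7))   # same page again: TLB hit, same address returned
```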
Translating Logical Address into
Physical Address
– Step-03:
• After the frame number is obtained, it is
combined with the page offset to
generate the physical address.
• Then, physical address is used to read
the required word from the main
memory.

12
6
TLB: Example

12
7
TLB
Important Points:
• Point-01:
– Unlike page table, there exists only one TLB in the system.
– Whenever context switching occurs, the entire content of TLB is flushed and
deleted.
– TLB is then again updated with the currently running process.

• Point-02: When a new process gets scheduled


– Initially, TLB is empty. So, TLB misses are frequent.
– With every access from the page table, TLB is updated.
– After some time, TLB hits increases and TLB misses reduces.

• Point-03:
– Time taken to update TLB after getting the frame number from the page table is
12 negligible.
8 – TLB is updated in parallel while fetching the word from the main memory.
TLB
Advantages: using TLB
• TLB reduces the effective access time.
• Only one memory access is required when TLB hit occurs.

Disadvantages:
• After a process has run for some time, TLB hits increase and the process runs smoothly.
• But when a context switch occurs, the entire content of the TLB is flushed.
• The TLB is then populated again for the currently running process.
• This happens again and again.

Other disadvantages:
• TLB can hold the data of only one process at a time.
• When context switches occur frequently, the performance of TLB degrades due to
12 low hit ratio.
• As it is a special hardware, it involves additional cost.
9
Effective Access Time (EAT)
• Hit ratio: percentage of times that a page number is found in the
  associative registers; let hit ratio = α
• Effective Access Time (EAT)
  EAT = hit ratio × (TLB access time + MM access time) +
        miss ratio × (TLB access time + 2 × MM access time)
      = α × (TLB access time + MM access time) +
        (1 − α) × (TLB access time + 2 × MM access time)

13
0
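The formula translates directly into code; `alpha` is the TLB hit ratio and the time units are whatever the problem uses.

```python
# EAT = alpha*(t_tlb + t_mm) + (1 - alpha)*(t_tlb + 2*t_mm)
def eat(alpha, t_tlb, t_mm):
    hit = alpha * (t_tlb + t_mm)             # one memory access on a hit
    miss = (1 - alpha) * (t_tlb + 2 * t_mm)  # page table + word on a miss
    return hit + miss

print(eat(0.9, 50, 400))  # the 90%-hit worked example that follows
```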
EAT
How the average access time has improved?

Let
MM access time = 400 ms
TLB access time = 50 ms
H (hit ratio) = 90%
i.e. 90 out of 100 the data is found in TLB

13
1
EAT: Solution
Without using TLB (only paging)
• Every access to information/data will access main memory (MM) twice
• Once for page table, which is in MM
• Then actual information/data from MM
• Access time 2 × 400 = 800 ms

With TLB (check TLB first)


Case1: TLB hit (available in TLB)
0.9 × [50+400] = 0.9 × 450 = 405 ms
Case2: TLB miss (not available in TLB)
0.1 × [50+400+400] = 0.1 × 850 = 85 ms
EAT = 405 ms + 85 ms = 490 ms
In this problem, more or less
TLB removed the disadvantage
13 400 ms 490 ms 800 ms
of access time two times.
2 No paging Paging with TLB Paging without TLB
EAT: Numerical
A paging scheme uses a Translation Lookaside buffer (TLB). A TLB
access takes 10 ns and a main memory access takes 50 ns. What is the
effective access time (in ns) if the TLB hit ratio is 90% and there is no
page fault?
A. 54
B. 60
C. 65
D. 75

13
3
EAT: Numerical
Solution
Given
TLB access time = 10 ns
Main memory access time = 50 ns
TLB Hit ratio = 90% = 0.9

TLB Miss ratio = 1–TLB Hit ratio = 1 – 0.9 = 0.1

Effective Access Time


= 0.9 × [10 ns + 50 ns] + 0.1 × [10 ns + 2 × 50 ns]
= 0.9 × 60 ns + 0.1 × 110 ns
= 54 ns + 11 ns
= 65 ns
13
Thus, Option (C) is correct.
4
EAT: Numerical
A paging scheme uses a Translation Lookaside buffer (TLB). The effective
memory access takes 160 ns and a main memory access takes 100 ns.
What is the TLB access time (in ns) if the TLB hit ratio is 60% and there is
no page fault?
A. 54
B. 60
C. 20
D. 75

13
5
EAT: Numerical
Solution
Given
Effective access time = 160 ns
Main memory access time = 100 ns
TLB Hit ratio = 60% = 0.6

TLB Miss ratio = 1–TLB Hit ratio = 1 – 0.6 = 0.4

Let TLB access time = T ns.


Effective Access Time = 160 ns = 0.6 × [T + 100 ns] + 0.4 × [T + 2 × 100 ns]
160 = 0.6 × T + 60 + 0.4 × T + 80
160 = T + 140
T = 160 – 140
13 T = 20
6 Thus, Option (C) is correct.
EAT: Problem
Consider a paging system with the page table stored in memory.
a) If a memory reference takes 200 nanoseconds, how long does a paged memory
reference take?
b) If we add associative registers, and 75 percent of all page-table references are
found in the associative registers, what is the effective memory reference time?
(Assume that finding a page-table entry in the associative registers takes zero
time, if the entry is there.)

13
7
EAT: Solution
Answer:
a)
memory reference time= 200+200= 400 ns
( 200 ns to access the page table in RAM and 200 ns to access the
word in memory)

b)
Case (1) : page entry found in associative registers (part1)
Memory access time = 0+200=200 ns
( 0 ns to access the page table in associative registers and 200 ns to
access the word in memory)

13
8
EAT: Solution
Answer:
b)
Case (2) : page entry NOT found in associative registers (part1) but
found in page table in MM
Memory access time = 0+200+200=400 ns
( 0 ns to access the page table in associative registers (part1) , 200 ns
to access the page table (part2) in RAM and 200 ns to access the
word in memory)

Effective access time =∑ [probability of the case × access time of this


case]

Effective access time = [0.75 × 200 ]+ [0.25 × 400] = 250 ns.


13
9
EAT: Problem
Problem:
• Consider a paging hardware with a TLB.
• Assume that the entire page table and all the pages are in the physical
memory.
• It takes 10 milliseconds to search the TLB and 80 milliseconds to access
the physical memory.
• If the TLB hit ratio is 0.6, Find the effective memory access time (in
milliseconds).

14
0
EAT: Solution
Answer:
• In the case that the page is found in the TLB (TLB hit) the total time
would be the time of search in the TLB plus the time to access memory
• TLB_hit_time = TLB_search_time + memory_access_time

• In the case that the page is not found in the TLB (TLB miss) the total
time would be the time to search the TLB plus the time to access
memory to get the page table and frame, plus the time to access memory
to get the data.
• TLB_miss_time := TLB_search_time + memory_access_time +
memory_access_time

14
1
EAT: Solution
Answer:
Effective Access Time
EAT := TLB_miss_time × (1- hit_ratio) + TLB_hit_time × hit_ratio.

EAT := (TLB_search_time + 2 × memory_access_time) × (1- hit_ratio)


+ (TLB_search_time + memory_access_time) × hit_ratio.

As both page table and page are in physical memory


T(eff) = hit ratio × (TLB access time + Main memory access time) +
(1 – hit ratio) × (TLB access time + 2 × main memory time)
= 0.6 × (10+80) + (1-0.6) × (10+2 × 80)
= 0.6 × (90) + 0.4 × (170)
14 = 122
2
Structure of the Page Table

• Multilevel Paging /Hierarchical Paging

• Hashed Page Tables

• Inverted Page Tables

14
3
Multilevel Paging

• For implementation of paging, it requires PT


– an extra data structure is used
• Where this page table is stored?
– If PT is small implemented though registers
– If PT is large stored in MM (which happens in most of the cases)
• Keeping the page table in MM the one issue is access time
– first page table is accessed
– then the instruction/data is accessed
• Two memory access is required
– memory access time is increased
• To reduce this problem cache (TLB) is used.

14 • Other issue is space consumed by PT


– if page table is large, it will take considerable amount of space in MM
4
Multilevel Paging

• Multilevel Paging : a paging scheme which consist of two or more levels


of page tables in a hierarchical manner.
• Also known as hierarchical paging:
– Entries of the level 1 page table are pointers to a level 2 page table and
– Entries of the level 2 page tables are pointers to a level 3 page table and so on.
– Entries of the last level page table stores actual frame information.
• Level 1 contain single page table and address of that table is stored in
PTBR (Page Table Base Register).

14
5
Multilevel Paging

Need:
The need for multilevel paging arises when
• Size of page table > frame size
• As a result, the page table cannot be stored in a single frame in main memory

14
6
Multilevel Paging
Working:
In multilevel paging
• If page table size > frame size, then
– PT is further divided into several parts.
• Size of each part is same as frame size except possibly the last part.
• Pages of page table stored in different frames of MM.
• To keep track of frames storing the pages of the divided page table
– another page table is maintained.
• As a result, hierarchy of page tables get generated.
• Multilevel paging is done till the level is reached where the entire page
table can be stored in a single frame.
14
7
Two-Level Page-Table Scheme

14
8
Two-Level Paging: Example
• A logical address 32-bit
• Page size 4 KB
• PTE = 4 B

14
9
Two-Level Paging: Example

LA = 32 bits
Process size = 2^32 B
Page size = 4 KB = 2^2 × 2^10 B = 2^12 B
# of pages = 2^32 B / 2^12 B = 2^20 = 1 M
150
Two-Level Paging: Example
There are 1 M entries in the PT, so table size = 1 M × 4 B (PTE size) = 4 MB.
Size(PT) > size(page) = 4 KB, so the PT itself requires more pages.
151
Two-Level Paging: Example
To keep the PT in MM, another (outer) PT is maintained.
# of pages for the PT = 2^22 B / 2^12 B = 2^10
(the 4 MB inner PT would occupy 1024 frames in MM, which is very large)
# of entries in the outer PT = 2^10
Size of the outer PT = 2^10 × 4 B = 4 KB = page size
Now the outer PT fits in one frame, so paging of the PT stops here.
152
Two-Level Paging: Example
• The CPU generates a logical address with two parts: page # and offset.
• The page # is further divided into p1 and p2:

       page number       page offset
     p1 (10)  p2 (10)      d (12)

• The outer page # (p1) acts as an index into the outer PT
  (PTBR points to the outer PT); its entry locates a page of the inner PT.
• Only that page of the inner PT needs to be loaded in MM.
• Then p2 acts as an index into that inner-PT page; its entry gives the
  frame that contains the information.
153
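The 10/10/12 split is again bit slicing; a sketch with constants fixed to this example's 32-bit address and 4 KB pages:

```python
# Split a 32-bit logical address into (p1, p2, d) for two-level paging.
def split_two_level(addr):
    p1 = addr >> 22              # outer page-table index (top 10 bits)
    p2 = (addr >> 12) & 0x3FF    # inner page-table index (next 10 bits)
    d = addr & 0xFFF             # offset within the 4 KB page (12 bits)
    return p1, p2, d

print(split_two_level(0x00403ABC))  # → (1, 3, 2748)
```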
Address-Translation Scheme

15
4
Two-Level Paging Scheme
[Figure: a 4-bit logical address space (0000–1111) mapped by a 16-entry
single-level page table versus a two-level scheme in which a 4-entry
top-level table points to 4-entry second-level page tables.]
155
Illustration of Multilevel Paging

Consider a system using paging scheme where


• Logical Address Space = 4 GB (Process Size)
• Physical Address Space = 16 TB
• Page size = 4 KB

Find how many levels of page table will be required?

15
6
Illustration of Multilevel Paging

# of bits in the physical address
Size (MM) = Physical Address Space = 16 TB = 2^4 × 2^40 B = 2^44 B
# of bits in the physical address = 44 bits

# of frames in MM
# of frames = Size (MM) / frame size = 16 TB / 4 KB = 2^44 B / 2^12 B = 2^32 frames
# of bits to represent a frame number = 32 bits = 4 B (PTE size)

# of bits in the page offset
Page size = 4 KB = 2^2 × 2^10 B = 2^12 B
# of bits in the page offset = 12 bits
157
Illustration of Multilevel Paging
# of pages of the process
= process size / page size = 4 GB / 4 KB = (2^2 × 2^30) / (2^2 × 2^10) = 2^20 pages

The inner page table keeps track of the frames storing the pages of the process.

Inner page table size
= # of entries in the inner page table × PTE size
= # of pages in the process × PTE size = 2^20 × 4 B = 4 MB

Observations:
• Size of the inner page table > frame size (4 KB).
• Thus, the inner page table cannot be stored in a single frame.
• So, the inner page table has to be divided into pages.
158
Illustration of Multilevel Paging
# of pages of the inner page table
= inner page table size / page size = 4 MB / 4 KB = (2^2 × 2^20) / (2^2 × 2^10) = 2^10 pages

• The 2^10 pages of the inner page table are stored in different frames of main memory.
• The outer page table keeps track of the frames storing the pages of the inner page table.

Outer page table size
= # of entries in the outer page table × PTE size
= # of pages the inner page table is divided into × PTE size
= 2^10 × 4 B = 4 KB
159
Illustration of Multilevel Paging
Observation:
• Size of outer page table is same as frame size (4 KB).
• Thus, outer page table can be stored in a single frame.
• So, for given system, there will be two levels of page table.
• Page Table Base Register (PTBR) will store the base address of the outer page
table.

16
0
Two level page table: Example

Given:
• logical address = 32 bits
• page size = 4096 B
• logical address division: 10, 10, 12
• PTE size = 4 bytes
The 2^32 B = 4 GB logical address space is only partially used:
an 8 MB region and a 12 MB region are in use; the rest is unused.
What is the total size of the two-level page table?
161
Two level page table: Example

Address division: 10 | 10 | 12
• Each entry of a second-level page table translates a page # to a frame #,
  i.e. each entry maps one page of 4096 bytes (the page size).
• There are 2^10 = 1024 entries in a second-level page table.
• So one second-level page table maps 2^10 × 2^12 = 2^22 B = 4 MB of
  logical address space.
• The top-level page table (2^10 entries) points to the second-level
  page tables.
162
Two level page table: Example

• 8 MB / 4 MB = 2 second-level page tables are required to map the 8 MB
  region of logical memory.
• 12 MB / 4 MB = 3 second-level page tables are required to map the 12 MB
  region of logical memory.
• Total = 2 + 3 = 5 second-level page tables required
  (out of the 2^32 B = 4 GB logical address space).
163
Two level page table: Example
Space needed to hold the page tables of the process:
• 1 top-level page table (2^10 entries) = 4 KB (outer)
• 5 second-level page tables (2^10 entries each) = 5 × 4 KB (inner)
• Total = 4 KB + 5 × 4 KB = 24 KB
164
Two level page table: Example

• Consider a virtual memory system with


• physical memory of 8 GB,
• a page size of 8 KB and
• 46 bit virtual address.
• Assume: every page table exactly fits into a single page.
• If PTE size is 4 B

• then how many levels of page tables would be required?

16
5
Solution: Two level page table

• Page size = 8 KB = 2^13 B
• Virtual address space size = 2^46 B
• PTE = 4 B = 2^2 B

# of pages, i.e. # of entries in the (last-level) page table
= (virtual address space size) / (page size)
= 2^46 B / 2^13 B
= 2^33

Size of the page table
= (# of entries in the page table) × (size of a PTE)
= 2^33 × 2^2 B
= 2^35 B
166
Solution: Two level page table

Since size of the page table (2^35 B) > page size, one more level is needed.

# of pages of this table in the next level
= 2^35 B / 2^13 B
= 2^22

The base addresses of these tables are stored in the second-last-level page table.
Size of the second-last-level page table
= 2^22 × 2^2 B
= 2^24 B
167
Solution: Two level page table

To create one more level,


Size of page table [second last level] > page size

# of page tables in second last level


= 224 B/213 B
= 211

Base address of these tables are stored in page table [third last level]
Size of page table [third last level]
= 211 × 22 B
= 213 B
= page size
16
8 3 levels are required.
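The level-counting argument above can be written as a loop: keep paging the page table until one level fits in a single page.

```python
# Count page-table levels: page the table until one level fits in a page.
def page_table_levels(va_bits, page_size, pte_size):
    entries = (1 << va_bits) // page_size   # pages to be mapped
    levels = 0
    while True:
        levels += 1
        table_bytes = entries * pte_size
        if table_bytes <= page_size:        # this level fits in one frame
            return levels
        entries = table_bytes // page_size  # pages of the table itself

print(page_table_levels(46, 8 * 1024, 4))  # → 3 (this example)
print(page_table_levels(32, 4 * 1024, 4))  # → 2 (the earlier 32-bit example)
```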
Three-level Paging Scheme

16
9
Hashed Page Tables
• A hashed page table is a page table structure that employs a hash table
  to map virtual addresses to physical addresses.
• This structure is particularly useful in systems with large address spaces,
such as 64-bit architectures (> 32 bits), where traditional multi-level page
tables can be inefficient due to their size.

17
0
Hashed Page Tables
• Hashing Virtual Page Number (VPN)
• The virtual page number (VPN) is extracted from the virtual address.
• This VPN is passed through a hash function to determine an index in the hash
table.

• Handling Collisions (Linked List of Mappings)


• Since multiple VPNs can hash to the same index, each entry in the hash table
stores a linked list (or chain) of page table entries (PTEs).
• Each PTE in the chain contains:
• Virtual Page Number (VPN)
• Corresponding Physical Frame Number (PFN)
• Pointer to the next entry (for handling hash collisions)
17
1
Hashed Page Tables
• Translation Process
• The CPU generates a virtual address.
• The VPN is hashed and the corresponding hash bucket is accessed.
• The system searches the linked list at that bucket for a match.
• If a match is found, the physical frame number (PFN) is extracted.
• The PFN is combined with the offset to form the final physical address.

17
2
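The hashing-with-chaining scheme above can be sketched with a small bucket array; the table size and the simple modulo hash function are illustrative choices only.

```python
# Hashed page table sketch: buckets of chained (vpn, pfn) entries.
TABLE_SIZE = 8
buckets = [[] for _ in range(TABLE_SIZE)]   # each bucket: list of (vpn, pfn)

def insert(vpn, pfn):
    buckets[vpn % TABLE_SIZE].append((vpn, pfn))

def lookup(vpn):
    for v, pfn in buckets[vpn % TABLE_SIZE]:   # walk the collision chain
        if v == vpn:
            return pfn
    return None   # unmapped -> would be a page fault in a real system

insert(3, 42)
insert(11, 7)        # 3 and 11 collide: both hash to bucket 3
print(lookup(11))    # → 7, found after one chain step
```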
Hashed Page Table

17
3
Hashed Page Tables
Advantages:
• Efficient for large address spaces: Works well for 64-bit systems where multi-level
page tables become too large.
• Reduces memory overhead: Instead of keeping a large page table, only entries for
mapped pages exist in the hash table.
• Handles sparse address spaces well: Ideal for workloads where only a small
fraction of virtual memory is actively used.
Disadvantages:
• Hash collisions cause extra lookups: If multiple VPNs hash to the same index, a
linked list traversal is required, slowing down lookup speed.
• Slower than direct indexing: Compared to hierarchical page tables, searching a
linked list is less efficient.
• Complex implementation: Requires maintaining a hashing function and linked
17 lists, increasing management overhead.
4
Inverted Page Table
• An Inverted Page Table (IPT) is a space-efficient page table structure
used in memory management.
• Unlike traditional page tables, which store an entry for each virtual page,
an IPT contains only one entry per physical page frame.
• This significantly reduces the memory required for page tables,
especially in large virtual address spaces (e.g., 64-bit systems).

17
5
Inverted Page Table
• Single Entry per Physical Frame:
• Each entry in the IPT corresponds to a single physical page frame.
• Instead of mapping VPN → PFN (Virtual Page Number → Physical Frame
Number) like traditional page tables, IPT stores:
• Virtual Page Number (VPN)
• Process ID (PID) (to differentiate between processes)
• Control bits (valid, dirty, protection, etc.)
• Translation Process:
• The virtual page number (VPN) and process ID (PID) are used to search the
IPT.
• A linear or hashed search is used to locate the matching entry.
• Once found, the corresponding physical frame number (PFN) is used to
generate the physical address.
17
6
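The translation process above can be sketched with one entry per physical frame, keyed by (pid, vpn); a linear scan stands in for the hashed lookup, and all frame contents below are made-up values.

```python
# Inverted page table: entry i describes physical frame i.
ipt = [(1, 0), (1, 3), (2, 0), None]   # frame i holds (pid, vpn), or is free

def translate(pid, vpn, offset, page_bits=10):
    for frame, entry in enumerate(ipt):
        if entry == (pid, vpn):                # match on PID and VPN
            return (frame << page_bits) | offset
    raise LookupError("page fault")            # no frame holds this page

print(translate(1, 3, 5))   # → 1029 (frame 1, offset 5)
```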
Inverted Page Table
• Page Lookups using Hashing:
• To avoid slow linear searches, hashing techniques are often used for quick
lookup.
• The VPN + PID is hashed to locate the IPT entry quickly.

§ Example: A process of size 2 GB with page size = 512 B and page table
entry size = 4 B: # of pages in the process = 2 GB / 512 B = 2^31 / 2^9 = 2^22.
Page table size = 2^22 × 2^2 B = 2^24 B = 16 MB.

17
7
Inverted Page Table Architecture

17
8
Example: Inverted Page Table
If the virtual address space supported is 2^64 bits,
• page size is 1K = 2^10 B
• size of the MM is 64K = 2^6 × 2^10 = 2^16 B
• size of a PTE is 2 B, and
• addressing is at the byte level
Calculate the size of the page table required for both standard and inverted
page tables.

Example: Inverted Page Table
Standard page table:
• Address space = 2^64 bits = 2^61 B
– 61 bits in byte addressing
• # of pages = 2^61 / 1K = 2^61 / 2^10 = 2^51
• Page Table Size = 2^51 × (PTE size) = 2^51 × 2 B = 2^52 B

Inverted page table:
• Total frames = 64K / 1K = 2^6 = 64
• Page Table Size = 64 × (PTE size) = 64 × 2 B = 128 B
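Both table sizes from this example can be computed in a short check (the variable names are just for this illustration):

```python
# Compare standard vs. inverted page-table sizes for the example above.
virtual_space = 2**61   # bytes (2^64-bit address space, byte-addressed)
page_size = 2**10       # 1 KB
main_memory = 2**16     # 64 KB
pte_size = 2            # bytes per entry

# Standard table: one entry per virtual page.
standard = (virtual_space // page_size) * pte_size
# Inverted table: one entry per physical frame.
inverted = (main_memory // page_size) * pte_size

print(standard == 2**52)  # 2^51 entries x 2 B
print(inverted)           # 64 frames x 2 B = 128 bytes
```

The gap between the two results (2^52 B versus 128 B) is exactly the space saving the inverted design is built for.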

Inverted Page Table
Advantages:
• Memory Efficient: Instead of storing entries for all virtual pages, it stores only
one entry per physical frame.
• Good for Large Address Spaces: Ideal for 64-bit architectures, where traditional
page tables become too large.
• Improved Security: Since entries are mapped per physical frame and include a
PID, process isolation is enhanced.

Disadvantages:
• Slow Address Translation: Searching for a VPN-PID pair is slower than direct
indexing in a hierarchical page table.
• Complex Lookup Mechanism: Requires hashing or searching since the table
isn’t indexed by VPN.
• Difficult to Implement Page Sharing: Traditional page tables allow shared
memory between processes more easily.
Segmentation
• Another non-contiguous memory allocation technique.
• Segmentation is a memory management technique that divides a
process's address space into multiple variable-sized segments based on
logical divisions such as code, data, stack, heap, etc.
• Unlike paging, which divides memory into fixed-size blocks (pages),
segmentation is based on logical units that vary in size.

Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
User’s View of a Program

Logical View of Segmentation

[Figure: segments 1–4 of the user space mapped to noncontiguous regions of physical memory]
Segmentation
Key Concepts of Segmentation:
• Logical Division of Memory:
– A program is divided into multiple segments (e.g., code segment, data
segment, stack segment).
– Each segment has a name (or number) and a length.

• Segment Table:
• The OS maintains a segment table for each process, containing:
– Segment Number (ID): Unique identifier for each segment.
– Base Address: Starting physical address of the segment.
– Limit (Size): Length of the segment (to prevent accessing beyond allocated
memory).
Segmentation
Key Concepts of Segmentation
• Address Translation in Segmentation:
• A logical address consists of:
– Segment Number (S)
– Offset (D) within the segment
• The segment number (S) is used to look up the base address (B) from
the segment table.
• The physical address (PA) is computed as:
– PA = Base Address + D
• The OS ensures that D < Limit, preventing out-of-bounds access.

Segmentation Hardware

Segmentation
Example of Segmentation
Segment No. Base Address Limit (Size)
0 (Code) 5000 1000
1 (Data) 8000 500
2 (Stack) 9000 300

• If a process generates the logical address (1, 200):


– Segment 1 (Data) → Base Address = 8000
– Offset (200) is within limit (500)
– Physical Address = 8000 + 200 = 8200
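The example above can be reproduced with a minimal sketch. The dict-based segment table and the `translate` helper are illustrative assumptions, not a real OS mechanism; the table contents match the example on this slide.

```python
# Segment table from the example: segment number -> (base address, limit).
segment_table = {
    0: (5000, 1000),  # code
    1: (8000, 500),   # data
    2: (9000, 300),   # stack
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    # The OS checks offset < limit before forming the physical address.
    if offset >= limit:
        raise MemoryError("trap: addressing error")
    return base + offset

print(translate(1, 200))  # segment 1 base 8000 + offset 200 = 8200
```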

Logical-to-Physical Address Translation in
segmentation

Example of Segmentation

Segment Table
• Segment table: stores the information about each segment of the process.
– It has two columns.
– First column stores the base address of the segment in the main memory.
– Second column stores length of the segment.
– Segment table is stored as a separate segment in the main memory.
– Segment table base register (STBR) stores the base address of the segment
table.

Segment Table

Limit: length or size of the segment.


Base: base address of the segment in the MM.
Illustration: Segmentation
Problem: Consider the following segment table
Segment No. Base Length
0 1219 700
1 2300 14
2 90 100
3 1327 580
4 1952 96

Which of the following logical addresses will produce a trap (addressing error)?
A. 0, 430
B. 1, 11
C. 2, 100
D. 3, 425
E. 4, 95
Calculate the physical address if no trap is produced.
Illustration: Segmentation
In a segmentation scheme, the generated logical address consists of two parts
• Segment Number
• Segment Offset

We know
• Segment Offset must always lie in the range [0, limit-1].
• If the segment offset is greater than or equal to the limit of the segment,
– then a trap (addressing error) is produced.

Illustration: Segmentation
Option-A: 0, 430
Here,
Segment Number = 0
Segment Offset = 430

We have,
In the segment table, limit of segment 0 is 700.
Thus, segment offset must always lie in the range = [0, 700-1] = [0, 699]

Since the generated segment offset lies in the range, the request is valid.

Therefore, no trap will be produced.


Physical Address = 1219 + 430 = 1649
Illustration: Segmentation
Option-B: 1, 11
Here,
Segment Number = 1
Segment Offset = 11

We have,
In the segment table, limit of segment-1 is 14.
Thus, segment offset must always lie in the range = [0, 14-1] = [0, 13]

Since the generated segment offset lies in the range, the request is valid.

Therefore, no trap will be produced.


Physical Address = 2300 + 11 = 2311
Illustration: Segmentation
Option-C: 2, 100
Here,
Segment Number = 2
Segment Offset = 100

We have,
In the segment table, limit of segment-2 is 100.
Thus, segment offset must always lie in the range = [0, 100-1] = [0, 99]

Since the generated segment offset does not lie in the range, the request is invalid.

Therefore, trap will be produced.


Illustration: Segmentation
Option-D: 3, 425
Here,
Segment Number = 3
Segment Offset = 425

We have,
In the segment table, limit of segment-3 is 580.
Thus, segment offset must always lie in the range = [0, 580-1] = [0, 579]

Since the generated segment offset lies in the range, the request is valid.
Therefore, no trap will be produced.
Physical Address = 1327 + 425 = 1752
Illustration: Segmentation
Option-E: 4, 95
Here,
Segment Number = 4
Segment Offset = 95

We have,
In the segment table, limit of segment-4 is 96.
Thus, segment offset must always lie in the range = [0, 96-1] = [0, 95]
Since the generated segment offset lies in the range, the request is valid.

Therefore, no trap will be produced.


Physical Address = 1952 + 95 = 2047
Thus, Option-(C) is correct.
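All five options can be checked at once with a short sketch (the `resolve` helper and the "trap" sentinel are illustrative, not an OS API):

```python
# Segment table from the problem: segment number -> (base, limit).
segments = {0: (1219, 700), 1: (2300, 14), 2: (90, 100),
            3: (1327, 580), 4: (1952, 96)}

def resolve(seg, off):
    base, limit = segments[seg]
    # Offset must lie in [0, limit-1]; anything else traps.
    return base + off if off < limit else "trap"

for addr in [(0, 430), (1, 11), (2, 100), (3, 425), (4, 95)]:
    print(addr, "->", resolve(*addr))
```

Only (2, 100) resolves to "trap", matching the answer worked out above.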
Pros and Cons
Advantages:
• Better logical organization: Segments represent logical divisions (e.g., code,
stack, heap).
• Memory protection: Each segment can have different protection levels (read-
only, executable).
• Supports dynamic memory allocation: Unlike paging, segments are variable-
sized, reducing internal fragmentation.

Pros and Cons
Disadvantages:
• External fragmentation: Over time, free memory is scattered, leading to
inefficient allocation.
• Variable-size allocation is complex: Requires compaction (memory
defragmentation) to optimize usage.
• Slower than paging: Due to variable sizes, memory lookup is more complex
compared to fixed-size pages.

Memory Management Techniques
• Fixed Partitioning
– Description: Main memory is divided into a number of static partitions at system generation time. A process may be loaded into a partition of equal or greater size.
– Strengths: Simple to implement; little operating system overhead.
– Weaknesses: Inefficient use of memory due to internal fragmentation; maximum number of active processes is fixed.
• Dynamic Partitioning
– Description: Partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process.
– Strengths: No internal fragmentation; more efficient use of main memory.
– Weaknesses: Inefficient use of processor due to the need for compaction to counter external fragmentation.
• Simple Paging
– Description: Main memory is divided into a number of equal-size frames. Each process is divided into a number of equal-size pages of the same length as frames. A process is loaded by loading all of its pages into available, not necessarily contiguous, frames.
– Strengths: No external fragmentation.
– Weaknesses: A small amount of internal fragmentation.
Memory Management Techniques
• Simple Segmentation
– Description: Each process is divided into a number of segments. A process is loaded by loading all of its segments into dynamic partitions that need not be contiguous.
– Strengths: No internal fragmentation; improved memory utilization and reduced overhead compared to dynamic partitioning.
– Weaknesses: External fragmentation.
• Virtual-Memory Paging
– Description: As with simple paging, except that it is not necessary to load all of the pages of a process. Nonresident pages that are needed are brought in later automatically.
– Strengths: No external fragmentation; higher degree of multiprogramming; large virtual address space.
– Weaknesses: Overhead of complex memory management.
• Virtual-Memory Segmentation
– Description: As with simple segmentation, except that it is not necessary to load all of the segments of a process. Nonresident segments that are needed are brought in later automatically.
– Strengths: No internal fragmentation; higher degree of multiprogramming; large virtual address space; protection and sharing support.
– Weaknesses: Overhead of complex memory management.
Memory Protection
• Memory protection in paging systems is implemented by associating
protection bits with each page frame in the page table.
• These bits define the access permissions (e.g., read, write, execute) for
processes, ensuring that a process does not access memory that it is not
authorized to use.

Memory Protection
• Memory protection implemented by associating protection bit with
each frame

• Valid-invalid bit attached to each entry in the page table:


– “valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page
• signifies that the page is in memory
– “invalid” indicates that the page is not in the process’ logical
address space
• signifies that the page may be illegal or has not been brought into
memory yet.

Valid (v) or Invalid (i) Bit In A Page
Table

Memory Protection
• Page Table Entries (PTEs) Contain Protection Information
• Each page table entry (PTE) includes:
– Frame Number (Physical Page Location)
– Protection Bits (Access Permissions)
– Valid/Invalid Bit (Indicates if page is allocated)
– Dirty Bit (Indicates if page was modified)
– Reference Bit (Used for page replacement algorithms)
• Types of Protection Bits
– Read (R) → If set, the process can read from the page.
– Write (W) → If set, the process can write to the page.
– Execute (X) → If set, the page contains executable instructions.
– Kernel/User Mode Bit → Restricts access based on privilege level:
• Kernel Mode: OS processes can access.
• User Mode: User processes can access only allowed pages.
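A minimal sketch of how such protection bits might be checked on a page-table entry. The bit layout, flag names, and exceptions are hypothetical, chosen only to mirror the R/W/X and valid bits listed above:

```python
# Hypothetical PTE protection-bit layout for illustration.
READ, WRITE, EXEC, VALID = 0x1, 0x2, 0x4, 0x8

def check_access(pte_bits, requested):
    # The valid bit must be set, else the reference is a page fault.
    if not (pte_bits & VALID):
        raise MemoryError("page fault: invalid page")
    # Every requested permission bit must be granted by the PTE.
    if (pte_bits & requested) != requested:
        raise PermissionError("protection fault")

code_page = VALID | READ | EXEC   # a read-only, executable code page
check_access(code_page, READ)     # allowed: read permission is set
try:
    check_access(code_page, WRITE)
except PermissionError:
    print("write denied")         # writes to a code page are rejected
```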
Shared Pages
• Shared code
– One copy of read-only (reentrant) code shared among processes
(e.g., text editors, compilers, window systems).
– Shared code must appear in same location in the logical address
space of all processes

• Private code and data


– Each process keeps a separate copy of the code and data
– The pages for the private code and data can appear anywhere in
the logical address space

Shared Pages Example

