ch6 Memory
Fall 2024
Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013, Edited by H. Asadi, Fall 2024
Lecture 6: Memory Management
Background
Swapping
Contiguous Memory Allocation
Segmentation
Paging
Structure of Page Table
Example: Intel 32 and 64-bit Architectures
Example: ARM Architecture
Objectives
To Provide a Detailed Description of Various
Ways of Organizing Memory Hardware
To Discuss Various Memory-Management
Techniques, including Paging and
Segmentation
To Provide a Description of Intel Pentium,
which Supports both Pure Segmentation and
Segmentation with Paging
Background
Program must be Brought (from disk) into Memory
and Placed within a Process for it to be Run
Main Memory (MM) and Registers are Only
Storage CPU can Access Directly
Memory Unit only Sees
A Stream of Addresses + Read requests, or Address +
Data and Write Requests
Register Access in one CPU Clock (or Less)
MM can Take Many Cycles, Causing a Stall
Cache Sits between MM and CPU registers
Protection of Memory required to Ensure Correct
Operation
Base and Limit Registers
A Pair of Base and Limit Registers Define
Logical Address Space
CPU must Check Every Memory Access
Generated in user mode to be sure it is between
base and limit for that user
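A minimal sketch of my own (not from the slides) of this check: two comparisons per user-mode access, with a trap on failure. The base/limit values follow the book's example figure; everything else is assumed.

/* Base/limit protection check performed by hardware on every user-mode access. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t base_reg  = 300040;   /* illustrative values */
static uint32_t limit_reg = 120900;

/* Returns true if the access is legal; real hardware traps to the OS otherwise. */
bool check_access(uint32_t addr)
{
    return addr >= base_reg && addr < base_reg + limit_reg;
}

int main(void)
{
    printf("%d\n", check_access(300050));   /* 1: inside the process's space          */
    printf("%d\n", check_access(500000));   /* 0: would trap (addressing error)       */
    return 0;
}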
Hardware Address Protection
Address Binding
Programs on Disk, Ready to be Brought into
Memory to Execute Form an Input Queue
Without support, must be loaded into addr: 0000
Inconvenient to Have First User Process Physical
Address Always at 0000
How can it not be?
Further, Addresses Represented in Different Ways
at Different Stages of a Program’s Life
Source code, compiled code, linker/loader
addresses
Address Binding (cont.)
Different Ways of Representing Addresses
Source code addresses usually symbolic
Compiled code addresses bind to relocatable addresses
e.g., "14 bytes from beginning of module"
Linker or loader will bind relocatable addresses to absolute addresses
e.g., 74014
Each binding maps one address space to
another
Binding of Instructions and Data to Memory
Address Binding to Memory Addresses can
Happen at Three Different Stages
Compile time: If memory location known a
priori, absolute code can be generated
Must recompile code if starting location changes
Load time: Must generate relocatable code if
memory location is not known at compile time
Execution time: Binding delayed until run time
if process can be moved during its execution
from one memory segment to another
Need HW support for address maps (e.g., base and
limit registers)
Multistep Processing of a User Program
Logical vs. Physical Address Space
Concept of a Logical Address Space
Central to Proper Memory Management
Bound to a Separate Physical Address Space
Logical address
Generated by CPU
Aka, virtual address
Physical Address
Address seen by memory unit
Logical and Physical Addresses
Are same in compile-time and load-time
address-binding schemes
Differ in execution-time address-binding scheme
Logical vs. Physical Address Space (cont.)
Logical Address Space
Set of all logical addresses generated by a
program
Physical Address Space
Set of all physical addresses corresponding to the logical addresses generated by a program
Logical vs. Physical Address Space (cont.)
Memory Management Unit (MMU)
HW device that at run-time maps virtual to
physical address
Many methods possible
Will be discussed in this lecture
To start, Consider Simple Scheme
Value in base register is added to every address
generated by a user process at time it is sent to
memory
Base register now called relocation register
MS-DOS on Intel 80x86 used 4 relocation
registers
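A minimal sketch of my own (not the slides' code) of this relocation-register scheme: the base value 14000 and logical address 346 follow the book's classic example, the rest is assumed.

/* Simple MMU scheme: relocation (base) register added to every logical address. */
#include <stdint.h>
#include <stdio.h>

static uint32_t relocation_reg = 14000;   /* illustrative base value */

uint32_t mmu_translate(uint32_t logical)
{
    return logical + relocation_reg;      /* physical = logical + relocation */
}

int main(void)
{
    printf("%u\n", mmu_translate(346));   /* logical 346 -> physical 14346 */
    return 0;
}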
Logical vs. Physical Address Space (cont.)
User Program Deals with Logical Addresses
It never sees real physical addresses
Execution-time binding occurs when reference is
made to location in memory
Logical address bound to physical addresses
Dynamic Relocation using a Relocation Register
Dynamic loading: a routine is not loaded until it is called
Better memory-space utilization
Unused routine is never loaded
All Routines Kept on Disk in Relocatable Load
Format
Useful when Large Amounts of Code are Needed
to Handle Infrequently Occurring Cases
No Special Support from OS required
Implemented through program design
OS can help by providing libraries to implement dynamic
loading
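A rough sketch of my own (not from the slides): on POSIX systems a program can do dynamic loading explicitly through the dl library. The library name libstats.so and routine rare_report below are hypothetical.

/* Program-controlled dynamic loading via dlopen/dlsym (typically linked with -ldl). */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Nothing from the library is loaded until this rarely taken path runs. */
    void *handle = dlopen("libstats.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }

    void (*rare_report)(void) = (void (*)(void)) dlsym(handle, "rare_report");
    if (rare_report)
        rare_report();        /* call the routine that was just brought into memory */

    dlclose(handle);
    return 0;
}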
Dynamic Linking
Static Linking – system libraries and
program code combined by loader into
binary program image
Wastes both disk space and main memory
Dynamic Linking – linking postponed until
execution time
Small piece of code, Stub, used to locate
appropriate memory-resident library routine
Stub replaces itself with address of routine, and
executes routine
Dynamic Linking (cont.)
OS Checks if Routine is in Memory
If not in address space, add to address space
Dynamic Linking is Particularly Useful for
Libraries
Processes that use a language library execute
only one copy of library code
Library updates are applied automatically
Swapping
A Process can be Swapped Temporarily out
of Memory to a Backing Store, and Brought
back into Memory for Continued Execution
Why Swap out?
Quantum of round robin expired → need to bring in a new process → not enough memory space
Total physical memory space of processes can exceed physical memory
Backing store
Fast disk large enough to accommodate copies
of all memory images for all users
Swapping (cont.)
Major part of Swap Time is Transfer Time
Total transfer time is directly proportional to
amount of memory swapped
System Maintains a Ready Queue of ready-
to-run processes
Which have memory images on disk (or in main
memory)
Schematic View of Swapping
Context Switch Time and Swapping
Next process to be put on the CPU is not in MM
Need to swap out a process & swap in target process
Context Switch Time can then be Very High
Example: 100MB Process
Swapping to HDD with transfer rate of 50MB/sec
Swap out time of 2000 ms
Plus swap in of same sized process
Total context switch swapping component time
of 4000ms (4 seconds)
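The slide's arithmetic, redone as a tiny sketch of my own:

/* Worked example: 100 MB process swapped out and a same-sized process swapped in
 * over a 50 MB/s disk. */
#include <stdio.h>

int main(void)
{
    double process_mb   = 100.0;   /* size of each memory image   */
    double transfer_mbs = 50.0;    /* HDD transfer rate in MB/s   */

    double swap_out_ms = process_mb / transfer_mbs * 1000.0;   /* 2000 ms        */
    double total_ms    = 2.0 * swap_out_ms;                    /* out + in       */

    printf("swap out: %.0f ms, total: %.0f ms\n", swap_out_ms, total_ms);
    return 0;
}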
Swapping (cont.)
Q: Does Swapped out Process Need to
Swap Back into Same Physical Addresses?
Ans: Depends on address binding method
Standard Swapping is Too Time-Consuming
→ Modified versions of swapping found on many systems (e.g., UNIX, Linux, and Windows)
Swapping normally disabled
Started if more than threshold amount of
memory allocated
Disabled again once memory demand reduced
below threshold
Context Switch Time and Swapping (cont.)
Can reduce Context Switch Time
If reduce size of memory swapped
by knowing how much memory is really being used
System calls to inform OS of memory use via
request_memory() and release_memory()
Other Constraints as well on Swapping
Pending I/O – can’t swap out as I/O would occur
to wrong process
Or always transfer I/O to kernel space, then to
I/O device
Known as double buffering, adds overhead
Memory Management Schemes
Main Memory must support OS + User
Processes
Limited Resource → must allocate efficiently
Memory Management Schemes
Contiguous Allocation
Segmentation
Paging
Segmentation + Paging
Contiguous Allocation
Contiguous Allocation is one Early Method
Main Memory usually divided into two partitions:
Resident OS, usually held in low memory with
interrupt vector
User processes then held in high memory
Each process contained in single contiguous
section of memory
Contiguous Allocation (cont.)
Relocation Registers used to Protect User
Processes from each other, and from
Changing OS code and data
Base register contains value of smallest physical
address
Limit register contains range of logical
addresses – each logical address must be less
than the limit register
MMU maps logical address dynamically
Can then allow actions such as kernel code
being transient and kernel changing size
HW Support for Relocation and
Limit Registers
Multiple-Partition Allocation
Fixed-Sized Partitions
Each partition contains exactly one process
Degree of multiprogramming limited by number
of partitions
Originally used in IBM OS/360
No longer in use
Multiple-Partition Allocation (cont.)
Variable Partition Scheme
Sized to a given process’ needs
Hole – block of available memory
Holes of various size are scattered throughout memory
When a process arrives, it is allocated memory
from a hole large enough to accommodate it
Process exiting frees its partition
Adjacent free partitions combined
OS maintains information about:
a) allocated partitions b) free partitions (hole)
Multiple-Partition Allocation (cont.)
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
First-Fit: Allocate first hole that is big
enough
Best-Fit: Allocate smallest hole that is big
enough; must search entire list, unless
ordered by size
Produces smallest leftover hole
Worst-Fit: Allocate largest hole; must also
search entire list
Produces largest leftover hole
First-fit and best-fit better than worst-fit in terms of
speed and storage utilization
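A compact sketch of my own of first-fit over a linked list of holes (struct layout and names are assumptions); best-fit and worst-fit differ only in scanning the entire list for the smallest or largest adequate hole.

/* First-fit allocation over a singly linked free list of holes. */
#include <stddef.h>
#include <stdio.h>

struct hole {
    size_t       start;   /* starting address of the free block */
    size_t       size;    /* size of the free block in bytes    */
    struct hole *next;
};

/* Returns the start of an allocated block of n bytes, or (size_t)-1 if no hole fits. */
size_t first_fit(struct hole **free_list, size_t n)
{
    for (struct hole **pp = free_list; *pp; pp = &(*pp)->next) {
        struct hole *h = *pp;
        if (h->size >= n) {              /* first hole that is big enough      */
            size_t addr = h->start;
            h->start += n;               /* shrink the hole ...                */
            h->size  -= n;
            if (h->size == 0)            /* ... or unlink it if fully used     */
                *pp = h->next;           /* (node itself not freed: sketch)    */
            return addr;
        }
    }
    return (size_t)-1;                   /* no hole fits: external fragmentation */
}

int main(void)
{
    struct hole h2 = { .start = 9000, .size = 500, .next = NULL };
    struct hole h1 = { .start = 1000, .size = 200, .next = &h2 };
    struct hole *free_list = &h1;

    printf("%zu\n", first_fit(&free_list, 300));   /* 9000: skips the 200-byte hole */
    printf("%zu\n", first_fit(&free_list, 100));   /* 1000: first hole now fits     */
    return 0;
}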
Fragmentation
External Fragmentation – Total memory
space exists to satisfy a request, but it is not
contiguous
Internal Fragmentation – Allocated memory
may be slightly larger than requested
memory; this size difference is memory
internal to a partition, but not being used
First fit analysis reveals that given N blocks
allocated, 0.5 N blocks lost to fragmentation
1/3 may be unusable -> 50-percent rule
Fragmentation (cont.)
Solutions to External Fragmentation
Compaction
Shuffle memory contents to place all free memory together in one large block
– E.g., move all used blocks to one end of memory
Compaction is possible only if relocation is dynamic, and is done at execution time
Non-contiguous memory allocation schemes
Segmentation
Paging
Logical View of Segmentation
[Figure: a program's segments (1, 2, 3, 4) in the logical address space mapped to scattered, noncontiguous regions of physical memory]
Segmentation Architecture
Logical Address consists of a Two Tuple:
<segment-number, offset>,
Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
base – contains starting physical address where
segments reside in memory
limit – specifies length of segment
Segment-table base register (STBR) points to
segment table’s location in memory
Segment-table length register (STLR) indicates
number of segments used by a program;
segment number s is legal if s < STLR
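A sketch of my own of the translation the segmentation hardware performs with the segment table and STLR; the table values mirror the book's segmentation example, the rest is assumed.

/* Segmentation translation: trap if s >= STLR or offset >= limit, else base + offset. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct seg_entry { uint32_t base; uint32_t limit; };

static struct seg_entry seg_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300,  400 },   /* segment 2 */
};
static uint32_t stlr = 3;             /* number of segments in use */

uint32_t translate(uint32_t s, uint32_t d)
{
    if (s >= stlr || d >= seg_table[s].limit) {
        fprintf(stderr, "trap: addressing error\n");
        exit(1);
    }
    return seg_table[s].base + d;
}

int main(void)
{
    printf("%u\n", translate(2, 53));   /* byte 53 of segment 2 -> 4353 */
    return 0;
}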
Segmentation Architecture (cont.)
Protection
With each entry in segment table associate:
validation bit = 0 ⇒ illegal segment
read/write/execute privileges
Segmentation Hardware
Example of Segmentation
Segmentation vs. Variable-Sized
Contiguous Allocation
Variable-Sized Contiguous Allocation
Needs to bring entire process into memory
Both code, data, stack, …
Segmentation
Break program into different segments
Brings segments to memory on demand
No need to bring unused library methods or
unused code & data segments to memory
Problems with Segmentation?
Example
Code segment: 10MB
Data segment: 100MB
Stack segment: 20MB
Lib segment: 10MB
Problem?
Still significant amount of fragmentation
Both external & internal fragmentation
Paging
Physical Address Space of a Process can be
Non-Contiguous
Process is allocated physical memory whenever
latter is available
Avoids external fragmentation
Avoids problem of varying sized memory chunks
Divide Physical Memory into Fixed-sized
blocks called Frames
Size is power of 2, between 512 bytes and 16
Mbytes
Paging (cont.)
Divide Logical Memory into Blocks of Same
Size called Pages
Keep Track of all Free Frames
To Run a Program of size N pages, Need to
Find N Free Frames and Load Program
Set up a Page Table to Translate Logical to
Physical Addresses
Backing Store Likewise Split into Pages
Still have Internal Fragmentation
Address Translation Scheme
Address Generated by CPU divided into:
Page number (p) – used as an index into a
page table which contains base address of
each page in physical memory
Page offset (d) – combined with base address to define the physical memory address that is sent to the memory unit
Address layout: | page number (p): m − n bits | page offset (d): n bits |
Logical address space: 2^m
Page size: 2^n
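A sketch of my own of this p/d split and page-table lookup, assuming 4 KB pages and a tiny made-up page table (the mapping values follow the book's small paging example).

/* Paging address translation: split the logical address into p and d,
 * look up the frame, recombine. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12u                      /* n: 4 KB pages            */
#define PAGE_SIZE (1u << PAGE_BITS)
#define OFFSET_MASK (PAGE_SIZE - 1u)

/* Hypothetical per-process page table: page number -> frame number. */
static uint32_t page_table[] = { 5, 6, 1, 2 };

uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> PAGE_BITS;     /* page number              */
    uint32_t d = logical & OFFSET_MASK;    /* page offset              */
    uint32_t frame = page_table[p];        /* page-table lookup        */
    return (frame << PAGE_BITS) | d;       /* physical address         */
}

int main(void)
{
    printf("0x%x\n", translate(0x2123));   /* page 2 -> frame 1, offset 0x123 -> 0x1123 */
    return 0;
}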
Paging Hardware
Paging Model of Logical and
Physical Memory
Paging Example
Free Frames
Implementation of Page Table (cont.)
Some TLBs Store Address-Space
Identifiers (ASIDs) in each TLB entry
Uniquely identifies each process to provide
address-space protection for that process
Otherwise need to flush at every context switch
On a TLB Miss, Value Loaded into TLB for
Faster Access Next Time
Replacement policies must be considered
Some entries wired down for permanent fast
access (e.g., kernel code pages)
Associative Memory
Associative Memory – Parallel Search
[Table of TLB entries: Page # → Frame #]
Paging Hardware With TLB
Effective Access Time
Associative lookup takes ε time units
Can be < 10% of memory access time
Hit ratio = α
Hit ratio – percentage of times that a page number is found in the associative registers; ratio related to number of associative registers
Consider α = 80%, ε = 20ns for TLB search, 100ns for memory access
Effective Access Time (EAT), measured in memory-access time units:
EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α
Effective Access Time (cont.)
Example 1:
Consider α = 80%, ε = 20ns for TLB search, 100ns for memory access
EAT = 0.80 x 100 + 0.20 x 200 = 120ns
Example 2:
Consider a more realistic hit ratio:
α = 99%, ε = 20ns for TLB search, 100ns for memory access
EAT = 0.99 x 100 + 0.01 x 200 = 101ns
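Both examples follow the slides' simplified model (a TLB hit costs one memory access, a miss costs two); a small sketch of my own reproduces the numbers:

/* EAT under the simplified model: hit -> 1 memory access, miss -> 2 accesses. */
#include <stdio.h>

double eat(double hit_ratio, double mem_ns)
{
    return hit_ratio * mem_ns + (1.0 - hit_ratio) * 2.0 * mem_ns;
}

int main(void)
{
    printf("alpha = 0.80: EAT = %.0f ns\n", eat(0.80, 100.0));   /* 120 ns */
    printf("alpha = 0.99: EAT = %.0f ns\n", eat(0.99, 100.0));   /* 101 ns */
    return 0;
}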
Memory Protection
Memory Protection Implemented by Associating
Protection bit with each Frame to Indicate if Read-
Only or Read-Write Access is Allowed
Can also add more bits to indicate page execute-only
Valid-Invalid bit Attached to each Entry in Page
Table
“valid” indicates that associated page is in process’
logical address space, and is thus a legal page
“invalid” indicates that page is not in process’ logical
address space
Or use page-table length register (PTLR)
Any Violations Result in a Trap to Kernel
Valid (v) or Invalid (i) Bit In A Page Table
Shared Pages
Shared code
One copy of read-only (reentrant) code shared among
processes (i.e., text editors, compilers, window systems)
Similar to multiple threads sharing same process space
Also useful for inter-process communication if sharing of
read-write pages is allowed
Private Code and Data
Each process keeps a separate copy of code and data
Pages for private code and data can appear anywhere in
logical address space
Shared Pages Example
Structure of Page Table
Memory Structures for Paging can Get Huge using
Straight-Forward Methods
Consider a 32-bit logical address space as on modern
computers
Page size of 4 KB (2^12)
Page table would have 1 million entries (2^32 / 2^12 = 2^20)
If each entry is 4 bytes → 4 MB of physical memory for the page table alone
That amount of memory used to cost a lot
Don’t want to allocate that contiguously in main memory
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Hierarchical Page Tables
Break up Logical Address Space into
Multiple Page Tables
A Simple Technique is a Two-Level Page
table
We then Page the page table
Two-Level Page-Table Scheme
Two-Level Paging Example
A Logical Address (on 32-bit machine with 1K
page size) Divided into:
A page number consisting of 22 bits
A page offset consisting of 10 bits
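A sketch of my own of that split for this example; masks and shift amounts follow the 12/10/10-bit division above.

/* Split a 32-bit logical address for the two-level example:
 * 12-bit outer index (p1), 10-bit inner index (p2), 10-bit offset (d). */
#include <stdint.h>
#include <stdio.h>

void split(uint32_t logical)
{
    uint32_t d  = logical & 0x3FF;           /* low 10 bits: offset        */
    uint32_t p2 = (logical >> 10) & 0x3FF;   /* next 10 bits: inner index  */
    uint32_t p1 = logical >> 20;             /* top 12 bits: outer index   */
    printf("p1=%u p2=%u d=%u\n", p1, p2, d);
}

int main(void)
{
    split(0x00ABCDEF);   /* arbitrary example address */
    return 0;
}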
64-bit Logical Address Space
Even a two-level paging scheme is not sufficient
If page size is 4 KB (2^12)
Then the page table has 2^52 entries
If 2-level scheme, inner page tables have 2^10 4-byte entries
Address would look like: outer page (42 bits) | inner page (10 bits) | offset (12 bits)
Outer page table has 2^42 entries, or 2^44 bytes
One solution is to add a 2nd outer page table
But in the following example the 2nd outer page table is still 2^34 bytes in size
Possibly 4 memory accesses to get to one physical memory location
→ hierarchical paging not appropriate for 64-bit addressing
Three-Level Paging Scheme
Hashed Page Tables
Common in Address Spaces greater than 32 bits
Virtual Page Number Hashed into a Page Table
This page table contains a chain of elements
hashing to same location
Each element contains (1) virtual page
number (2) value of mapped page frame (3) a
pointer to next element
Virtual page numbers are compared in this
chain searching for a match
If a match is found, corresponding physical frame
is extracted
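A simplified sketch of my own of the chained lookup described above (bucket count and struct names are assumptions):

/* Hashed page table lookup: hash the virtual page number, walk the chain. */
#include <stdint.h>
#include <stdio.h>

struct hpt_entry {
    uint64_t          vpn;     /* (1) virtual page number               */
    uint64_t          frame;   /* (2) mapped page frame                 */
    struct hpt_entry *next;    /* (3) pointer to next element in chain  */
};

#define NBUCKETS 1024u
static struct hpt_entry *buckets[NBUCKETS];

/* Returns the frame for vpn, or -1 if there is no mapping (page fault). */
int64_t hpt_lookup(uint64_t vpn)
{
    for (struct hpt_entry *e = buckets[vpn % NBUCKETS]; e; e = e->next)
        if (e->vpn == vpn)
            return (int64_t)e->frame;
    return -1;
}

int main(void)
{
    static struct hpt_entry e = { .vpn = 7, .frame = 42, .next = NULL };
    buckets[7 % NBUCKETS] = &e;                  /* install one mapping   */
    printf("%lld\n", (long long)hpt_lookup(7));  /* 42                    */
    printf("%lld\n", (long long)hpt_lookup(8));  /* -1 -> page fault      */
    return 0;
}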
Hashed Page Table
Inverted Page Table
One Entry for each Real Page of Memory
Rather than each process having a page table
and keeping track of all possible logical pages,
track all physical pages
Entry consists of virtual address of page stored
in that real memory location, with information
about process that owns that page
Pros
Decreases memory needed to store each page
table
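A sketch of my own of the idea: one entry per physical frame, so the index of the matching (pid, virtual page) pair is the frame number (sizes and names are assumptions).

/* Inverted page table: translation is a search; the matching index IS the frame. */
#include <stdint.h>
#include <stdio.h>

struct ipt_entry { uint32_t pid; uint64_t vpn; };

#define NFRAMES 8u
static struct ipt_entry ipt[NFRAMES];

/* Linear search; real systems add a hash table to speed this up. */
int64_t ipt_lookup(uint32_t pid, uint64_t vpn)
{
    for (uint32_t frame = 0; frame < NFRAMES; frame++)
        if (ipt[frame].pid == pid && ipt[frame].vpn == vpn)
            return frame;
    return -1;                                  /* not resident -> page fault */
}

int main(void)
{
    ipt[3] = (struct ipt_entry){ .pid = 17, .vpn = 5 };  /* frame 3 holds page 5 of process 17 */
    printf("%lld\n", (long long)ipt_lookup(17, 5));      /* 3  */
    printf("%lld\n", (long long)ipt_lookup(17, 9));      /* -1 */
    return 0;
}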
Inverted Page Table (cont.)
Cons
Increases time needed to search table when a
page reference occurs
Table is sorted by physical address, but lookups use virtual addresses
Inverted Page Table Architecture
Example: Intel 32 and 64-bit Architectures
Dominant Industry Chips
Pentium CPUs are 32-bit
Called IA-32 architecture
Current Intel CPUs are 64-bit
Called the x86-64 (Intel 64) architecture
Example: Intel IA-32 Architecture
Supports both segmentation and
segmentation with paging
Each segment can be 4GB
Up to 16K segments per process
Divided into two partitions
First partition of up to 8K segments is private to the process (kept in the local descriptor table (LDT))
Second partition of up to 8K segments is shared among all processes (kept in the global descriptor table (GDT))
Example: Intel IA-32 Architecture (cont.)
CPU Generates Logical Address
Selector given to segmentation unit
Which produces linear addresses
Logical to Physical Address Translation in IA-32
Reading Assignment
Clustered Page Table
Paging in Oracle SPARC Solaris
Detailed Paging in IA-32 and IA-64
Hashed Page Tables (cont.)
Variation for 64-bit Addresses is Clustered
Page Tables
Similar to hashed but each entry refers to
several pages (such as 16) rather than 1
Especially useful for sparse address spaces
(where memory references are non-contiguous
and scattered)
Oracle SPARC Solaris
Consider modern, 64-bit OS example with tightly
integrated HW
Goals are efficiency, low overhead
Based on hashing, but more complex
Two Hash Tables
One for the kernel and one for all user processes
Each maps memory addresses from virtual to physical
memory
Each entry represents a contiguous area of mapped
virtual memory
More efficient than having a separate hash-table entry for each page
Each entry has a base address and a span (indicating the number of pages the entry represents)
Oracle SPARC Solaris (cont.)
TLB holds translation table entries (TTEs) for fast
hardware lookups
A cache of TTEs resides in a translation storage buffer (TSB)
Includes an entry per recently accessed page
Intel IA-32 Paging Architecture
Intel IA-32 Page Address Extensions
32-bit address limits led Intel to create page
address extension (PAE), allowing 32-bit apps
access to more than 4GB of memory space
Paging went to a 3-level scheme
Top two bits refer to a page directory pointer table
Page-directory and page-table entries moved to 64-
bits in size
Net effect is increasing the address space to 36 bits (64GB of physical memory)
[Figure: PAE translation – CR3 register → page directory pointer table (address bits 31–30) → page directory (bits 29–21) → page table (bits 20–12) → 4-KB page (offset bits 11–0)]
Intel x86-64
Current generation Intel x86 architecture
64 bits is ginormous (> 16 exabytes)
In practice only implement 48 bit addressing
Page sizes of 4KB, 2MB, 1GB
Four levels of paging hierarchy
Can also use PAE so virtual addresses are 48 bits
and physical addresses are 52 bits
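A sketch of my own of how a 48-bit virtual address splits across the four levels with 4 KB pages; the PML4/PDPT level names are Intel's terms, not from the slide.

/* x86-64, 4 KB pages: four 9-bit table indices plus a 12-bit page offset. */
#include <stdint.h>
#include <stdio.h>

void split48(uint64_t va)
{
    uint64_t offset = va & 0xFFF;           /* bits 11..0                     */
    uint64_t pt     = (va >> 12) & 0x1FF;   /* page-table index               */
    uint64_t pd     = (va >> 21) & 0x1FF;   /* page-directory index           */
    uint64_t pdpt   = (va >> 30) & 0x1FF;   /* page-directory-pointer index   */
    uint64_t pml4   = (va >> 39) & 0x1FF;   /* top-level (PML4) index         */
    printf("pml4=%llu pdpt=%llu pd=%llu pt=%llu off=%llu\n",
           (unsigned long long)pml4, (unsigned long long)pdpt,
           (unsigned long long)pd, (unsigned long long)pt,
           (unsigned long long)offset);
}

int main(void)
{
    split48(0x00007f1234567abcULL);   /* arbitrary user-space address */
    return 0;
}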
End of Lecture 6