
Unit-5

MEMORY MANAGEMENT

Prepared By:
Ms.Rikita Chokshi,
Assistant Professor, CE, CSPIT,
CHARUSAT

August 14, 2025| U & P U. Patel Department of Computer Engineering 1


CONTENTS
• Multiprogramming with Fixed and Variable Partitions,
• Paging: Principle of Operation,
• Page Allocation,
• H/W Support for Paging,
• Multilevel Paging, Segmentation,
• Swapping,
• Virtual Memory: Concept,
• Performance of Demand Paging,
• Page Replacement Algorithms,
• Thrashing and Working Sets.



Memory Management Basics

• Don’t have infinite RAM


• Do have a memory hierarchy:
• Cache (fast)
• Main memory (medium)
• Disk (slow)
• The memory manager has the job of using this hierarchy to
create an abstraction (illusion) of easily accessible
memory
Memory Management Basics

• Main memory and registers are the only storage the CPU can
access directly
• Register access in one CPU clock (or less)
• Main memory can take many cycles
• Cache sits between main memory and CPU registers
• Protection of memory required to ensure correct
operation
Memory Management Basics

• The part of the operating system that manages the


memory hierarchy is called the “memory manager”
• Its job is to:
• Efficiently manage memory
• Keep track of which parts of memory are in use
• Allocate memory to processes when they need it
• Deallocate it when they are done
One program at a time in memory

(a) Mainframes/minicomputers (b) Embedded systems (c) Personal computers

OS reads program in from disk and it is executed


One program at a time

• Can only have one program in memory at a time
• A bug in the user program can trash the OS (arrangements (a) and (c))
• The second arrangement is used on some embedded systems
• The third was used by MS-DOS (early PCs); the part in ROM is called the
BIOS
Really want to run more than one program

• Could swap a new program into memory from
disk and send the old one out to disk
• Not really concurrent
IBM static relocation idea

• IBM 360: divide memory into 2 KB blocks and associate a 4-bit protection key with each
block; keep the keys in registers
• Put the program's key into its PSW
• Hardware prevents a program from accessing a block with another protection key
• 🔹 Basic Concept:
• The main memory is divided into 2 KB blocks.
• Each block is associated with a 4-bit protection key.
• The protection key is stored in a hardware register associated with each memory block.
• 🔹 Program Protection Workflow:
• When a program is running, the Program Status Word (PSW) includes the protection key
assigned to that program.
• When the program tries to access memory, the hardware compares the PSW key with the key
of the target memory block.
• If the keys match, access is allowed.
• If the keys don’t match, the hardware traps the access (i.e., access is denied and an exception
is raised).
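The key comparison above can be sketched in Python. This is a toy model, not the real hardware interface: the function name, the per-block key list, and the block size constant are all illustrative.

```python
# Sketch of the IBM 360 protection-key check (illustrative names, not real hardware).

BLOCK_SIZE = 2048  # memory is divided into 2 KB blocks

def access_allowed(psw_key: int, block_keys: list[int], address: int) -> bool:
    """Compare the 4-bit key in the PSW with the key of the addressed block."""
    block = address // BLOCK_SIZE          # which 2 KB block is being accessed
    return block_keys[block] == psw_key    # match -> access allowed; mismatch -> trap

# Blocks 0-1 belong to the program with key 3, block 2 to the program with key 7.
keys = [3, 3, 7]
print(access_allowed(3, keys, 100))    # True: address 100 is in block 0 (key 3)
print(access_allowed(3, keys, 5000))   # False: address 5000 is in block 2 (key 7)
```

On the real machine the mismatch case raises a protection exception rather than returning False; the boolean return here just makes the comparison visible.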
Problem with relocation

JMP 28 in program (b) trashes the ADD instruction at location 28,
and the program crashes
Static relocation

• The problem is that both programs reference absolute physical
memory addresses
• Static relocation: load the first instruction of the program at address x,
and add x to every subsequent address during loading
• This is too slow, and
• Not all addresses can be modified
• MOV REGISTER1, 28 can't be modified: the loader cannot tell
whether 28 is an address or a constant
Address Space

• Two problems have to be solved to allow multiple applications


to be in memory at the same time without interfering with each
other: protection & relocation
• A better solution is to invent a new abstraction for memory: the
address space
• Create abstract memory space for program to exist in
• Each program has its own set of addresses
• The addresses are different for each program
• Call it the address space of the program
Address Space

 Logical address – generated by the CPU; also referred to as


virtual address
 Physical address – address seen by the memory unit
 Logical and physical addresses are the same in compile-time
and load-time address-binding schemes;
 logical (virtual) and physical addresses differ in the
execution-time address-binding scheme
Base and Limit Registers
• A pair of base and limit registers define the logical address space
• Base contains beginning address of program
• Limit contains length of program
• Used in the CDC 6600 and the Intel 8088
Base and Limit Registers

Let’s assume:
Base Register = 1000
Limit Register = 300
This means:
The program can access physical addresses 1000 to 1299 (i.e., 300 bytes).
Now suppose the program issues an instruction:
MOV R1, [200]
The CPU doesn’t access address 200 directly.
Instead, it adds the base: Physical Address = 200 + 1000 = 1200
Then, it checks:
Is 200 < Limit? (i.e., 200 < 300) → ✅ Yes → Access Allowed
If a program tries to access address 350: 350 + 1000 = 1350, but 350 > Limit
(300) → ❌ Access Denied → Trap/Error
Base and Limit Registers

Add 16384 to JMP 28.


Hardware adds 16384 to 28 resulting in JMP 16412
Base and Limit Registers

• In MMU scheme, the value in the relocation register is added to every


address generated by a user process at the time it is sent to memory
• The user program deals with logical addresses; it never sees the real
physical addresses
• When the program references memory, the MMU adds the base address to the
address generated by the process and checks whether the address is larger
than the limit; if so, it generates a fault
Base and Limit Registers
Condition → Outcome

address < base → ❌ Trap to OS (underflow)

address ≥ base + limit → ❌ Trap to OS (overflow)

base ≤ address < base + limit → ✅ Access granted

•The base register gives the starting point of a program's memory.


•The limit register defines how much memory the program can access.
•These checks are done in hardware via the Memory Management Unit (MMU).
•If a program violates the range, the OS is notified via a trap.
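The relocation-and-check performed by the MMU can be sketched in a few lines of Python, using the numbers from the example above (base = 1000, limit = 300). The function and exception names are illustrative.

```python
# Minimal sketch of the MMU base/limit mechanism (illustrative names).

class ProtectionFault(Exception):
    """Stands in for the hardware trap to the OS."""
    pass

def translate(logical: int, base: int, limit: int) -> int:
    """Relocate a logical address and enforce the limit check."""
    if not 0 <= logical < limit:        # hardware check before memory is touched
        raise ProtectionFault(f"address {logical} outside [0, {limit})")
    return logical + base               # relocation: physical = logical + base

print(translate(200, base=1000, limit=300))   # 1200: access allowed
try:
    translate(350, base=1000, limit=300)
except ProtectionFault as e:
    print("trap:", e)                         # 350 exceeds the limit of 300
```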
Memory Management Basic Concepts

Address Binding:
•In a program, the user refers to addresses by symbolic names such as ptr, addr, and so
on.
•When the program is compiled, the compiler binds symbolic addresses to
relocatable addresses, and then the linkage editor or loader binds relocatable
addresses to absolute addresses.
•Each binding step is a mapping from one address space to another. The binding
of a user program (instructions and data) to memory addresses can be done at
any of three times: compile time, load time, or execution time.
Memory Management Basic Concepts

Compile-time binding: If the starting location of the user program in memory is known at
compile time, the compiler generates absolute code. Absolute code is executable
binary code that must always be loaded at a specific location in memory. If the location
of the user program changes, the program must be recompiled.

Load-time binding: If the location of the user program in memory is not known at
compile time, the compiler generates relocatable code. If the location of the user
program changes, the program need not be recompiled; the user code only needs to be
reloaded to incorporate the change.

Run-time binding: Programs (or processes) may need to be relocated during execution
from one memory segment to another. Run-time (execution-time) binding is the most
popular and flexible scheme, provided we have the required H/W support available
in the system.
How to run more programs than fit in main
memory at once

• Can’t keep all processes in main memory


• Too many (hundreds)
• Too big (e.g. 200 MB program)
• Two approaches
• Swapping: bring a program in and run it for a while
• Virtual memory: allow a program to run even if only
part of it is in main memory
Swapping, a picture

Can compact holes by copying programs into holes


This takes too much time
Swapping
• A process can be swapped temporarily out of memory to a backing
store, and then brought back into memory for continued execution
• Backing store – fast disk large enough to accommodate copies of
all memory images for all users; must provide direct access to these
memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out so
higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
• Modified versions of swapping are found on many systems (i.e.,
UNIX, Linux, and Windows)
• System maintains a ready queue of ready-to-run processes which
have memory images on disk
Programs grow as they execute

• Stack (return addresses and local variables)


• Data segment (heap for variables which are dynamically
allocated and released)
• Good idea to allocate extra memory for both
• When program goes back to disk, don’t bring holes along
with it!!!
2 ways to allocate space for growth

(a) Just add extra space


(b) Stack grows downwards, data grows upwards
Example

A computer provides each process with 65,536 bytes of address


space divided into pages of 4096 bytes each. A particular program
has a text size of 32,768 bytes, a data size of 16,386 bytes, and a
stack size of 15,870 bytes. Will this program fit in the machine’s
address space? Suppose that instead of 4096 bytes, the page size
were 512 bytes, would it then fit?
Each page must contain either text, data, or stack, not a mixture of
two or three of them.
Page size = 4096 bytes
We compute how many pages are needed for each segment:
📄 Pages required:
Text: 32,768 ÷ 4096 = 8 pages
Data: 16,386 ÷ 4096 ≈ 4.0005 → 5 pages (must round up)
Stack: 15,870 ÷ 4096 ≈ 3.875 → 4 pages (must round up)

Total pages used = 8 (text) + 5 (data) + 4 (stack) = 17 pages

Total address space: 65,536 ÷ 4096 = 16 pages available

❌ Result: Does NOT fit – the program needs 17 pages, but only
16 pages are available.

With 512-byte pages: text needs 64 pages, data 16,386 ÷ 512 ≈ 32.004 → 33
pages, and stack 15,870 ÷ 512 ≈ 30.996 → 31 pages, for a total of
64 + 33 + 31 = 128 pages, exactly the 65,536 ÷ 512 = 128 pages available.
✅ Result: With 512-byte pages, the program just fits.
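The round-up step in this calculation is just ceiling division, which can be checked for both page sizes with a short sketch (the helper name `pages` is illustrative):

```python
# Reproducing the page-count calculation with ceiling division.
from math import ceil

def pages(segment_bytes: int, page_size: int) -> int:
    """Pages needed for a segment: partial pages round up to a whole page."""
    return ceil(segment_bytes / page_size)

text, data, stack = 32_768, 16_386, 15_870
for page_size in (4096, 512):
    needed = pages(text, page_size) + pages(data, page_size) + pages(stack, page_size)
    available = 65_536 // page_size
    print(f"{page_size}-byte pages: need {needed}, have {available}, "
          f"fits: {needed <= available}")
# 4096-byte pages: need 17, have 16, fits: False
# 512-byte pages: need 128, have 128, fits: True
```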
Memory Allocation
• Multiple-partition allocation
– Hole – block of available memory; holes of various size are
scattered throughout memory
– When a process arrives, it is allocated memory from a hole
large enough to accommodate it
– Operating system maintains information about:
a) allocated partitions b) free partitions (hole)

(Figure: four memory snapshots with the OS at the bottom; process 8
terminates leaving a hole, then processes 9 and 10 are allocated into it,
while processes 2 and 5 remain resident throughout)


Memory Allocation
Memory Allocation
Multiprogramming with Fixed Partition (equal
partitions)
Multiprogramming with Fixed Partition
(Unequal partitions)
Multiprogramming with Fixed Partition
(Unequal partitions)
Managing Free Memory

• Two techniques to keep track of free memory


• Bitmaps
• Linked lists
Bitmaps-the picture

(a) A picture of memory
(b) Each bit in the bitmap corresponds to a unit of storage (e.g. bytes) in memory
(c) The same information as a linked list (P: process, H: hole)
Bitmaps

• The good: a compact way to keep track of memory
• The bad: need to search the bitmap for k consecutive zeros
to bring in a process k units long
• Allocation units can be bytes, words, or larger
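The "bad" part above, searching for k consecutive zeros, can be sketched as a linear scan over the bitmap. This is a minimal illustration (the bitmap is modeled as a list of 0/1 ints, and `find_run` is an illustrative name), not an optimized allocator.

```python
# Sketch of bitmap allocation: find k consecutive free units (0 bits).

def find_run(bitmap: list[int], k: int) -> int:
    """Return the start index of the first run of k zeros, or -1 if none exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i      # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start   # found k consecutive free units
        else:
            run_len = 0            # an allocated unit breaks the run
    return -1

# 1 = allocated unit, 0 = free unit
bitmap = [1, 1, 0, 0, 1, 0, 0, 0, 1]
print(find_run(bitmap, 3))   # 5: the first run of three free units starts at index 5
print(find_run(bitmap, 4))   # -1: no run of four free units exists
```

The scan is O(n) in the bitmap length, which is exactly why the slide calls searching the downside of bitmaps.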
Linked Lists-the picture

Four neighbor combinations for the terminating process, X.


Linked Lists

• Might want to use doubly linked lists to merge holes


more easily
• Algorithms to fill in the holes in memory
• First fit
• Next fit
• Best fit
• Worst fit
• Quick fit
• First-fit and best-fit better than worst-fit in terms of
speed and storage utilization
The fits

• First fit: fast
• Next fit: starts its search wherever it left off
• Slightly worse in practice
• Best fit: takes the smallest hole that fits
• Slower, and results in a bunch of tiny useless holes (i.e. a worse
algorithm)
• Worst fit: takes the largest hole
• Not good either (simulation results)
• Quick fit: keeps separate lists of common request sizes
• Quick, but it can't easily find neighbors to merge with
The fits

• Conclusion: the fits couldn't out-smart the unknowable
distribution of hole sizes
• This is a core limitation of static memory allocation
strategies: they cannot adapt to or predict
fragmentation patterns, and hence none is
universally optimal.
Example

•Given six memory partitions of 200 KB, 400 KB, 200 KB, 100 KB,
550 KB and 25 KB, in that order.
•How would the first-fit and best-fit algorithms place processes of
size 115 KB, 400 KB, 258 KB, 100 KB and 175 KB, in that order?
•Rank the algorithms by how efficiently they use memory.
Example – First Fit
Partitions in order: 200 KB, 400 KB, 200 KB, 100 KB, 550 KB, 25 KB
•115 KB → first 200 KB partition (85 KB left)
•400 KB → 400 KB partition (0 KB left)
•258 KB → 550 KB partition (292 KB left)
•100 KB → second 200 KB partition (100 KB left)
•175 KB → remaining 292 KB of the 550 KB partition (117 KB left)
Remaining holes: 85 KB, 0 KB, 100 KB, 100 KB, 117 KB, 25 KB
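The first-fit walkthrough can be reproduced with a short sketch. The partition sizes and process sizes are those from the example; the function name `first_fit` is illustrative.

```python
# Sketch reproducing the first-fit example: partitions 200, 400, 200, 100,
# 550, 25 KB (in order), processes 115, 400, 258, 100, 175 KB (in order).

def first_fit(holes: list[int], request: int) -> int:
    """Place the request in the first hole large enough; return its index."""
    for i, size in enumerate(holes):
        if size >= request:
            holes[i] -= request        # shrink the hole by the allocated amount
            return i
    return -1                          # no hole fits the request

holes = [200, 400, 200, 100, 550, 25]
for proc in (115, 400, 258, 100, 175):
    idx = first_fit(holes, proc)
    print(f"{proc} KB -> partition {idx}, holes now {holes}")
# Final leftovers: [85, 0, 100, 100, 117, 25]
```

Swapping the inner loop's choice (smallest adequate hole instead of the first one) turns this into best fit, which is one way to attack the exercise's comparison.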


Example

•A student in an Operating Systems course proposes a new operating
system to his professor in which the processor does not support virtual
memory. It supports a swapping system in which memory consists of
the following hole sizes, in memory order:
•10 KB, 4 KB, 20 KB, 18 KB, 7 KB, 9 KB, 12 KB, and 15 KB.
•Which hole is taken for successive segment requests of
– a) 12 KB b) 10 KB c) 9 KB for first fit, best fit and worst fit?
Example

•Consider six memory partitions of size 200 KB, 400 KB, 600 KB,
500 KB, 300 KB, and 250 KB, where KB refers to kilobyte.
•These partitions need to be allotted to four processes of sizes 357
KB, 210 KB, 468 KB and 491 KB in that order.
•Apply best fit, first fit and worst fit and next fit algorithms.
•Which memory partitions are NOT allotted to any process if first fit
algorithm is used?
THANK YOU
