Chapter 5
Memory Management
V.1. Introduction
Memory is the second most important resource in a computer after the processor. Although a computer contains several memory elements, such as registers, cache, Read Only Memory (ROM), Random Access Memory (RAM), and secondary memory (hard disk), RAM is known as the main memory.
Registers and cache memory are closer to the processor, which can access them directly, but they are very expensive and thus of very small capacity. They cannot accommodate either the operating system or the other programs executed by the processor. Main memory is the furthest memory unit from the processor that it can still access directly, and it can accommodate the OS as well as user applications during their execution.
The OS kernel remains loaded in the low memory region of the memory as long as the system
runs. Once loaded, the OS divides the memory into two parts: kernel space for storing the OS; and
the user space for storing the application processes. In a single-programming system, the OS
remains in the kernel space and only one application program can reside in the user space.
But in today’s multiprogramming environment, the OS needs to further divide the user space so that several user programs can coexist in memory. The number of programs that can be accommodated in memory determines the degree of multiprogramming.
As opposed to the CPU, which only one process (or thread) owns at any given time, numerous processes and/or threads may coexist in main memory at any point in time. All these processes must share the main memory. If the mechanisms for managing this sharing are inefficient, the computer will perform poorly, no matter how powerful its processor. Conversely, the faster the CPU, the faster the system reacts, so it is up to the operating system to organize memory in the best possible way to get maximum performance out of it.
To maximize CPU performance in a multi-programmed system, a number of questions need to be answered:
➢ How is data shared between processes?
➢ How is the memory space of each process protected?
➢ How, and by whom, is memory space allocated to processes?
➢ How and when is the space released by a process reclaimed?
To meet these requirements, the OS relies on a dedicated hardware component, the Memory Management Unit (MMU), together with its own memory management module.
In this chapter, we'll see that there are several memory management schemes, with their advantages
and disadvantages.
Figure 5.3 : Dynamic Relocation Using a Relocation Register.
The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory (Figure 5.3). For example, if the base is at 15000, then an attempt by the user to address location 0 is dynamically relocated to location 15000; an access to location 346 is mapped to location 15346.
It should be noted that the user program never sees the real physical addresses; it only manipulates logical addresses. We thus have two different types of addresses: logical addresses in the range (0 to Max) and physical addresses in the range (R to R + Max), where R is the value of the relocation register. The user generates only logical addresses and thinks that the process runs in locations 0 to Max. Since the user program supplies only logical addresses, these logical addresses must be mapped to physical addresses before they are used.
Figure 5.4 : Memory Organization in a Single-Programmed System.
When the system is organized in this way, only one process at a time can be running. As soon as
the user types a command, the operating system copies the requested program from disk to
memory and executes it. When the process finishes, the operating system displays a prompt
character and waits for a new command. When it receives the command, it loads a new program
into memory, overwriting the first one.
The allocation algorithm can be summarized as a simple loop: begin, wait for a command, load the requested program, execute it, and start again.
V.5.1. Issues
In mono-programming, if the active process, the only one in Main Memory, performs a blocking I/O operation, the CPU stays idle for the duration of the I/O, even if the I/O system is equipped with DMA (Direct Memory Access)!
If we accept single-process execution, we cannot allow the CPU to be idle for so long. We need to find a way to fill this idle time.
V.5.2. Overlays
The size of a program can exceed the size of the Main Memory. To overcome this limitation in a single-programmed system, the programmer must divide the program at design time into a set of modules and load them dynamically into the Main Memory at runtime, so as to keep only the modules actually needed. Each newly loaded module takes the place of a module that is unloaded.
The idea behind overlays is to only load the necessary parts of a program into memory at a given
time, freeing up memory for other tasks. The unused portions of the program are kept on disk or
other storage, and are loaded into memory as needed. This allows programs to be larger than the
available memory, but still run smoothly.
The concept of overlays is that a running process does not use the complete program at the same time; it uses only some part of it. With overlays, whatever part is required is loaded, and once that part is done, it is unloaded to make room for the next part that is required.
V.6. Memory Management in a Multi-Programmed System
Memory management schemes in a multi-programmed system fall into two families:
➢ Contiguous allocation: fixed partitioning, variable partitioning.
➢ Non-contiguous allocation: paging system, segmentation system, paged segmentation system.
V.6.1. Free Space Management
When memory is assigned dynamically, the operating system must manage it: it keeps track of the free blocks in order to allocate space to processes when they are loaded, and it must reuse the space released when processes terminate. The system maintains a free-space list which keeps track of the blocks that are not allocated to any process. In general terms, there are two ways to keep track of memory usage: bitmaps and free lists.
V.6.1.1. Bitmaps
The main memory is divided into a set of allocation units ranging from a few bytes to a few KB. To manage these units, each one is assigned a bit indicating its availability: 0 if the unit is free, 1 if it is occupied.
➢ The main problem with it is that when it has been decided to bring a k-unit process into memory, the memory manager must search the bitmap to find a run of k consecutive 0 bits in the map.
➢ This technique is rarely used, as this search method is slow.
V.6.1.2. Linked Lists
Another way of keeping track of memory is to maintain a linked list of allocated and free memory segments, where a segment is either a process or a hole between two processes. A segment is a set of consecutive allocation units. An element of the list is made up of four fields indicating: whether the entry is a hole (H) or a process (P), the address at which the segment starts, its length, and a pointer to the next element.
Figure 5.8 : Main Memory State Represented by Bitmap and Linked Linear List.
V.6.2. Contiguous Allocation
This strategy represents a simple technique for implementing multiprogramming. The main
memory is divided into separate regions or memory partitions; each partition has its own address
space. Memory partitioning can be static (fixed) or dynamic (variable). Each process is loaded
completely into memory. The executable contains relative addresses, and the actual addresses are
determined at the time of loading.
This solution involves dividing the memory into fixed partitions when the system is initialized. The partitions can be of either equal or unequal size (see figure 5.9), but their number and their sizes are fixed in advance (by the system or, possibly, by the manufacturer). One of the problems in designing such a system is determining the right number and sizes of partitions.
Figure 5.9 : Multiple Fixed Partitioning — Partitions of Equal Size (512 K each) and of Unequal Sizes (128 K, 128 K, 256 K, 348 K, 512 K), Below the OS (512 K).
Figure 5.10 : Contiguous Allocation Scheme for Multiple Fixed Partitioning Using PDT.
B) Ready Queues
Programs that have not been able to fit into memory are placed in a queue (a single queue or a
queue per partition).
➢ In the case of a queue per partition : each new process is placed in the queue of the
smallest partition that can contain it (see figure 5.11a). This can lead to a process being kept
waiting in one queue, while another partition that can contain it is free.
➢ The alternative to this approach is to use a single queue: as soon as a partition
becomes free, the system places the first process in the queue that can fit (see figure 5.11b).
In this approach, there are different strategies for allocating a free partition to a waiting
process:
✓ As soon as a partition becomes available, it is allocated to the first process in the queue
that can fit on it. The disadvantage is that a large partition can be assigned to a small
process, wasting a lot of space.
✓ As soon as a partition becomes available, it is allocated to the largest process in the
queue that can fit in it. The disadvantage is that small processes are penalized.
Hence, fixed-partition contiguous memory allocation is not seen in modern systems. IBM OS/MFT (Multiprogramming with a Fixed number of Tasks), an early mainframe OS, implemented fixed partitioning.
Disadvantages
➢ Internal fragmentation: when a process does not occupy all the space in its partition, the remaining space is unusable.
➢ Queue management problems (overloading, wasted memory, etc.).
➢ The degree of multi-programming is limited by the number of partitions.
Example 1:
Figure 5.11 : Fixed Partitions (128 K, 128 K, 256 K, 348 K, 512 K) Below the OS (512 K), With (a) a Queue per Partition and (b) a Single Queue.
Wasted memory and internal fragmentation in fixed-partition systems led to the development of variable partitions. In this case, memory is allocated dynamically (variable partitions), according to process demand, at load time. Each program is allocated a partition exactly equal to its size.
When a program finishes execution, its partition is recovered by the system and allocated to another program, either fully or partially, depending on demand. We are no longer limited by partitions that are too large or too small, as with fixed partitions. This improved use of main memory requires a more complex allocation and release mechanism; for example, a list of available memory spaces must be maintained.
As this example shows, this method starts out well, but eventually it leads to a situation in which
there are a lot of small holes in memory. As time goes on, memory becomes more and more
fragmented, and memory utilization declines. This phenomenon is referred to as external
fragmentation, indicating that the memory that is external to all partitions becomes increasingly
fragmented. This is in contrast to internal fragmentation, referred to earlier.
At the start, the user space (1280 K) is empty. The following requests came in order: P1 : 128 K, P2 : 512 K, P3 : 256 K, P4 : 128 K, P5 : 400 K. After P1, P2, P3 and P4 are loaded, only 256 K remains free, so P5 must wait. Once P1 and P3 have completed, the free space totals 128 K + 256 K + 256 K = 640 K, yet P5 (400 K) still stays ready: no single free hole is large enough to hold it.
Figure 5.13 : Example of Variable Partitioning With External Fragmentation.
Disadvantages
➢ External fragmentation: a process cannot find a single free partition large enough, even though the sum of the free partitions would be sufficient.
How can the issue of external fragmentation be effectively managed, as it keeps on increasing with
continuous arrival of newer processes? Two popular solutions applied are:
➢ Compaction
➢ Placement algorithms.
A) Compaction
One technique for overcoming external fragmentation is compaction: From time to time, the
operating system shifts the processes so that they are contiguous and so that all of the free memory
is together in one block.
The difficulty with compaction is that it is a time-consuming procedure and wasteful of processor
time. Note that compaction implies the need for a dynamic relocation capability. That is, it must
be possible to move a program from one region to another in main memory without invalidating
the memory references in the program.
In practice, the compaction operation is performed when a program requesting execution cannot find a large enough partition, although its size is smaller than the total free space.
Example 2:
In the example below (see figure 5.14), there are 4 free zones. None of these zones can satisfy any of the three waiting processes, but their sum would satisfy all three. So, it is in our interest to gather the free space. This is achieved by moving the loaded programs to form a single free zone; this operation is sometimes likened to Garbage Collection.
B) Fragmentation
The disadvantage of contiguous memory allocation is fragmentation. There may be enough free
memory to load a process, but no partition is of sufficient size. There are two types of
fragmentation, namely, internal fragmentation and External fragmentation.
➢ Internal fragmentation: Allocated memory may be slightly larger than required memory.
This difference is called internal fragmentation - memory that is internal to a partition but
not used.
➢ External fragmentation: occurs when there is sufficient total memory space to satisfy a
request, but it is not contiguous; memory is fragmented into a large number of small holes
(i.e. free blocks) where a program cannot be loaded into any of these holes.
C) Placement Algorithms
Because memory compaction is time-consuming, another way to minimize fragmentation is to place new processes carefully into the available holes. The OS maintains a list of available holes with their sizes, updated as processes leave memory. Before a new process is allocated space, the list is searched to find the most appropriate hole.
There are different algorithms for allocating holes to the requesting processes. The aim of all these
algorithms is to maximize the memory space occupied, in other words, to reduce the probability
of situations where a process cannot be served, even if there is enough memory. There are three
main allocation strategies:
➢ First fit. Allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or at the location where the previous first-fit search ended. We can stop
searching as soon as we find a free hole that is large enough.
➢ Best fit. Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
➢ Worst fit. Allocate the largest hole. Again, we must search the entire list, unless it is sorted
by size. This strategy produces the largest leftover hole, which may be more useful than the
smaller leftover hole from a best-fit approach.
Simulations have shown that both first fit and best fit are better than worst fit in terms of decreasing
time and storage utilization. Neither first fit nor best fit is clearly better than the other in terms of
storage utilization, but first fit is generally faster.
Initial state: below the OS, the memory holds P2, a free hole of 140 K, P3, a free hole of 100 K, P1, and a free hole of 220 K. P4 requests 90 K. First-fit places P4 in the 140 K hole (50 K left free), best-fit in the 100 K hole (10 K left free), and worst-fit in the 220 K hole (130 K left free).
Figure 5.15 : Example Illustrating Placement Algorithms.
D) Buddy System
Fixed partitioning limits the number of active processes depending on the pre-decided partitions.
Dynamic partitioning is a complex scheme to manage and requires time-consuming compaction.
A compromise between the two is the buddy system.
Memory is dynamically divided into blocks of size 2^k words (L ≤ k ≤ U, with L, k, U ∈ ℕ), where:
➢ 2^L = smallest size block that is allocated
➢ 2^U = largest possible block size that is allocated; generally, 2^U is the size of the entire memory available for allocation
To begin, the entire space available for allocation is treated as a single block of size 2^U.
➢ If a request of size S such that 2^(U−1) < S ≤ 2^U is made, the entire block is allocated to it.
➢ Otherwise, the block is split into two equal buddies of size 2^(U−1).
✓ If 2^(U−2) < S ≤ 2^(U−1), then the request is allocated to one of the two buddies.
✓ Otherwise, one of the buddies is split in half again.
➢ This process continues until the smallest block greater than or equal to S is generated and allocated to the request.
➢ At any time, the buddy system maintains a list of holes (unallocated blocks) of each size 2^i.
➢ A hole may be removed from the (i + 1) list by splitting it in half to create two buddies of size 2^i in the i list.
➢ Whenever a pair of buddies on the i list both become unallocated, they are removed from that list and coalesced into a single block on the (i + 1) list.
➢ Presented with a request for an allocation of size k such that 2^(i−1) < k ≤ 2^i, the following recursive algorithm get_hole is used to find a hole of size 2^i:
void get_hole(int i) {
    if (i == (U + 1)) <failure>;
    if (<i_list empty>) {
        get_hole(i + 1);
        <split hole into buddies>;
        <put buddies on i_list>;
    }
    <take first hole on i_list>;
}
➢ Even though the buddy system minimizes external fragmentation to some extent, internal fragmentation is very much there, since space is always allocated in blocks of 2^i words, whereas the need may be much less.
➢ The major drawback of this method is thus its internal fragmentation: a process requiring 2^n + 1 words (e.g. 257) will get a block of 2^(n+1) (e.g. 512).
➢ Also, two free blocks can only be merged if they are buddies. Otherwise, compaction is necessary.
Example 2:
Figure 5.16 gives an example using a 1-Mbyte initial block. Initially, the memory is empty.
Memory (1 Mbyte) — number of free zones on the right:
Initially : 1024K free → 1
Request (A) 100K : A=128K | 128K | 256K | 512K → 3
Request (B) 240K : A=128K | 128K | B=256K | 512K → 2
Request (C) 64K : A=128K | C=64K | 64K | B=256K | 512K → 2
Request (D) 256K : A=128K | C=64K | 64K | B=256K | D=256K | 256K → 2
Release B : A=128K | C=64K | 64K | 256K | D=256K | 256K → 3
Release A : 128K | C=64K | 64K | 256K | D=256K | 256K → 4
Request (E) 75K : E=128K | C=64K | 64K | 256K | D=256K | 256K → 3
Release C : E=128K | 128K | 256K | D=256K | 256K → 3
Release E : 512K | D=256K | 256K → 2
Release D : 1024K free → 1
Figure 5.16 : Example of Buddy System
V.6.3. Non-Contiguous Allocation
Previous allocation techniques, based on the contiguous allocation approach, have several limitations:
1) At allocation time, the manager must find sufficient contiguous free space to accommodate the program. Even if there are free fragments whose sum is sufficient, contiguous allocation refuses to accommodate it. As the size of a program increases, the chances of finding such a space decrease.
2) Frequent external fragmentation due to space allocation and deallocation. Compacting used
to be the solution, but it's no longer the best one!
3) Programs whose size exceeds main memory are impossible to run.
To solve these problems, the memory manager needs to change its view of the unbreakable
(undivided) nature of a program. Fragments of free space scattered throughout the main
memory can be exploited if we consider the program to be an entity that can be split into several
unbreakable chunks, each of which can occupy a free zone that is sufficient for it independently of
the others.
The main aim of non-contiguous allocation is to be able to load a process while making the most
of all memory holes.
➢ If the program chunks are of fixed and equal size, this is known as Paging.
➢ If the program chunks are of variable size, this is referred to as Segmentation.
➢ These two techniques can be combined, in which case we speak of Paged Segmentation, the combination of paging and segmentation at the same time.
V.6.3.1. Paging
1) Principle of operation
In the paging mechanism, the program address space is divided into fixed-sized chunks (blocks)
called PAGES. This space is called the logical program space. In turn, the physical memory (Main
Memory) space is itself divided into fixed-sized chunks (blocks) called FRAMES or PAGE
FRAMES (see figure 5.17).
The pages and frames are always the same size. This size is defined by the hardware and the target OS. Generally, it is a power of 2, between 512 bytes and 8192 bytes, depending on the computer architecture. We show in this section that the wasted memory for each process is due to internal fragmentation, consisting of only a fraction of the last page of the process. There is no external fragmentation.
Figure 5.17 : Paging — the Logical Space of a Process Divided Into Pages, and the Physical Space (Main Memory) Divided Into Frames.
➢ When a program is loaded, each page is loaded into any free slot in main memory. The set
of slots housing the individual pages is not necessarily contiguous.
➢ In a pure paging system, a program can only be loaded if there is a number of free slots
equal to the number of pages in the program.
➢ If the size L of a program is not a multiple of the page size, then the last page of the program's logical space will not be completely filled. This empty space in the last page causes internal fragmentation.
➢ There is no external fragmentation in paging.
For each instruction referencing memory (e.g. Mov Ax, adr), the MMU splits the logical address into <P, d>, uses P to index the page table, and replaces it with the frame number F to form the physical address <F, d>.
If the size of the logical address space is 2^m, and a page size is 2^n bytes, then the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows, where P is an index into the page table and d is the displacement within the page:
| P (page number: bits m−1 … n) | d (page offset: bits n−1 … 0) |
Figure 5.20 : The Paged Address
A logical address is referred to in terms of page number and offset within the page, as a tuple <page#, offset>. So, if T is the size of a page (T = 2^n) and U is a logical address, then the paged address <P, d> is deduced from the following formulas:
➢ P = U Div T (where Div is the integer division)
➢ d = U Mod T (where Mod is the remainder of the integer division)
4) Obtaining the physical address
The physical address corresponding to a logical address <P, d> is obtained by replacing the page number P with the number F (or, in practice, the address) of the frame containing this page. The offset (displacement) d is the same in the page and in the frame, since both have the same size. The frame number (or address) is obtained by indexing the page table of the active process with the value P; the indexed entry returns F, or the equivalent address.
Note : At runtime, and for a given processor, only one page table is active: that corresponding to
the process currently being executed. Each context switching operation involves changing the
active page table at processor level.
Example 3:
Consider an address AB = 1010 1011 on 8 bits (m = 8 bits), with pages of size 2^6 words (n = 6). So, the number of pages is 2^(m−n) = 2^(8−6) = 4.
Then, the logical address (1010 1011 = 171 in decimal) becomes a paged address <P, d> as follows:
➢ P = 171 Div 64 = 2 ➔ the page number is 2
➢ d = 171 Mod 64 = 43 ➔ 43 is the offset in page number 2
AB @ = 1010 1011 ➔ AB @ = <2, 43>
V.6.3.2. Segmentation
1) Principle of operation
The way in which a program is divided up in a paging system does not correspond to the user's vision and way of thinking. The user sees a program as a collection of logical units: code, data, stack, main program, procedures, DLLs, etc., generally referred to as Segments.
In the MMU, each entry of the segment table of a process holds a (limit, base) pair. For a logical address <S, d>, S indexes the segment table and the offset d is compared with the segment's limit:
➢ When an offset is legal, in which case the physical address is calculated by adding the
value of 𝒅 to the segment base.
➢ Otherwise, an addressing error is generated.
Example 4:
On a system using simple segmentation, calculate the physical address of each logical address, using
the segment table below. Assume that the addresses are decimal instead of binary.
V.6.3.3. Paged Segmentation
In this scheme, each segment is itself paged: the STBR locates the segment table, each of whose entries points to a page table for that segment; this page table supplies the frame F which, combined with the displacement d, forms the physical address.
Figure 5.23 : Paged Segmentation Hardware.
This technique eliminates external fragmentation, but it introduces internal fragmentation.
V.7. Virtual Memory
Virtual memory is a technique that allows the execution of processes that may not be completely in memory. It is based on virtualizing the Main Memory (MM), like a virtual CPU, virtual printer, etc. The principle is to let processes believe they have a theoretically unlimited amount of memory. Part of the program code is loaded into the MM, while the other part is stored in a memory extension of the MM. Virtual memory thus provides an extremely large address space, whereas physical memory is limited. This is made possible by using auxiliary memory as a workspace from which the OS loads and unloads individual pages.
To achieve this, programs in Secondary Memory (SM) are divided into a number of chunks, each of which is smaller than the physical size of the MM. At runtime, a few chunks are loaded; as soon as a chunk that is not present in the MM is needed, it is loaded, if necessary in place of another.
Figure 5.24 : Virtual Memory
V.7.2. Demand Paging
1) Basic Concepts
The principle of virtual memory is commonly implemented with on-demand paging, i.e. process pages are only loaded into Main Memory when the processor requests access to them. Demand paging is similar to a paging system with swapping.
How can the OS (Using Hardware Support) distinguish between pages in memory and
those on disk and detect their absence?
Using this technique, the OS has the means to distinguish between pages that are in memory, and
those that are on disk. It uses an additional bit 𝑽 in the page table, called the validation bit
(Valid/invalid), to describe whether the page is loaded into memory or not.
Following a reference to any address 'adr', the MMU converts it into a pair <P, d>.
➢ The value P is used to index the page table; before converting, we must first consult the V bit associated with the entry.
✓ If this bit is set to 1, the page is loaded into Main Memory, so the conversion can continue.
✓ Otherwise, the MMU generates a trap, indicating that the process wants to access a page that is not present in Main Memory. This is called a page fault.
The hardware support for demand paging is illustrated in Figure 5.26.
Figure 5.26 : Hardware Support for Demand Paging — the MMU indexes the page table (located by the PTBR) with P and first checks the V bit of the entry: if V = 0, it raises a trap (page fault: reference to a page absent from the MM); otherwise the frame address is combined with d to form the physical address.
➢ As long as we have no page faults, the effective access time is equal to the memory access time.
➢ If, however, a page fault occurs, we must first read the relevant page from disk and then access the desired word.
Let p be the probability of a page fault (0 ≤ p ≤ 1). We would expect p to be close to zero; that is, we would expect to have only a few page faults. The effective access time is then:
Effective access time = (1 − p) × ma + p × page-fault time
where ma is the memory access time.
Page replacement takes the following approach. If no frame is free, we find one that is not currently being used and free it. We can free a frame by writing its contents to swap space and changing the page table (and all other tables) to indicate that the page is no longer in memory (Figure 5.28). We can now use the freed frame to hold the page for which the process faulted. We modify the page-fault service routine to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame: if there is one, use it; otherwise, use a page replacement algorithm to select a victim frame and write the victim out.
3. Read the desired page into the freed frame and update the page and frame tables.
4. Restart the user process.
There are several different page replacement algorithms. How do we select a particular replacement algorithm? In general, we want the one with the lowest page-fault rate. We evaluate an algorithm by running it on a particular string of memory references and calculating the number of page faults. The string of memory references is called a reference string; it characterizes the memory behavior of a program.
In order to determine the number of page faults for a particular reference string and replacement algorithm, we also need to know the number of page frames available. In general, as the number of page frames increases, the number of page faults should decrease.
Example 6:
For this example, if we trace a particular process, we might record the following address sequence:
𝟏𝟎𝟎, 𝟐𝟏𝟎, 𝟑𝟓𝟓, 𝟏𝟐𝟎, 𝟒𝟐𝟎, 𝟏𝟏𝟎, 𝟐𝟎𝟎, 𝟓𝟓𝟎, 𝟏𝟑𝟗, 𝟐𝟎𝟏, 𝟑𝟗𝟓, 𝟒𝟎𝟒, 𝟓𝟎𝟓.
At 100 bytes per page, this sequence is reduced to the following reference string:
𝟏, 𝟐, 𝟑, 𝟏, 𝟒, 𝟏, 𝟐, 𝟓, 𝟏, 𝟐, 𝟑, 𝟒, 𝟓.
Principle: the victim page is the one that has been referenced least recently, i.e. the Least Recently Used (LRU) page.
Reason: the principles of spatial and temporal locality mean that a recently used page has a high chance of being referenced in the very near future. So, the victim page should be an old page.
Principle: this is the FIFO algorithm with the following modification: a victim page is chosen as in FIFO, but if it has been referenced in the meantime, it gets a second chance and another victim is sought.
➢ 10-Page Faults for a memory with 4 Frames!
V.7.5. Thrashing
After completing initialization, most programs operate on a small number of code and data pages
compared to the total memory the program requires. The pages most frequently accessed are called the working set.