SPOS ASSIGNMENTS
NAME – Vash Dalal
CLASS - TE-Comp-B
ROLL NO. – 7357
ASSIGNMENT 1
TITLE
Pass 1 and Pass 2 of Assembler
OBJECTIVE
To study and understand the working, use, and implementation of pass 1
and pass 2 of a two-pass assembler, and to understand the process of
code conversion in system programming.
PROBLEM STATEMENT
Design and implement pass 1 and pass 2 of a two-pass assembler for a
pseudo-machine, and generate the intermediate code and symbol table for
pseudo-machine instructions.
OUTCOMES
After the completion of this assignment, the understanding of an assembler,
its working, and its implementation will increase, and one will be able to
implement pass 1 and pass 2 of a two-pass assembler for pseudo-machine
code.
SOFTWARE AND HARDWARE REQUIREMENTS
Operating system - Windows/Linux/macOS
Java JDK installed
IDE - NetBeans/Eclipse
THEORY
PASS 1
Pass 1 of the two-pass assembler deals with labels and mnemonics. It
separates the label, mnemonic opcode, and operand fields of each statement,
determines the storage requirement of every assembly language statement,
and updates the location counter accordingly. The work of pass 1 is to
define symbols and literals and record them in the symbol table and literal
table respectively. On completion, pass 1 generates the intermediate code
and the symbol table as output, which later become the input to pass 2 of
the two-pass assembler for further assembly processing.
(Flow diagram: working of pass 1 of the two-pass assembler.)
Pass 1 converts the lexical statements into intermediate code using
opcodes: each category of statement has its own opcode, and the source
statements are translated into intermediate code accordingly. The following
opcode table is used in the implementation of the assembler:

Mnemonic   Class   Opcode
STOP       IS      00
ADD        IS      01
SUB        IS      02
MULT       IS      03
MOVER      IS      04
MOVEM      IS      05
COMP       IS      06
BC         IS      07
DIV        IS      08
READ       IS      09
PRINT      IS      10
START      AD      01
END        AD      02
ORIGIN     AD      03
EQU        AD      04
LTORG      AD      05
DS         DL      01
DC         DL      02
AREG       RG      01
BREG       RG      02
CREG       RG      03
EQ         CC      01
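As a rough illustration, pass 1 can be sketched as a single scan that maintains a location counter and records labels in a symbol table. The source format, class name, and directive handling below are simplified assumptions for illustration, not the full assembler.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of pass 1: scan statements, advance the location
// counter, and record each label's address in a symbol table.
public class PassOne {
    public static Map<String, Integer> buildSymbolTable(String[] lines) {
        Map<String, Integer> symtab = new LinkedHashMap<>();
        int lc = 0;                              // location counter
        for (String line : lines) {
            String[] tok = line.trim().split("\\s+");
            int i = 0;
            if (tok[i].endsWith(":")) {          // a label definition
                symtab.put(tok[i].substring(0, tok[i].length() - 1), lc);
                i++;
            }
            String op = tok[i];
            if (op.equals("START")) { lc = Integer.parseInt(tok[i + 1]); continue; }
            if (op.equals("END")) break;
            if (op.equals("DS")) lc += Integer.parseInt(tok[i + 1]); // reserve words
            else lc += 1;                        // one word per statement
        }
        return symtab;
    }

    public static void main(String[] args) {
        String[] src = {
            "START 100",
            "READ A",
            "LOOP: MOVER AREG A",
            "SUB AREG B",
            "A: DS 1",
            "B: DS 1",
            "END"
        };
        System.out.println(buildSymbolTable(src)); // {LOOP=101, A=103, B=104}
    }
}
```

The scan never resolves operand symbols; that is deliberately left to pass 2, which is why forward references such as A and B above are harmless here.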
PASS 2
The second (final) pass of the two-pass assembler takes the output of
the first pass as its input and generates the object code for execution. It
converts the symbolic opcodes in the intermediate code into executable
machine code, i.e., object code.
The second pass also generates data for literals, looks symbols up in
the symbol table, and uses their addresses to generate the final machine code.
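A minimal sketch of that substitution step, assuming the simplified intermediate format implied by the opcode table above (numeric fields plus a symbolic operand); the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of pass 2: the operand field of each intermediate
// statement is looked up in the pass-1 symbol table and replaced by
// its address to form the final machine code.
public class PassTwo {
    public static String generate(String intermediate, Map<String, Integer> symtab) {
        String[] tok = intermediate.trim().split("\\s+");
        String operand = tok[tok.length - 1];
        if (symtab.containsKey(operand)) {            // symbolic operand -> address
            tok[tok.length - 1] = String.valueOf(symtab.get(operand));
        }
        return String.join(" ", tok);
    }

    public static void main(String[] args) {
        Map<String, Integer> symtab = new HashMap<>();
        symtab.put("A", 103);
        // "04 01 A" stands for MOVER AREG, A per the opcode table
        System.out.println(generate("04 01 A", symtab)); // prints "04 01 103"
    }
}
```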
CONCLUSION
This assignment helped in deepening the understanding of pass 1 and
pass 2 of a two-pass assembler and in understanding the concept and
implementation of a two-pass assembler. We successfully implemented a
two-pass assembler.
ASSIGNMENT 2 PART 1
TITLE
Design suitable data structures and implement pass-I of a two-pass
macro-processor using OOP features in Java
OBJECTIVES
- To understand Data structure of Pass-1 macro processor
- To understand Pass-1 macro processor concept
- To understand macro facility.
PROBLEM STATEMENT
Design suitable data structures and implement pass-I of a two-pass
macro-processor using OOP features in Java.
OUTCOMES
After completion of this assignment students will be able to:
- Implement a Pass-1 macro processor
- Implement the MNT and MDT tables
- Understand the concept of a Pass-1 macro processor
SOFTWARE & HARDWARE REQUIREMENTS
OS required - Windows, Linux
IDE - Visual Studio Code, Eclipse, etc.
Programming language - Java
Processor - Intel Core i3 or above
RAM - 4 GB or above
THEORY
Macro:
A macro allows a sequence of source language code to be defined once and
then referred to by name each time it is needed. Each time this name
occurs in a program, the sequence of code is substituted at that point.
A macro has the following parts:
(1) Name of the macro
(2) Parameters of the macro (optional)
(3) Macro definition (body)
How To Define a Macro:
A macro definition is written in the following order: the 'MACRO'
pseudo-op is the first line of the definition and identifies the following
line as the macro instruction name. Following the name line is the sequence
of instructions being abbreviated, that is, the instructions comprising the
body of the macro. The definition is terminated by a line with the MEND
pseudo-op.
Example of a Macro:
(1) Macro without parameters:
MACRO
MYMACRO
ADD AREG, X
ADD BREG, X
MEND
(2) Macro with parameters:
MACRO
ADDMACRO &A
ADD AREG, &A
ADD BREG, &A
MEND
The macro parameters (formal parameters) begin with '&' and are used as-is
in the operations. Formal parameters are those that appear in the definition
of the macro, whereas actual parameters are used while calling the macro.
How To Call a Macro?
A macro is called by writing the macro name with actual parameters in an
assembly program. A macro call leads to macro expansion. Syntax: []
Examples, for the definitions above: (1) MYMACRO (2) ADDMACRO X
Macro Expansion:
Each call to a macro is replaced by its body. During replacement, the actual
parameters are used in place of the formal parameters. During macro
expansion, each statement forming the body of the macro is picked up one
by one, sequentially. Each statement inside the macro may contain:
(1) An ordinary string, which is copied as-is during expansion.
(2) The name of a formal parameter, which is preceded by the character '&'.
During macro expansion, an ordinary string is retained without any
modification, while formal parameters (strings starting with '&') are
replaced by the actual parameter values.
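The substitution rule above can be sketched in a few lines of Java. The class name and the whitespace-token source format are simplifying assumptions made for illustration.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of macro expansion: each model statement is copied,
// with formal parameters (tokens starting with '&') replaced by the
// corresponding actual parameters; ordinary strings pass through.
public class MacroExpand {
    public static String expandLine(String model, Map<String, String> args) {
        StringBuilder out = new StringBuilder();
        for (String tok : model.trim().split("\\s+")) {
            out.append(tok.startsWith("&") ? args.getOrDefault(tok, tok) : tok)
               .append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        Map<String, String> actuals = Map.of("&A", "X");
        for (String model : List.of("ADD AREG, &A", "ADD BREG, &A")) {
            System.out.println(expandLine(model, actuals)); // &A becomes X
        }
    }
}
```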
Macro with Keyword Parameters:
These are the ways of calling macros with formal parameters. Formal
parameters are of two types:
(1) Positional parameters: initiated with '&'. Example: mymacro &X
(2) Keyword parameters: also initiated with '&', but having a default value.
During a call to the macro, a keyword parameter is specified by its name.
Example: mymacro &X=A
Data Structures of a Two-Pass Macro Processor:
1] Macro Name Table Pointer (MNTP): gives the number of entries in the MNT.
2] Macro Definition Table Pointer (MDTP): gives the address of a macro
definition in the MDT.
3] Macro Name Table (MNT): holds the macro number (the index referenced
by the MNTP), the name of each macro, and its MDTP (pointing to the start
of its definition in the MDT).
4] Macro Definition Table (MDT): holds a location counter (the MDTP points
to the start position of each macro), the opcode, and the rest of the
statement (the parts other than the opcode used in the macro).
5] Argument List Array (ALA): maps the index given to each parameter to
the parameter's name.
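The MNT/MDT construction of pass 1 can be sketched as below. This is a deliberately simplified, single-definition-at-a-time version; the source format and class name are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the pass-1 tables: MNT maps a macro name to the
// index in MDT where its definition starts; MDT stores the prototype
// line, the model statements, and the terminating MEND.
public class MacroPassOne {
    final Map<String, Integer> mnt = new LinkedHashMap<>(); // name -> MDT index
    final List<String> mdt = new ArrayList<>();             // model statements

    public void process(String[] source) {
        boolean inDef = false, expectName = false;
        for (String line : source) {
            String t = line.trim();
            if (t.equals("MACRO")) { inDef = true; expectName = true; continue; }
            if (!inDef) continue;                 // non-macro lines pass through
            if (expectName) {                     // prototype line: name [params]
                mnt.put(t.split("\\s+")[0], mdt.size());
                expectName = false;
            }
            mdt.add(t);                           // prototype + body go into MDT
            if (t.equals("MEND")) inDef = false;
        }
    }

    public static void main(String[] args) {
        MacroPassOne p = new MacroPassOne();
        p.process(new String[]{"MACRO", "ADDMACRO &A",
                               "ADD AREG, &A", "ADD BREG, &A", "MEND"});
        System.out.println("MNT: " + p.mnt);  // MNT: {ADDMACRO=0}
        System.out.println("MDT: " + p.mdt);
    }
}
```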
CONCLUSION
Thus, I have implemented the Pass-1 macro processor by producing the MNT
and MDT tables.
ASSIGNMENT 2 PART 2
TITLE
Write a Java program for pass-II of a two-pass macro-processor. The
output of assignment-3 (MNT, MDT and file without any macro
definitions) should be input for this assignment.
OBJECTIVES
- To understand Data structure Pass-2 macro processor
- To understand Pass-1 & Pass-2 macro processor concept
- To understand Advanced macro facility
PROBLEM STATEMENT
Write a Java program for pass-II of a two-pass macro-processor. The
output of assignment-3 (MNT, MDT and file without any macro
definitions) should be input for this assignment
OUTCOMES
After completion of this assignment students will be able to:
- Implement a Pass-2 macro processor
- Generate machine code from the MDT and MNT tables
- Understand the concept of a Pass-2 macro processor
SOFTWARE & HARDWARE REQUIREMENTS
OS required - Windows, Linux
IDE - Visual Studio Code, Eclipse, etc.
Programming language - Java
Processor - Intel Core i3 or above
RAM - 4 GB or above
THEORY
Advanced Macro Facilities:
(1) AIF
(2) AGO
(3) Sequential Symbol
(4) Expansion time variable
(1) AIF
Use the AIF instruction to branch according to the results of a condition
test. You can thus alter the sequence in which source program
statements or macro definition statements are processed by the
assembler. The AIF instruction also provides loop control for
conditional assembly processing, which lets you control the sequence of
statements to be generated. It also lets you check for error conditions and
thereby to branch to the appropriate MNOTE instruction to issue an
error message.
(2) AGO
The AGO instruction branches unconditionally. You can thus alter the
sequence in which your assembler language statements are processed.
This provides you with final exits from conditional assembly loops.
(3) Sequence Symbols
You can use a sequence symbol in the name field of a statement to
branch to that statement during conditional assembly processing, thus
altering the sequence in which the assembler processes your conditional
assembly and macro instructions. You can select the model statements
from which the assembler generates assembler language statements for
processing at assembly time. A sequence symbol consists of a period (.)
followed by an alphabetic character, followed by 0 to 61 alphanumeric
characters.
Examples:
.BRANCHING_LABEL#1
.A
Sequence symbols can be specified in the name field of assembler
language statements and model statements; however, sequence symbols
must not be used as name entries in the following assembler instructions:
ALIAS EQU OPSYN SETC
AREAD ICTL SETA SETAF
CATTR LOCTR SETB SETCF
DXD
Also, sequence symbols cannot be used as name entries in macro
prototype instructions, or in any instruction that already contains an
ordinary or a variable symbol in the name field. Sequence symbols can
be specified in the operand field of an AIF or AGO instruction to branch
to a statement with the same sequence symbol as a label.
(4) Expansion Time Variables
Expansion-time variables are variable symbols whose values can be set and
changed during macro expansion using the SET instructions (SETA, SETB,
and SETC), and are used to control conditional assembly.
Data Structures of the Two-Pass Macro Processor:
1] Input source program for pass II, as produced by pass I.
2] Macro Definition Table (MDT), produced by pass I: holds a location
counter (the MDTP points to the start position of each macro), the opcode,
and the rest of the statement (the parts other than the opcode used in the
macro).
3] Macro Name Table (MNT), produced by pass I: holds the macro number
(the index referenced by the MNTP), the name of each macro, and its MDTP
(pointing to the start of its definition in the MDT).
4] MNTP (Macro Name Table Pointer): gives the number of entries in the MNT.
5] Argument List Array (ALA): maps the index given to each parameter to
the parameter's name, giving the association between integer indices and
actual parameters.
6] Source program with macro calls expanded: the output of pass II.
7] MDTP (Macro Definition Table Pointer): gives the address of a macro
definition in the Macro Definition Table.
Algorithm:
Take the input from pass I and examine all statements in the assembly
source program to detect macro calls. For each macro call:
(a) Locate the macro name in the MNT.
(b) Establish the correspondence between formal parameters and actual
parameters.
(c) Obtain from the MNT the position of the macro definition in the MDT.
(d) Expand the macro call by picking up the model statements from the MDT.
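The steps (a) through (d) above can be sketched as follows. For brevity this assumes a single formal parameter per macro and the simplified MNT/MDT layout described earlier; names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of pass 2: locate the macro in MNT, bind the formal
// parameter from the prototype line to the actual argument, and copy
// model statements from MDT with substitution until MEND.
public class MacroPassTwo {
    public static List<String> expandCall(String call,
                                          Map<String, Integer> mnt,
                                          List<String> mdt) {
        String[] tok = call.trim().split("\\s+");
        Integer start = mnt.get(tok[0]);          // (a) locate name in MNT
        if (start == null) return List.of(call);  // not a macro call
        String formal = mdt.get(start).split("\\s+")[1]; // from prototype line
        String actual = tok[1];                   // (b) formal -> actual
        List<String> out = new ArrayList<>();
        for (int i = start + 1; !mdt.get(i).equals("MEND"); i++) { // (c), (d)
            out.add(mdt.get(i).replace(formal, actual));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> mnt = Map.of("ADDMACRO", 0);
        List<String> mdt = List.of("ADDMACRO &A",
                                   "ADD AREG, &A", "ADD BREG, &A", "MEND");
        System.out.println(expandCall("ADDMACRO X", mnt, mdt));
        // [ADD AREG, X, ADD BREG, X]
    }
}
```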
CONCLUSION
Thus, I have implemented Pass-2 macro processor by taking input as
output of assignment-3 (i.e., MDT and MNT table).
ASSIGNMENT 3
TITLE
Dynamic-Link Libraries
OBJECTIVES
To understand how a dynamic-link library works.
PROBLEM STATEMENT
Write a program to create a Dynamic Link Library for any
mathematical operation and write an application program to test it.
(Java Native Interface / Use VB or VC++).
OUTCOMES
To learn the interrelated working of two or more programs using a
DLL.
SOFTWARE & HARDWARE REQUIREMENTS
OS required - Windows, Linux
IDE - Visual Studio Code, Eclipse, etc.
Programming language - Java
Processor - Intel Core i3 or above
RAM - 4 GB or above
THEORY
Dynamic linking allows a module to include only the information
needed to locate an exported DLL function at load time or run time.
Dynamic linking differs from the more familiar static linking, in
which the linker copies a library function's code into each module that
calls it.
Types of Dynamic Linking
There are two methods for calling a function in a DLL:
In load-time dynamic linking, a module makes explicit calls to
exported DLL functions as if they were local functions. This requires
you to link the module with the import library for the DLL that
contains the functions. An import library supplies the system with the
information needed to load the DLL and locate the exported DLL
functions when the application is loaded.
In run-time dynamic linking, a module uses the LoadLibrary or
LoadLibraryEx function to load the DLL at run time. After the DLL is
loaded, the module calls the GetProcAddress function to get the
addresses of the exported DLL functions. The module calls the
exported DLL functions using the function pointers returned by
GetProcAddress. This eliminates the need for an import library.
DLLs and Memory Management
Every process that loads the DLL maps it into its virtual address
space. After the process loads the DLL into its virtual address, it can
call the exported DLL functions.
The system maintains a per-process reference count for each DLL.
When a thread loads the DLL, the reference count is incremented by
one. When the process terminates, or when the reference count
becomes zero (run-time dynamic linking only), the DLL is unloaded
from the virtual address space of the process.
Like any other function, an exported DLL function runs in the context
of the thread that calls it. Therefore, the following conditions apply:
The threads of the process that called the DLL can use handles opened
by a DLL function. Similarly, handles opened by any thread of the
calling process can be used in the DLL function.
The DLL uses the stack of the calling thread and the virtual address
space of the calling process.
The DLL allocates memory from the virtual address space of the
calling process.
Using Run-Time Dynamic Linking
You can use the same DLL in both load-time and run-time dynamic
linking. The following example uses the LoadLibrary function to get a
handle to the Myputs DLL (see Creating a Simple Dynamic-Link
Library). If LoadLibrary succeeds, the program uses the returned
handle in the GetProcAddress function to get the address of the DLL's
myPuts function. After calling the DLL function, the program calls
the FreeLibrary function to unload the DLL.
Because the program uses run-time dynamic linking, it is not
necessary to link the module with an import library for the DLL.
The example illustrates an important difference between run-time and
load-time dynamic linking. If the DLL is not available, the application
using load-time dynamic linking must simply terminate. The run-time
dynamic linking example, however, can respond to the error.
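The LoadLibrary/GetProcAddress calls themselves are Windows C APIs and cannot be written in portable Java, but Java reflection gives a close analogy to run-time dynamic linking: a class is loaded by name at run time, a method is looked up by name, and a missing "library" can be handled instead of terminating. This is an illustration of the pattern, not the Windows API itself.

```java
import java.lang.reflect.Method;

// Analogy to run-time dynamic linking in pure Java: Class.forName plays
// the role of LoadLibrary, getMethod of GetProcAddress, and invoke of
// calling through the returned function pointer. The catch block shows
// how unavailability can be handled gracefully, like the run-time
// dynamic linking example described above.
public class RuntimeLookup {
    public static void main(String[] args) {
        try {
            Class<?> lib = Class.forName("java.lang.Math");         // "load the library"
            Method fn = lib.getMethod("max", int.class, int.class); // "get the address"
            Object result = fn.invoke(null, 3, 7);                  // "call through the pointer"
            System.out.println(result);                             // prints 7
        } catch (ReflectiveOperationException e) {
            System.out.println("library not available, recovering: " + e);
        }
    }
}
```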
CONCLUSION
Successfully implemented and learned about Dynamic Link Library.
ASSIGNMENT 4
TITLE
SEMAPHORE AND MUTEX
OBJECTIVE
To understand the problem of synchronization and to solve it using the
concepts of semaphores and mutexes.
PROBLEM STATEMENT
Write and implement a program to solve Classical Problems of
Synchronization using Mutex and Semaphore
OUTCOMES
After the completion of this assignment, one will understand process
synchronization and how to solve classical synchronization problems using
mutexes and semaphores.
SOFTWARE AND HARDWARE REQUIREMENTS
Operating system - Windows/Linux/macOS
Java JDK installed
IDE - NetBeans/Eclipse
THEORY
SYNCHRONIZATION IN OPERATING SYSTEMS
In an operating system there are processes that share the same piece of
code, and to reduce code redundancy a critical section is used: it holds
the common code, and the processes that want to access that code use it
when they need to. Suppose, as an example, there are two processes P1 and
P2 that access the same piece of code and alter a shared variable named
'a' by running the following code:
process() {
    read(a);
    a = a + 5;
    write(a);
}
Let the initial value of a be 5 and consider two scenarios:
1. Process P1 executes first and changes the value of the variable to
10; then process P2 runs and increments the value of a, making it 15.
2. Process P1 executes and reads the value of a, but then, due to a
context switch, process P2 starts running; P2 reads a and increments it
to 10, after which P1 resumes and also writes 10, since it had read the
value 5.
In these two cases each process leaves the variable in a state that depends
on the execution order; the processes are racing to produce their output.
This is known as a race condition, and it needs to be removed to achieve
synchronization.
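The lost update in the second scenario can be shown with a deterministic simulation of that exact interleaving (real threads would make the outcome nondeterministic, which is the point of the race):

```java
// Deterministic simulation of scenario 2: both processes read a = 5
// before either writes, so one increment is lost and the final value
// is 10 instead of the intended 15.
public class RaceDemo {
    public static int lostUpdate(int a) {
        int r1 = a;        // P1 reads a
        int r2 = a;        // context switch: P2 reads the same value
        a = r2 + 5;        // P2 writes 10
        a = r1 + 5;        // P1 resumes and writes 10, overwriting P2's update
        return a;
    }

    public static void main(String[] args) {
        System.out.println(lostUpdate(5)); // prints 10, not the intended 15
    }
}
```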
Synchronization is a state of process execution in which three conditions
are met; these three conditions are necessary to achieve synchronization:
- Mutual exclusion: access to the critical section must be mutually
exclusive; only a limited number of processes can be inside the section
at a time.
- Progress: there must never be a situation where no process is able to
enter the critical section.
- Bounded wait: there must also never be a situation where one process
enters the critical section multiple times while never letting the others
into the section.
To achieve these conditions, semaphores and mutexes are used.
SEMAPHORES AND MUTEX
A semaphore is simply a variable used to achieve process synchronization.
Whenever a process enters the critical section it runs an entry code, and
whenever it leaves the critical section it runs an exit code; semaphores
are implemented in this entry section and exit section. Suppose a semaphore
variable s and consider the following code:
// entry code
entry(s) {
    s--;
    if (s < 0) {
        process.sleep();   // block: the section is occupied
    }
}
// exit code
exit(s) {
    s++;
    if (s <= 0) {
        process.wakeup();  // a process is waiting; wake one up
    }
}
In this code, let the initial value of s be 1. When a process enters, it
runs entry(), which decreases s; the process is allowed into the critical
section only if s is still at least zero after the decrement, otherwise it
is blocked and placed in the waiting queue. When the process leaves the
critical section, exit() increases the value of s and wakes a waiting
process, so a new process can enter. This achieves mutual exclusion,
progress, and bounded wait.
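The entry/exit protocol above maps directly onto Java's standard java.util.concurrent.Semaphore: acquire() plays the role of entry(s) and release() of exit(s). A binary semaphore (one permit) gives mutual exclusion; the shared-variable example from earlier then loses no update.

```java
import java.util.concurrent.Semaphore;

// The entry/exit protocol sketched above, using java.util.concurrent.
// Semaphore with one permit as a mutex around the critical section.
public class CriticalSection {
    static final Semaphore mutex = new Semaphore(1);
    static int a = 5;

    static void process() throws InterruptedException {
        mutex.acquire();            // entry: s--, block if s < 0
        try {
            a = a + 5;              // critical section
        } finally {
            mutex.release();        // exit: s++, wake a waiting process
        }
    }

    public static void main(String[] args) throws Exception {
        Thread p1 = new Thread(() -> { try { process(); } catch (InterruptedException e) {} });
        Thread p2 = new Thread(() -> { try { process(); } catch (InterruptedException e) {} });
        p1.start(); p2.start();
        p1.join(); p2.join();
        System.out.println(a);      // prints 15: no update is lost
    }
}
```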
CONCLUSION
We successfully implemented semaphores and mutexes. In this assignment we
learned the concept of semaphores and mutexes and how they can be used to
avoid race conditions and achieve synchronization.
ASSIGNMENT 5
TITLE
PROCESS SCHEDULING ALGORITHMS
OBJECTIVE
To understand and implement process scheduling algorithms such as FCFS,
SJF, priority, and round robin.
PROBLEM STATEMENT
Write a program to simulate CPU Scheduling Algorithms: FCFS, SJF, SRTF,
Priority and Round Robin.
OUTCOMES
After the completion of this assignment, one will be able to implement
both preemptive and non-preemptive process scheduling algorithms, and
will have developed an understanding of CPU scheduling algorithms.
SOFTWARE AND HARDWARE REQUIREMENTS
Operating system - Windows/Linux/macOS
Java JDK installed
IDE - NetBeans/Eclipse
THEORY
Process scheduling is required to schedule processes so that the CPU runs
them in a specific order. It is needed to achieve multiprogramming, so
that the CPU can run another process while one process is waiting.
There are several CPU scheduling algorithms, such as:
FCFS: FIRST COME FIRST SERVE
This is a non-preemptive scheduling algorithm in which the CPU prioritizes
processes on the basis of arrival time: the process that arrives first gets
the CPU first, the CPU executes it to completion, and then the next process
acquires the CPU. It is the simplest scheduling algorithm and is implemented
using a FIFO (first in, first out) queue. The queue stores the processes in
the order they arrive; when the CPU is done with one process, it is acquired
by the process that is next in the queue.
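The FCFS behaviour can be simulated with a short loop that charges each process the time it spends in the queue. This sketch assumes processes are given in arrival order; names are illustrative.

```java
import java.util.Arrays;

// Minimal FCFS simulation: processes are served in arrival order; a
// process's waiting time is how long it sits in the FIFO queue before
// getting the CPU, and the CPU may idle until the next arrival.
public class FCFS {
    public static int[] waitingTimes(int[] arrival, int[] burst) {
        int n = arrival.length;
        int[] wait = new int[n];
        int clock = 0;
        for (int i = 0; i < n; i++) {
            clock = Math.max(clock, arrival[i]); // CPU may be idle until arrival
            wait[i] = clock - arrival[i];        // time spent in the queue
            clock += burst[i];                   // run to completion
        }
        return wait;
    }

    public static void main(String[] args) {
        int[] w = waitingTimes(new int[]{0, 1, 2}, new int[]{4, 3, 1});
        System.out.println(Arrays.toString(w)); // [0, 3, 5]
    }
}
```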
SJF: SHORTEST JOB FIRST
The shortest job first algorithm states that the process with the minimum
burst time, i.e., the one that will take the least time to complete, is
allocated the CPU first. This algorithm is also non-preemptive. After the
completion of each process, it checks the ready queue for the process with
the minimum burst time and allocates it to the CPU.
SRTF: SHORTEST REMAINING TIME FIRST
This is a preemptive algorithm that works like shortest job first, but
instead of running each process to completion, it checks at every moment
whether a process with a shorter remaining burst time has arrived. The
moment there is a process with a shorter remaining burst time than the
current process, the current process is preempted into the ready queue
and the CPU is allocated to the process with the shorter remaining time.
PRIORITY SCHEDULING
This algorithm can be either preemptive or non-preemptive.
The non-preemptive priority algorithm runs processes on the basis of their
priority: the process with the highest priority gets the CPU first, and
after its completion, the process with the highest priority among the
remaining processes is allocated the CPU.
Preemptive priority scheduling also runs processes on the basis of their
priority, but it additionally checks at every unit of time whether there
is a process with a higher priority. If it finds one, it preempts the
current process, allocates the CPU to the higher-priority process, and
puts the current process back into the ready queue.
ROUND ROBIN SCHEDULING
This is also a preemptive process scheduling algorithm. In round robin a
time quantum is defined, and every process runs for at most this time
quantum before being preempted back into the ready queue. If a process's
remaining burst time is equal to or less than the time quantum, it runs to
completion and the CPU moves on to the next process; if its burst time is
greater than the time quantum, it runs for the quantum and is then
preempted into the ready queue. For example, if the time quantum is 2
units of time, then any process runs for at most 2 units at a stretch; if
it is not completed, it is pushed back into the ready queue and the next
process in the queue is allocated the CPU.
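The quantum-and-requeue behaviour described above can be sketched with an explicit ready queue. For simplicity this assumes all processes arrive at time 0; the class name is illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal round-robin simulation, all arrivals at time 0: each process
// runs for at most one time quantum, then rejoins the back of the ready
// queue until its remaining burst reaches zero.
public class RoundRobin {
    public static int[] completionTimes(int[] burst, int quantum) {
        int n = burst.length;
        int[] remaining = burst.clone();
        int[] done = new int[n];
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++) ready.add(i);
        int clock = 0;
        while (!ready.isEmpty()) {
            int p = ready.poll();
            int slice = Math.min(quantum, remaining[p]);
            clock += slice;
            remaining[p] -= slice;
            if (remaining[p] > 0) ready.add(p);  // preempted back to the queue
            else done[p] = clock;                // finished within this slice
        }
        return done;
    }

    public static void main(String[] args) {
        int[] c = completionTimes(new int[]{5, 3, 1}, 2);
        System.out.println(java.util.Arrays.toString(c)); // [9, 8, 5]
    }
}
```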
CONCLUSION
In this assignment we successfully implemented various process scheduling
algorithms and developed an understanding of process scheduling and the
various algorithms used for it.
ASSIGNMENT 6
TITLE
MEMORY PLACEMENT STRATEGIES
OBJECTIVE
To understand how memory placement works in modern computing
devices and what the different strategies for it are.
PROBLEM STATEMENT
Write a program to simulate Memory placement strategies – best fit, first fit,
next fit and worst fit.
OUTCOMES
After this assignment, one will be able to implement memory
placement algorithms and will have an understanding of memory
placement strategies.
SOFTWARE AND HARDWARE REQUIREMENTS
Operating system - Windows/Linux/macOS
Java JDK installed
IDE - NetBeans/Eclipse
THEORY
In the operating system, the following are four common memory management
techniques. Single contiguous allocation: Simplest allocation method used by
MS-DOS. All memory (except some reserved for OS) is available to a process.
Partitioned allocation: Memory is divided into different blocks or partitions.
Each process is allocated according to the requirement. Paged memory
management: Memory is divided into fixed-sized units called page frames,
used in a virtual memory environment.
Segmented memory management: Memory is divided into different segments
(a segment is a logical grouping of the process's data or code). In this
management, allocated memory doesn't have to be contiguous.
Most of the operating systems (for example Windows and Linux) use
Segmentation with Paging. A process is divided into segments and individual
segments have pages.
In Partition Allocation, when there is more than one partition freely available
to accommodate a process’s request, a partition must be selected. To choose a
particular partition, a partition allocation method is needed. A partition
allocation method is considered better if it avoids internal fragmentation.
When it is time to load a process into the main memory and if there is more
than one free block of memory of sufficient size then the OS decides which
free block to allocate.
There are different placement algorithms:
First Fit
Best Fit
Worst Fit
Next Fit
First Fit: the process is allocated the first sufficient block from the
top of main memory. The algorithm scans memory from the beginning and
chooses the first available block that is large enough; thus it allocates
the first hole that is large enough.
Best Fit: the process is allocated the smallest sufficient partition among
the free available partitions. The algorithm searches the entire list of
holes to find the smallest hole whose size is greater than or equal to the
size of the process.
Worst Fit: the process is allocated the largest sufficient partition among
the freely available partitions in main memory. It is the opposite of the
best-fit algorithm: it searches the entire list of holes to find the
largest hole and allocates it to the process.
Next Fit: similar to first fit, but the search begins from where the
previous allocation ended rather than from the beginning of memory.
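The three searches over the free list differ only in their selection rule, which the following sketch makes explicit. It works on an array of free block sizes and returns the index of the chosen block; the class name and representation are assumptions for illustration.

```java
// Minimal sketch of the placement strategies over a list of free block
// sizes: each method returns the index of the chosen block, or -1 if
// no block can hold the process.
public class Placement {
    public static int firstFit(int[] blocks, int size) {
        for (int i = 0; i < blocks.length; i++)
            if (blocks[i] >= size) return i;     // first hole large enough
        return -1;
    }

    public static int bestFit(int[] blocks, int size) {
        int best = -1;
        for (int i = 0; i < blocks.length; i++)  // smallest sufficient hole
            if (blocks[i] >= size && (best == -1 || blocks[i] < blocks[best]))
                best = i;
        return best;
    }

    public static int worstFit(int[] blocks, int size) {
        int worst = -1;
        for (int i = 0; i < blocks.length; i++)  // largest sufficient hole
            if (blocks[i] >= size && (worst == -1 || blocks[i] > blocks[worst]))
                worst = i;
        return worst;
    }

    public static void main(String[] args) {
        int[] free = {100, 500, 200, 300, 600};
        System.out.println(firstFit(free, 212)); // 1 (500 is the first that fits)
        System.out.println(bestFit(free, 212));  // 3 (300 is the smallest that fits)
        System.out.println(worstFit(free, 212)); // 4 (600 is the largest)
    }
}
```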
CONCLUSION
In this assignment we successfully implemented memory placement
algorithms and developed an understanding of memory placement
algorithms and the concept of memory placement strategies in operating
systems.
ASSIGNMENT 7
TITLE
PAGE REPLACEMENT ALGORITHM
OBJECTIVE
To implement page replacement algorithm and to gain an understanding
of various page replacement algorithms.
PROBLEM STATEMENT
Write a program to simulate Page replacement algorithms.
OUTCOMES
After this assignment, one will be able to implement page replacement
algorithms and will have gained an understanding of page replacement
algorithms.
SOFTWARE AND HARDWARE REQUIREMENTS
Operating system - Windows/Linux/macOS
Java JDK installed
IDE - NetBeans/Eclipse
THEORY
A computer system has a limited amount of memory. Adding more memory
physically is very costly. Therefore, most modern computers use a
combination of both hardware and software to allow the computer to address
more memory than the amount physically present on the system. This extra
memory is actually called Virtual Memory.
Virtual Memory is a storage allocation scheme used by the Memory
Management Unit(MMU) to compensate for the shortage of physical memory
by transferring data from RAM to disk storage. It addresses secondary
memory as though it is a part of the main memory. Virtual Memory makes the
memory appear larger than actually present which helps in the execution of
programs that are larger than the physical memory.
Virtual Memory can be implemented using two methods :
Paging
Segmentation
Paging Paging is a process of reading data from, and writing data to, the
secondary storage. It is a memory management scheme that is used to retrieve
processes from the secondary memory in the form of pages and store them in
the primary memory. The main objective of paging is to divide each process in
the form of pages of fixed size. These pages are stored in the main memory in
frames. Pages of a process are only brought from the secondary memory to the
main memory when they are needed.
When an executing process refers to a page, it is first searched in the main
memory. If it is not present in the main memory, a page fault occurs.
Page Fault is the condition in which a running process refers to a page that is
not loaded in the main memory. In such a case, the OS has to bring the page
from the secondary storage into the main memory. This may cause some pages
in the main memory to be replaced due to limited storage. A Page
Replacement Algorithm is required to decide which page needs to be replaced.
Page Replacement Algorithm Page Replacement Algorithm decides which
page to remove, also called swap out when a new page needs to be loaded
into the main memory. Page Replacement happens when a requested page is
not present in the main memory and the available space is not sufficient for
allocation to the requested page.
When the page that was selected for replacement is paged out and then
referenced again, it has to be read back in from disk, which requires
waiting for I/O completion. This determines the quality of the page
replacement algorithm: the less time spent waiting for page-ins, the
better the algorithm.
A page replacement algorithm tries to select which pages should be replaced
so as to minimize the total number of page misses. There are many different
page replacement algorithms. These algorithms are evaluated by running
them on a particular string of memory reference and computing the number
of page faults. The fewer is the page faults the better is the algorithm for that
situation.
Some Page Replacement Algorithms :
First In First Out (FIFO)
Least Recently Used (LRU)
Optimal Page Replacement
1. First In First Out (FIFO)
This is the simplest page replacement algorithm. In this algorithm, the OS
maintains a queue that keeps track of all the pages in memory, with the
oldest page at the front and the most recent page at the back. When there
is a need for page replacement, the FIFO algorithm swaps out the page at
the front of the queue, that is, the page which has been in memory for the
longest time.
Advantages:
- Simple and easy to implement.
- Low overhead.
Disadvantages:
- Poor performance.
- Doesn't consider the frequency of use or the last used time; it simply
replaces the oldest page.
- Suffers from Belady's Anomaly (i.e., more page faults when the number
of page frames is increased).

2. Least Recently Used (LRU)
The least recently used page replacement algorithm keeps track of page
usage over a short period of time. It works on the idea that the pages
that have been most heavily used in the past are most likely to be used
heavily in the future too. In LRU, whenever page replacement happens, the
page which has not been used for the longest amount of time is replaced.
Advantages:
- Efficient.
- Doesn't suffer from Belady's Anomaly.
Disadvantages:
- Complex implementation.
- Expensive.
- Requires hardware support.

3. Optimal Page Replacement
The optimal page replacement algorithm is the best page replacement
algorithm as it gives the least number of page faults. It is also known as
OPT, the clairvoyant replacement algorithm, or Belady's optimal page
replacement policy.
In this algorithm, the pages that would not be used for the longest
duration of time in the future are replaced, i.e., the pages in memory
that are going to be referenced farthest in the future are replaced.
This algorithm was introduced long ago and is difficult to implement
because it requires future knowledge of the program's behaviour. However,
it is possible to implement optimal page replacement on a second run by
using the page reference information collected on the first run.
Advantages:
- Easy to implement.
- Simple data structures are used.
- Highly efficient.
Disadvantages:
- Requires future knowledge of the program.
- Time-consuming.
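Of the three, FIFO is the easiest to simulate: count the faults for a reference string with a fixed number of frames, evicting the oldest resident page when memory is full. The class name and the integer page numbering are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Minimal FIFO page replacement: count page faults for a reference
// string; on a fault with all frames full, the oldest resident page
// (front of the queue) is evicted.
public class FifoPaging {
    public static int pageFaults(int[] refs, int frames) {
        Set<Integer> resident = new HashSet<>();
        Deque<Integer> order = new ArrayDeque<>();  // arrival order of pages
        int faults = 0;
        for (int page : refs) {
            if (resident.contains(page)) continue;  // hit: nothing to do
            faults++;
            if (resident.size() == frames) {
                resident.remove(order.poll());      // evict the oldest page
            }
            resident.add(page);
            order.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {1, 3, 0, 3, 5, 6, 3};
        System.out.println(pageFaults(refs, 3)); // prints 6
    }
}
```

Replacing the eviction rule (oldest arrival) with least-recent use or farthest future use turns the same loop into LRU or OPT respectively.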
CONCLUSION
In this assignment we successfully implemented page replacement algorithms
and gained an understanding of page replacement algorithms.