OS (Operating System) Notes 2

Process synchronization is essential for ensuring consistent results when multiple processes execute concurrently and share system resources. It revolves around concepts such as critical sections and race conditions, and uses mechanisms such as semaphores to control access to shared resources and prevent problems like deadlock and starvation. The various synchronization mechanisms, such as lock variables, test-and-set locks, turn variables, and semaphores, differ in their characteristics and in which criteria for effective synchronization they satisfy.

Process Synchronization

When multiple processes execute concurrently sharing system resources, then inconsistent results
might be produced.
• Process Synchronization is a mechanism that deals with the synchronization of processes.
• It controls the execution of processes running concurrently to ensure that consistent results
are produced.
Need for Synchronization-
Process synchronization is needed-
• When multiple processes execute concurrently sharing some system resources.
• To avoid inconsistent results.
Critical Section-
Critical section is the section of the program where a process accesses the shared resources during its execution.
Example-
The following example shows how inconsistent results may be produced if multiple processes execute concurrently without any synchronization.
Consider-
• Two processes P1 and P2 are executing concurrently.
• Both the processes share a common variable named “count” having initial value = 5.
• Process P1 tries to increment the value of count.
• Process P2 tries to decrement the value of count.

Now, when these processes execute concurrently without synchronization, different results may
be produced.
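The race can be demonstrated directly. Below is a minimal C sketch (POSIX threads stand in for the two processes P1 and P2; the loop counts are illustrative) in which the unsynchronized count++ and count-- interleave and produce varying final values:

/* race.c -- two threads update a shared counter without synchronization.
   Compile: gcc race.c -pthread */
#include <stdio.h>
#include <pthread.h>

int count = 5;                      /* shared variable, initial value 5 */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        count++;                    /* read-modify-write: not atomic */
    return NULL;
}

void *decrement(void *arg) {
    for (int i = 0; i < 100000; i++)
        count--;                    /* interleaves with the increments */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, increment, NULL);
    pthread_create(&p2, NULL, decrement, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("count = %d\n", count);  /* expected 5, but often differs */
    return 0;
}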
Race Condition-
Race condition is a situation where-
• The final output produced depends on the execution order of instructions of different
processes.
• Several processes compete with each other.
The above example is a good illustration of race condition.
Critical Section-
• Process Synchronization controls the execution of processes running concurrently so as to
produce the consistent results.
• Critical section is a part of the program where shared resources are accessed by the process.
Critical Section Problem-
• If multiple processes access the critical section concurrently, then the results produced might be inconsistent.
• This problem is called the critical section problem.
Synchronization Mechanisms-
Synchronization mechanisms allow the processes to access the critical section in a synchronized manner to avoid inconsistent results.
For every critical section in the program, a synchronization mechanism adds-
• An entry section before the critical section
• An exit section after the critical section
Entry Section-
• It acts as a gateway for a process to enter inside the critical section.
• It ensures that only one process is present inside the critical section at any time.
• It does not allow any other process to enter inside the critical section if one process is
already present inside it.
Exit Section-
• It acts as an exit gate for a process to leave the critical section.
• When a process exits the critical section, some changes are made so that other processes can enter the critical section.
Criteria For Synchronization Mechanisms-
Any synchronization mechanism proposed to handle the critical section problem should meet the
following criteria-
1. Mutual Exclusion
2. Progress
3. Bounded Wait
4. Architectural Neutrality
1. Mutual Exclusion-
The mechanism must ensure-
• The processes access the critical section in a mutual exclusive manner.
• Only one process is present inside the critical section at any time.
• No other process can enter the critical section until the process already present inside it
completes.
2. Progress-
The mechanism must ensure-
• An entry of a process inside the critical section is not dependent on the entry of another
process inside the critical section.
• A process can freely enter inside the critical section if there is no other process present
inside it.
• A process enters the critical section only if it wants to enter.
• A process is not forced to enter inside the critical section if it does not want to enter.
3. Bounded Wait-
The mechanism should ensure-
• The wait of a process to enter the critical section is bounded.
• There is a limit on how many times other processes may enter the critical section before a waiting process gets its turn, so every waiting process eventually gets to enter.
4. Architectural Neutrality-
The mechanism should ensure-
• It can run on any architecture without any problem.
• It does not depend on any particular hardware architecture.

Important Notes-
Note-01:
• Mutual Exclusion and Progress are the mandatory criteria.
• They must be fulfilled by all the synchronization mechanisms.
Note-02:
• Bounded waiting and Architectural neutrality are the optional criteria.
• However, it is recommended to meet these criteria if possible.

Lock Variable-
• Lock variable is a synchronization mechanism.
• It uses a lock variable to provide the synchronization among the processes executing
concurrently.
• However, it completely fails to provide the synchronization.
It is implemented as-
Initially, lock value is set to 0.
• Lock value = 0 means the critical section is currently vacant and no process is present
inside it.
• Lock value = 1 means the critical section is currently occupied and a process is present
inside it.
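A minimal C sketch of the intended implementation (the function names enter_region and leave_region are illustrative, not part of the original notes):

int lock = 0;               /* 0 = critical section vacant, 1 = occupied */

void enter_region(void) {
    while (lock != 0)       /* busy-wait while another process holds it */
        ;
    lock = 1;               /* occupy -- note: NOT atomic with the test */
}

void leave_region(void) {
    lock = 0;               /* mark the critical section vacant again */
}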
Working-
This synchronization mechanism is supposed to work as explained in the following scenes-
Scene-01:
• Process P0 arrives.
• It executes the test lock != 0.
• Since the lock value is 0, the test returns 0 and the while loop condition breaks.
• It sets the lock value to 1 and enters the critical section.
• Now, even if process P0 gets preempted in the middle, no other process is supposed to be able to enter the critical section.
• Any other process can enter only after process P0 completes and sets the lock value to 0.
Scene-02:
• Another process P1 arrives.
• It executes the test lock != 0.
• Since the lock value is 1, the test returns 1 and the while loop condition does not break.
• The process P1 is trapped inside an infinite while loop.
• The while loop keeps the process P1 busy until the lock value becomes 0 and the loop condition breaks.
Scene-03:
• Process P0 comes out of the critical section and sets the lock value to 0.
• The while loop condition of process P1 breaks.
• P1 sets the lock value to 1 and enters the critical section.
• Now, even if process P1 gets preempted in the middle, no other process is supposed to be able to enter the critical section.
• Any other process can enter only after process P1 completes and sets the lock value to 0.

Failure of the Mechanism-
• The mechanism completely fails to provide synchronization among the processes.
• It cannot even guarantee the basic criterion of mutual exclusion.
• The reason is that the test (lock != 0) and the assignment (lock = 1) are two separate instructions. A process may be preempted after finding lock = 0 but before setting lock = 1; a second process then also finds lock = 0, and both processes end up inside the critical section together.
Characteristics-
The characteristics of this synchronization mechanism are-
• It can be used for any number of processes.
• It is a software mechanism implemented in user mode.
• There is no support required from the operating system.
• It is a busy waiting solution which keeps the CPU busy when the process is actually
waiting.
• It does not fulfill even mutual exclusion, the most basic criterion of a synchronization mechanism.
Conclusion-
• The lock variable synchronization mechanism is a complete failure.
• Thus, it is never used.

Test and Set Lock-
• Test and Set Lock (TSL) is a synchronization mechanism.
• It uses a test-and-set instruction to provide the synchronization among the processes executing concurrently.

Test-and-Set Instruction
• It is an instruction that returns the old value of a memory location and sets the memory
location value to 1 as a single atomic operation.
• If one process is currently executing a test-and-set, no other process is allowed to begin
another test-and-set until the first process test-and-set is finished.

It is implemented as-
Initially, lock value is set to 0.
• Lock value = 0 means the critical section is currently vacant and no process is present
inside it.
• Lock value = 1 means the critical section is currently occupied and a process is present
inside it.
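A minimal C sketch, using a GCC atomic builtin to stand in for the hardware test-and-set instruction (the function names are illustrative):

int lock = 0;   /* 0 = critical section vacant, 1 = occupied */

void enter_region(void) {
    /* __sync_lock_test_and_set atomically stores 1 into lock and
       returns its previous value, so the test and the set can never
       be separated by a preemption. */
    while (__sync_lock_test_and_set(&lock, 1) == 1)
        ;                           /* spin: the lock was already taken */
}

void leave_region(void) {
    __sync_lock_release(&lock);     /* atomically resets lock to 0 */
}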
Working-
This synchronization mechanism works as explained in the following scenes-
Scene-01:
• Process P0 arrives.
• It executes the test-and-set(lock) instruction.
• Since the lock value is 0, the instruction returns 0 to the while loop and sets the lock value to 1.
• The returned value 0 breaks the while loop condition.
• Process P0 enters the critical section and executes.
• Now, even if process P0 gets preempted in the middle, no other process can enter the critical section.
• Any other process can enter only after process P0 completes and sets the lock value to 0.
Scene-02:
• Another process P1 arrives.
• It executes the test-and-set(lock) instruction.
• Since the lock value is now 1, the instruction returns 1 to the while loop and leaves the lock value at 1.
• The returned value 1 does not break the while loop condition.
• The process P1 is trapped inside an infinite while loop.
• The while loop keeps the process P1 busy until the lock value becomes 0 and the loop condition breaks.
Scene-03:
• Process P0 comes out of the critical section and sets the lock value to 0.
• The while loop condition of process P1 breaks.
• Process P1, waiting for the critical section, now enters it.
• Now, even if process P1 gets preempted in the middle, no other process can enter the critical section.
• Any other process can enter only after process P1 completes and sets the lock value to 0.
Characteristics-
The characteristics of this synchronization mechanism are-
• It ensures mutual exclusion.
• It is deadlock free.
• It does not guarantee bounded waiting and may cause starvation.
• It is a spin lock: a waiting process keeps spinning on the test.
• It is not architecturally neutral since it requires hardware support for the test-and-set instruction.
• It is a busy waiting solution which keeps the CPU busy when the process is actually waiting.

Turn Variable-
• Turn variable is a synchronization mechanism that provides synchronization among two
processes.
• It uses a turn variable to provide the synchronization.
It is implemented as-
Initially, turn value is set to 0.
• Turn value = 0 means it is the turn of process P0 to enter the critical section.
• Turn value = 1 means it is the turn of process P1 to enter the critical section.
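A minimal C sketch of the two entry and exit sections (the function names are illustrative):

int turn = 0;                 /* 0: P0 may enter, 1: P1 may enter */

void p0_enter(void) { while (turn != 0) ; }   /* P0 waits for its turn   */
void p0_exit(void)  { turn = 1; }             /* P0 hands the turn to P1 */

void p1_enter(void) { while (turn != 1) ; }   /* P1 waits for its turn   */
void p1_exit(void)  { turn = 0; }             /* P1 hands the turn to P0 */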

Working-
This synchronization mechanism works as explained in the following scenes-
Scene-01:
• Process P0 arrives.
• It executes the test turn != 0.
• Since the turn value is 0, the test returns 0 and the while loop condition breaks.
• Process P0 enters the critical section and executes.
• Now, even if process P0 gets preempted in the middle, process P1 can not enter the critical section.
• Process P1 can not enter unless process P0 completes and sets the turn value to 1.
Scene-02:
• Process P1 arrives.
• It executes the test turn != 1.
• Since the turn value is 0, the test returns 1 and the while loop condition does not break.
• The process P1 is trapped inside an infinite while loop.
• The while loop keeps the process P1 busy until the turn value becomes 1 and the loop condition breaks.
Scene-03:
• Process P0 comes out of the critical section and sets the turn value to 1.
• The while loop condition of process P1 breaks.
• Process P1, waiting for the critical section, now enters it and executes.
• Now, even if process P1 gets preempted in the middle, process P0 can not enter the critical section.
• Process P0 can not enter unless process P1 completes and sets the turn value to 0.
Characteristics-
The characteristics of this synchronization mechanism are-
• It ensures mutual exclusion.
• It follows the strict alternation approach.

Strict Alternation Approach

In the strict alternation approach,
• Processes have to compulsorily enter the critical section alternately, whether they want to or not.
• This is because if one process does not enter the critical section, the other process will never get a chance to execute again.

• It does not guarantee progress since it follows the strict alternation approach.
• It ensures bounded waiting since processes execute turn wise, one by one, and each process is guaranteed to get a chance.
• It ensures that processes do not starve.
• It is architecturally neutral since it does not require any support from the operating system or special hardware.
• It is deadlock free.
• It is a busy waiting solution which keeps the CPU busy when the process is actually waiting.
Semaphores in OS-
• A semaphore is a simple integer variable.
• It is used to provide synchronization among multiple processes running concurrently.
Semaphores are integer variables used to address the critical section problem through two atomic operations, wait and signal, which are crucial for process synchronization. A semaphore consists of an integer value together with a waiting list of processes.
By controlling access to shared resources, semaphores prevent critical section issues in systems with multiple concurrent processes. Counting semaphores allow an arbitrary resource count.
The two atomic operations on a semaphore are wait(S), which decrements S and makes the process wait while S is non-positive, and signal(S), which increments S, allowing process synchronization.
The definitions of wait and signal are as follows −
Wait Operation
The wait operation checks the semaphore's value. If the value is greater than 0, the process continues and S is decremented by 1. If the value is 0 (or less), the process waits until S becomes positive. In the busy-waiting definition below, the waiting is a spin loop:
wait(S) {
    while (S <= 0)
        ;        // busy-wait until S becomes positive
    S--;         // take one unit of the resource
}
Signal Operation
The signal operation increments its argument, S. After a process finishes using the shared resource, it performs the signal operation, which increases the semaphore's value by 1, potentially unblocking other waiting processes and allowing them to access the resource.
signal(S) {
    S++;         // release one unit of the resource
}

Types of Semaphores
There are two main types of semaphores: counting semaphores and binary semaphores. Details
about these are as follows−
• Counting Semaphores: These integer-value semaphores have an unrestricted value
domain. They are used to coordinate resource access, with the semaphore count
representing the number of available resources. If resources are added, the semaphore
count is incremented automatically; if resources are removed, the count is decremented.
• Binary Semaphores: Binary semaphores are similar to counting semaphores but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore value is 1; if it is 0, the process must wait. Implementing binary semaphores is sometimes easier than counting semaphores.
Working of Semaphore
A semaphore is a simple yet powerful synchronization tool used to manage access to shared
resources in a system with multiple processes. It works by maintaining a counter that controls
access to a specific resource, ensuring that no more than the allowed number of processes access
the resource at the same time.
There are two primary operations that a semaphore can perform:
1. Wait (P operation): This operation checks the semaphore’s value. If the value is greater
than 0, the process is allowed to continue, and the semaphore’s value is decremented by 1.
If the value is 0, the process is blocked (waits) until the semaphore value becomes greater
than 0.
2. Signal (V operation): After a process is done using the shared resource, it performs the
signal operation. This increments the semaphore’s value by 1, potentially unblocking other
waiting processes and allowing them to access the resource.
Now let us see how it does so. First, look at two operations that can be used to access and change
the value of the semaphore variable.

A critical section is surrounded by both operations to implement process synchronization: wait guards the entry and signal marks the exit. This is the basic mechanism by which semaphores control access to a critical section in a multi-process environment, ensuring that only the allowed number of processes access the shared resource at a time.

Now, let us see how it implements mutual exclusion. Let there be two processes P1 and P2 and a semaphore s initialized to 1. If P1 enters its critical section, the value of semaphore s becomes 0. If P2 then wants to enter its critical section, it must wait until s > 0, which can only happen when P1 finishes its critical section and calls the V operation on semaphore s. This way mutual exclusion is achieved. This is a binary semaphore.
The description above is for binary semaphore which can take only two values 0 and 1 and ensure
mutual exclusion. There is one other type of semaphore called counting semaphore which can take
values greater than one.
Now suppose there is a resource with 4 instances. We initialize S = 4, and the rest works the same as for a binary semaphore. Whenever a process wants the resource, it calls the P (wait) operation, and when it is done, it calls the V (signal) operation. If the value of S becomes zero, a process has to wait until S becomes positive again. For example, suppose there are 4 processes P1, P2, P3, and P4, and they all call the wait operation on S (initialized with 4). If another process P5 wants the resource, it must wait until one of the four processes calls the signal operation and the value of the semaphore becomes positive.
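A runnable sketch of this scenario using POSIX semaphores (threads stand in for the five processes; the sleep only simulates work holding a resource instance):

/* pool.c -- 4 resource instances guarded by a counting semaphore.
   Compile: gcc pool.c -pthread */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t s;                               /* counting semaphore */

void *worker(void *arg) {
    sem_wait(&s);                      /* P: take one instance, block at 0 */
    printf("process %ld using a resource instance\n", (long)arg);
    sleep(1);                          /* simulate work with the resource */
    sem_post(&s);                      /* V: return the instance */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&s, 0, 4);                /* 4 instances available, like S = 4 */
    for (long i = 1; i <= 5; i++)      /* the 5th caller must wait */
        pthread_create(&t[i-1], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&s);
    return 0;
}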
Limitations
• One of the biggest limitations of semaphores is priority inversion.
• Deadlock or lost wakeup: if a process tries to wake up another process that is not yet asleep, the wakeup signal is lost, and the other process may stay blocked indefinitely.
• The operating system has to keep track of all calls to wait and signal on the semaphore.
The main problem with the semaphore definition above is that it requires busy waiting. If a process is in the critical section, then other processes trying to enter the critical section keep waiting until the critical section is free. While waiting, a process continuously checks the semaphore value (the line while (S <= 0); in the P operation) and wastes CPU cycles.
Uses of Semaphores
• Mutual Exclusion : Semaphore ensures that only one process accesses a shared resource
at a time.
• Process Synchronization : Semaphore coordinates the execution order of multiple
processes.
• Resource Management : Limits access to a finite set of resources, like printers, devices,
etc.
• Reader-Writer Problem : Allows multiple readers but restricts the writers until no reader
is present.
• Avoiding Deadlocks : Prevents deadlocks by controlling the order of allocation of
resources.
Advantages of Semaphores
• Semaphore is a simple and effective mechanism for process synchronization
• Supports coordination between multiple processes. By controlling the access to critical
sections, semaphores help in managing multiple processes without them interfering with
each other.
• When used correctly, semaphores can help avoid deadlocks by managing access to
resources efficiently and ensuring that no process is indefinitely blocked from accessing
necessary resources.
• Semaphores help prevent race conditions by ensuring that only one process can access a
shared resource at a time.
• Provides a flexible and robust way to manage shared resources.
Disadvantages of Semaphores
• It can lead to performance degradation due to the overhead associated with wait and signal operations.
• If semaphores are not managed carefully, they can lead to deadlock. This often occurs when
semaphores are not released properly or when processes acquire semaphores in an
inconsistent order.
• It can cause performance issues in a program if not used properly.
• It can be difficult to debug and maintain. Debugging systems that rely heavily on
semaphores can be challenging, as it is hard to track the state of each semaphore and ensure
that all processes are correctly synchronized
• It can be prone to race conditions and other synchronization problems if not used correctly.
• It can be vulnerable to certain types of attacks, such as denial of service attacks.

Deadlock in OS-
Deadlock is a situation where-
• The execution of two or more processes is blocked because each process holds some
resource and waits for another resource held by some other process.
Example-
Consider two processes P1 and P2 and two resources R1 and R2, where-
• Process P1 holds resource R1 and waits for resource R2 which is held by process P2.
• Process P2 holds resource R2 and waits for resource R1 which is held by process P1.
• None of the two processes can complete and release their resource.
• Thus, both the processes keep waiting infinitely.
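This circular wait can be reproduced with two mutexes standing in for R1 and R2 (a sketch; the sleep only widens the window so the deadlock occurs reliably):

/* deadlock.c -- P1 and P2 acquire two locks in opposite order.
   Compile: gcc deadlock.c -pthread  (the program then hangs) */
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* resource R1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* resource R2 */

void *p1(void *arg) {
    pthread_mutex_lock(&r1);      /* P1 holds R1 */
    sleep(1);                     /* give P2 time to grab R2 */
    pthread_mutex_lock(&r2);      /* ... and waits forever for R2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *p2(void *arg) {
    pthread_mutex_lock(&r2);      /* P2 holds R2 */
    sleep(1);
    pthread_mutex_lock(&r1);      /* ... and waits forever for R1 */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);       /* never returns: deadlock */
    pthread_join(t2, NULL);
    puts("finished");             /* never printed */
    return 0;
}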
Conditions For Deadlock-
There are following 4 necessary conditions for the occurrence of deadlock-
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait
1. Mutual Exclusion-
By this condition,
• There must exist at least one resource in the system which can be used by only one process
at a time.
• If there exists no such resource, then deadlock will never occur.
• Printer is an example of a resource that can be used by only one process at a time.
2. Hold and Wait-
By this condition,
• There must exist a process which holds some resource and waits for another resource held
by some other process.
3. No Preemption-
By this condition,
• Once the resource has been allocated to the process, it can not be preempted.
• It means resource can not be snatched forcefully from one process and given to the other
process.
• The process must release the resource voluntarily by itself.
4. Circular Wait-
By this condition,
• All the processes must wait for the resource in a cyclic manner where the last process waits
for the resource held by the first process.
Here,
• Process P1 waits for a resource held by process P2.
• Process P2 waits for a resource held by process P3.
• Process P3 waits for a resource held by process P4.
• Process P4 waits for a resource held by process P1.
Important Note-
• All these 4 conditions must hold simultaneously for the occurrence of deadlock.
• If any of these conditions fail, then the system can be ensured deadlock free.

Deadlock Handling-
The following strategies can be used to handle deadlock-
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance
Deadlock Prevention-
• This strategy involves designing a system that violates one of the four necessary conditions
required for the occurrence of deadlock.
• This ensures that the system remains free from the deadlock.
The various conditions of deadlock occurrence may be violated as-
1. Mutual Exclusion-
• To violate this condition, all the system resources must be such that they can be used in a
shareable mode.
• In a system, there are always some resources which are mutually exclusive by nature.
• So, this condition cannot be violated.
2. Hold and Wait-
This condition can be violated in the following ways-
Approach-01:
• A process has to first request for all the resources it requires for execution.
• Once it has acquired all the resources, only then it can start its execution.
• This approach ensures that the process does not hold some resources and wait for other
resources.
Drawbacks-
The drawbacks of this approach are-
• It is less efficient.
• It is not implementable since it is not possible to predict in advance which resources will
be required during execution.
Approach-02:
In this approach,
• A process is allowed to acquire the resources it desires at the current moment.
• After acquiring the resources, it starts its execution.
• Before making any new request, it has to compulsorily release all the resources that it currently holds.
• This approach is efficient and implementable.
Approach-03:
In this approach,
• A timer is set after the process acquires any resource.
• After the timer expires, a process has to compulsorily release the resource.
3. No Preemption-
• This condition can be violated by forceful preemption.
• Consider a process holding some resources that requests other resources which can not be immediately allocated to it.
• Then, by forcefully preempting the currently held resources, the condition can be violated.
A process is allowed to forcefully preempt the resources possessed by some other process only if-
• It is a high priority process or a system process.
• The victim process is in the waiting state.

4. Circular Wait-
• This condition can be violated by not allowing the processes to wait for resources in a
cyclic manner.
• To violate this condition, the following approach is followed-
Approach-
• A natural number is assigned to every resource.
• Each process may request resources only in increasing (or only in decreasing) order of resource number.
• If increasing order is followed and a process requires a lower-numbered resource, it must first release all the resources having a larger number, and vice versa.
• This approach is the most practical and implementable one.
• However, this approach may cause starvation, but it will never lead to deadlock, as the sketch below illustrates.
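Applied to the two-mutex deadlock sketch above, the fix is simply that both processes acquire the numbered resources in the same increasing order (this fragment reuses the r1 and r2 declarations from that sketch):

/* Both P1 and P2 now lock resource #1 before resource #2, so the
   request edges can never form a cycle. */
void *p_ordered(void *arg) {
    pthread_mutex_lock(&r1);     /* lower-numbered resource first */
    pthread_mutex_lock(&r2);     /* then the higher-numbered one  */
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}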
Deadlock Avoidance-
• This strategy involves maintaining a set of data using which a decision is made whether to
entertain the new request or not.
• If entertaining the new request causes the system to move in an unsafe state, then it is
discarded.
• This strategy requires that every process declares its maximum requirement of each
resource type in the beginning.
• The main challenge with this approach is predicting the requirement of the processes before
execution.
• Banker’s Algorithm is an example of a deadlock avoidance strategy.
Deadlock Detection and Recovery-
• This strategy involves waiting until a deadlock occurs.
• After deadlock occurs, the system state is recovered.
• The main challenge with this approach is detecting the deadlock.
Deadlock Ignorance-
• This strategy involves ignoring the concept of deadlock and assuming as if it does not exist.
• This strategy helps to avoid the extra overhead of handling deadlock.
• Windows and Linux use this strategy and it is the most widely used method.
• It is also called as Ostrich approach.

Banker’s Algorithm-
• Banker’s Algorithm is a deadlock avoidance strategy.
• It is called so because it works the way a banker grants loans: a request is granted only if the bank can still satisfy the maximum needs of all its customers.
Prerequisite-
Banker's Algorithm requires-
• Whenever a new process is created, it must specify the maximum number of instances of each resource type that it may ever need.
Data Structures Used-
To implement banker’s algorithm, following four data structures are used-
1. Available
2. Max
3. Allocation
4. Need

Data
Definition Example
Structure

It is a single dimensional array that Available[R1] = K


Available specifies the number of instances of each It means K instances of resource type R1 are
resource type currently available. currently available.
It is a two-dimensional array that specifies Max[P1][R1] = K
Max the maximum number of instances of each It means process P1 is allowed to ask for
resource type that a process can request. maximum K instances of resource type R1.

It is a two-dimensional array that specifies Allocation[P1][R1] = K


Allocation the number of instances of each resource It means K instances of resource type R1
type that has been allocated to the process. have been allocated to the process P1.

It is a two-dimensional array that specifies Need[P1][R1] = K


Need the number of instances of each resource It means process P1 requires K more
type that a process requires for execution. instances of resource type R1 for execution.

Working-
• Banker’s Algorithm is executed whenever any process puts forward the request for
allocating the resources.
• It involves the following steps-
Step-01:
• Banker’s Algorithm checks whether the request made by the process is valid or not.
• If the request is invalid, it aborts the request.
• If the request is valid, it follows step-02.

Valid Request
A request is considered valid if and only if-
The number of requested instances of each resource type does not exceed the need declared by the process in the beginning.

Step-02:
• Banker’s Algorithm checks if the number of requested instances of each resource type is
less than the number of available instances of each type.
• If the sufficient number of instances are not available, it asks the process to wait longer.
• If the sufficient number of instances are available, it follows step-03.
Step-03:
• Banker’s Algorithm makes an assumption that the requested resources have been allocated
to the process.
• Then, it modifies its data structures accordingly and moves from one state to the other state.
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)
• Now, Banker’s Algorithm follows the safety algorithm to check whether the resulting state
it has entered in is a safe state or not.
• If it is a safe state, then it allocates the requested resources to the process in actual.
• If it is an unsafe state, then it rollbacks to its previous state and asks the process to wait
longer.

Safe State
A system is said to be in safe state when-
All the processes can be executed in some arbitrary sequence with the available number of resources.

Safety Algorithm Data Structures-
To implement the safety algorithm, the following two data structures are used-
1. Work
2. Finish
Work-
• A single dimensional array that specifies the number of instances of each resource type currently available.
• Example: Work[R1] = K means K instances of resource type R1 are currently available.
Finish-
• A single dimensional array that specifies whether each process has finished its execution or not.
• Example: Finish[P1] = 0 (False) means process P1 is still left to execute.

Safety Algorithm-
Safety Algorithm is executed to check whether the resultant state after allocating the resources is
safe or not.
Step-01:
Initially-
• Number of instances of each resource type currently available = Available
• All the processes are to be executed.
• So, in Step-01, the data structures are initialized as-
Work = Available
Finish(i) = False for i = 0, 1, 2, ..., n-1
Step-02:
• Safety Algorithm looks for an unfinished process whose need is less than or equal to work.
• So, Step-02 finds an index i such that-
Finish[ i ] = False
Need(i) <= Work.
• If such a process exists, then step-03 is followed otherwise step-05 is followed.
Step-03:
• After finding the required process, safety algorithm assumes that the requested resources
are allocated to the process.
• The process runs, finishes its execution and the resources allocated to it gets free.
• The resources are then added to the work and finish(i) of that process is set as true.
Work = Work + Allocation
Finish(i) = True
Step-04:
• The loop of Step-02 and Step-03 is repeated.
Step-05:
• If all the processes can be executed in some sequence, then the system is said to be in a
safe state.
• In other words, if Finish(i) becomes True for all i, then the system is in a safe state otherwise
not.
Problem-01:

A single processor system has three resource types X, Y and Z, which are shared by three processes.
There are 5 units of each resource type. Consider the following scenario, where the column alloc
denotes the number of units of each resource type allocated to each process, and the column request
denotes the number of units of each resource type requested by a process in order to complete
execution. Which of these processes will finish LAST?
1. P0
2. P1
3. P2
4. None of the above since the system is in a deadlock

        Alloc       Request
        X  Y  Z     X  Y  Z
P0      1  2  1     1  0  3
P1      2  0  1     0  1  2
P2      2  2  1     1  2  0

Solution-
According to question-
• Total = [ X Y Z ] = [ 5 5 5 ]
• Total _Alloc = [ X Y Z ] = [5 4 3]
Now,
Available
= Total - Total_Alloc
= [ 5 5 5 ] - [ 5 4 3 ]
= [ 0 1 2 ]
Step-01:

• With the instances available currently, only the requirement of the process P1 can be
satisfied.
• So, process P1 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by it.
Then,
Available
= [ 0 1 2 ] + [ 2 0 1 ]
= [ 2 1 3 ]

Step-02:
• With the instances available currently, only the requirement of the process P0 can be
satisfied.
• So, process P0 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by it.
Then-
Available
= [ 2 1 3 ] + [ 1 2 1 ]
= [ 3 3 4 ]
Step-03:
• With the instances available currently, the requirement of the process P2 can be satisfied.
• So, process P2 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by it.
Then-
Available
= [ 3 3 4 ] + [ 2 2 1 ]
= [ 5 5 5 ]
Thus,
• There exists a safe sequence P1, P0, P2 in which all the processes can be executed.
• So, the system is in a safe state.
• Process P2 will be executed at last.
Thus, option (3) is correct.
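The safe-sequence check above can also be verified mechanically. Below is a C sketch of the safety algorithm with the Problem-01 tables hard-coded; here the Request column plays the role of Need, since each process needs exactly those units to complete:

/* safety.c -- safety algorithm on the Problem-01 data (3 processes,
   3 resource types). A sketch; the tables are hard-coded. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* number of processes */
#define R 3   /* number of resource types */

int main(void) {
    int available[R] = {0, 1, 2};                    /* Work = Available */
    int alloc[P][R]  = {{1,2,1},{2,0,1},{2,2,1}};
    int need[P][R]   = {{1,0,3},{0,1,2},{1,2,0}};    /* = Request here */
    bool finish[P]   = {false, false, false};
    int order[P], k = 0;

    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < R; j++)              /* Need(i) <= Work ? */
                if (need[i][j] > available[j]) { ok = false; break; }
            if (!ok) continue;
            for (int j = 0; j < R; j++)              /* Work += Allocation(i) */
                available[j] += alloc[i][j];
            finish[i] = true;
            order[k++] = i;
            progressed = true;
        }
    }
    if (k == P) {
        printf("safe sequence:");
        for (int i = 0; i < P; i++) printf(" P%d", order[i]);
        printf("\n");                                /* prints: P1 P0 P2 */
    } else {
        printf("unsafe state\n");
    }
    return 0;
}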

Resource Allocation Graph-

Resource Allocation Graph (RAG) is a graph that represents the state of a system pictorially.

It gives complete information about the state of a system such as-


• How many processes exist in the system?
• How many instances of each resource type exist?
• How many instances of each resource type are allocated?
• How many instances of each resource type are still available?
• How many instances of each resource type are held by each process?
• How many instances of each resource type does each process need for execution?
Components Of RAG-
There are two major components of a Resource Allocation Graph-
1. Vertices
2. Edges
Vertices-
There are following types of vertices in a Resource Allocation Graph-
1. Process Vertices
2. Resource Vertices
Process Vertices-
• Process vertices represent the processes.
• They are drawn as a circle by mentioning the name of process inside the circle.
Resource Vertices-
• Resource vertices represent the resources.
• Depending on the number of instances that exists in the system, resource vertices may be
single instance or multiple instance.
• They are drawn as a rectangle by mentioning the dots inside the rectangle.
• The number of dots inside the rectangle indicates the number of instances of that resource
existing in the system.
Edges-
There are two types of edges in a Resource Allocation Graph-
1. Assign Edges
2. Request Edges
Assign Edges-
• Assign edges represent the assignment of resources to the processes.
• They are drawn as an arrow whose head points to the process and whose tail starts at the instance of the resource.
Request Edges-
• Request edges represent the waiting state of processes for the resources.
• They are drawn as an arrow whose head points to the instance of the resource and whose tail starts at the process.
• If a process requests 'n' instances of a resource type, then 'n' request edges are drawn.
Example Of RAG-
Consider a Resource Allocation Graph that gives the following information-
• There exist three processes in the system namely P1, P2 and P3.
• There exist two resources in the system namely R1 and R2.
• There exists a single instance of resource R1 and two instances of resource R2.
• Process P1 holds one instance of resource R1 and is waiting for an instance of resource R2.
• Process P2 holds one instance of resource R2 and is waiting for an instance of resource R1.
• Process P3 holds one instance of resource R2 and is not waiting for anything.
Memory Management
Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It decides how much memory is to be allocated to each process and which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status correspondingly.

Contiguous Memory Allocation-
• Contiguous memory allocation is a memory allocation technique.
• It allows a process to be stored only in a contiguous fashion.
• Thus, the entire process has to be stored as a single entity at one place inside the memory.
Techniques-
There are two popular techniques used for contiguous memory allocation-

1. Static Partitioning
2. Dynamic Partitioning
Static Partitioning-
• Static partitioning is a fixed size partitioning scheme.
• In this technique, main memory is pre-divided into fixed size partitions.
• The size of each partition is fixed and can not be changed.
• Each partition is allowed to store only one process.
This is the oldest and simplest technique used to put more than one process in the main memory. In this partitioning, the number of (non-overlapping) partitions in RAM is fixed, but the size of each partition may or may not be the same. As it is contiguous allocation, no spanning is allowed. Partitions are made before execution or during system configuration.
For example, suppose main memory is divided into four fixed partitions of 4 MB, 8 MB, 8 MB and 16 MB, holding processes of 1 MB, 7 MB, 7 MB and 14 MB respectively. The first process consumes only 1 MB out of its 4 MB partition.
Hence, internal fragmentation in the first block is (4-1) = 3 MB.
Sum of internal fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14) = 3+1+1+2 = 7 MB.
Suppose a process P5 of size 7 MB arrives. This process cannot be accommodated in spite of the available free space, because of contiguous allocation (spanning is not allowed). Hence, 7 MB becomes part of external fragmentation.

Advantages of Fixed Partitioning


• Easy to implement: The algorithms required are simple and straightforward.
• Low overhead: Requires minimal system resources to manage, ideal for resource-
constrained systems.
• Predictable: Memory allocation is predictable, with each process receiving a fixed
partition.
• No external fragmentation: Since the memory is divided into fixed partitions and
no spanning is allowed, external fragmentation is avoided.
• Suitable for systems with a fixed number of processes: Ideal for systems where
the number of processes and their memory requirements are known in advance.
• Prevents process interference: Ensures that processes do not interfere with each
other’s memory, improving system stability.
• Efficient memory use: Particularly in systems with fixed, known processes
and batch processing scenarios.
• Good for batch processing: Works well in environments where the number of
processes remains constant over time.
• Better control over memory allocation: The operating system has clear control
over how memory is allocated and managed.
• Easy to debug: Fixed Partitioning is easy to debug since the size and location of
each process are predetermined.

Disadvantages of Fixed Partitioning


1. Internal Fragmentation: Main memory use is inefficient. Any program, no matter
how small, occupies an entire partition. This can cause internal fragmentation.
2. Limit on process size: A process larger than the largest partition in main memory cannot be accommodated, since the partition size cannot be varied according to the size of the incoming process. Hence, a process of size 32 MB is invalid in the above example, where the largest partition is 16 MB.
3. Limitation on degree of multiprogramming: Partitions in main memory are made before execution or during system configuration, so main memory is divided into a fixed number of partitions. If there are n1 partitions in RAM and n2 processes, then n2 <= n1 must hold. A number of processes greater than the number of partitions in RAM is invalid in fixed partitioning.

Dynamic Partitioning-
• Dynamic partitioning is a variable size partitioning scheme.
• It performs the allocation dynamically.
• When a process arrives, a partition of size equal to the size of process is created.
• Then, that partition is allocated to the process.
Partition Allocation Algorithms-
• The processes arrive and leave the main memory.
• As a result, holes of different size are created in the main memory.
• These holes are allocated to the processes that arrive in future.

Partition allocation algorithms are used to decide which hole should be allocated to the
arrived process.

Popular partition allocation algorithms are-


1. First Fit Algorithm
2. Best Fit Algorithm
3. Worst Fit Algorithm
1. First Fit Algorithm-
This algorithm starts scanning the partitions serially from the starting.
• When an empty partition that is big enough to store the process is found, it is allocated to
the process.
• Obviously, the partition size has to be greater than or at least equal to the process size.
2. Best Fit Algorithm-
• This algorithm first scans all the empty partitions.
• It then allocates the smallest size partition to the process.
3. Worst Fit Algorithm-
• This algorithm first scans all the empty partitions.
• It then allocates the largest size partition to the process.
Important Points-
Point-01:
For static partitioning,
• Best Fit Algorithm works best.
• This is because space left after the allocation inside the partition is of very small size.
• Thus, internal fragmentation is least.
Point-02:
For static partitioning,
• Worst Fit Algorithm works worst.
• This is because space left after the allocation inside the partition is of very large size.
• Thus, internal fragmentation is maximum.

Internal Fragmentation
• It occurs when space is left inside a partition after allocating the partition to a process.
• This space is called internally fragmented space.
• This space can not be allocated to any other process, because static partitioning allows only one process per partition.
• Internal fragmentation occurs only in static partitioning.
External Fragmentation
• It occurs when the total amount of empty space required to store a process is available in the main memory.
• But because the space is not contiguous, the process can not be stored.

Non-Contiguous Memory Allocation-
Non-contiguous memory allocation is a memory allocation technique.
• It allows parts of a single process to be stored in a non-contiguous fashion.
• Thus, different parts of the same process can be stored at different places in the main memory.
Techniques-
There are two popular techniques used for non-contiguous memory allocation
1. Paging
2. Segmentation
Paging-

• Paging is a fixed size partitioning scheme.


• In paging, secondary memory and main memory are divided into equal fixed size partitions.
• The partitions of secondary memory are called as pages.
• The partitions of main memory are called as frames.

• Each process is divided into parts where size of each part is same as page size.
• The size of the last part may be less than the page size.
• The pages of process are stored in the frames of main memory depending upon their
availability.
Example-
Consider a process divided into 4 pages P0, P1, P2 and P3.
• Depending upon availability, these pages may be stored in the main memory frames in a non-contiguous fashion.
Translating Logical Address into Physical Address-
• CPU always generates a logical address.
• A physical address is needed to access the main memory.
Following steps are followed to translate logical address into physical address-
Step-01:
CPU generates a logical address consisting of two parts-
1. Page Number
2. Page Offset

• Page Number specifies the specific page of the process from which CPU wants to read the
data.
• Page Offset specifies the specific word on the page that CPU wants to read.
Step-02:
For the page number generated by the CPU,
• Page Table provides the corresponding frame number (base address of the frame) where
that page is stored in the main memory.
Step-03:
• The frame number combined with the page offset forms the required physical address.
• Frame number specifies the specific frame where the required page is stored.
• Page Offset specifies the specific word that has to be read from that page.
Diagram-
The following diagram illustrates the above steps of translating logical address into
physical address-
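A minimal C sketch of the translation, assuming a 1 KB page size and a small hard-coded page table (all values are illustrative):

#include <stdio.h>

#define PAGE_SIZE 1024

int page_table[4] = {5, 2, 7, 0};            /* page number -> frame number */

unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* page number  */
    unsigned offset = logical % PAGE_SIZE;   /* page offset  */
    unsigned frame  = page_table[page];      /* page-table lookup */
    return frame * PAGE_SIZE + offset;       /* frame number + offset */
}

int main(void) {
    unsigned la = 2 * PAGE_SIZE + 100;       /* word 100 on page 2 */
    printf("logical %u -> physical %u\n", la, translate(la));
    return 0;                                /* page 2 is in frame 7 */
}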

Advantages-
The advantages of paging are-
• It allows parts of a single process to be stored in a non-contiguous fashion.
• It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of paging are-
• It suffers from internal fragmentation.
• There is an overhead of maintaining a page table for each process.
• The time taken to fetch the instruction increases since now two memory accesses are
required.

Page Table-
• Page table is a data structure.
• It maps the page number referenced by the CPU to the frame number where that page is
stored.
Characteristics-
• Page table is stored in the main memory.
• Number of entries in a page table = Number of pages in which the process is divided.
• Page Table Base Register (PTBR) contains the base address of page table.
• Each process has its own independent page table.
Working-

• Page Table Base Register (PTBR) provides the base address of the page table.
• The base address of the page table is added with the page number referenced by the CPU.
• It gives the entry of the page table containing the frame number where the referenced page
is stored.

Page Table Entry-
A page table entry contains several pieces of information about the page.
• The information contained in the page table entry varies from operating system to operating system.
• The most important information in a page table entry is the frame number.
In general, each entry of a page table contains the following information-

1. Frame Number-
• Frame number specifies the frame where the page is stored in the main memory.
• The number of bits in frame number depends on the number of frames in the main memory.
2. Present / Absent Bit-
• This bit is also sometimes called as valid / invalid bit.
• This bit specifies whether that page is present in the main memory or not.
• If the page is not present in the main memory, then this bit is set to 0 otherwise set to 1.

NOTE
• If the required page is not present in the main memory, then it is called as Page Fault.
• A page fault requires page initialization.
• The required page has to be initialized (fetched) from the secondary memory and brought
into the main memory.

3. Protection Bit-
• This bit is also sometimes called as “Read / Write bit“.
• This bit is concerned with the page protection.
• It specifies the permission to perform read and write operation on the page.
• If only read operation is allowed to be performed and no writing is allowed, then this bit is
set to 0.
• If both read and write operation are allowed to be performed, then this bit is set to 1
4. Reference Bit-
• Reference bit specifies whether that page has been referenced in the last clock cycle or not.
• If the page has been referenced recently, then this bit is set to 1 otherwise set to 0.

NOTE
• Reference bit is useful for page replacement policy.
• A page that has not been referenced recently is considered a good candidate for page
replacement in LRU page replacement policy.

5. Caching Enabled / Disabled-


• This bit enables or disables the caching of page.
• Whenever freshness in the data is required, then caching is disabled using this bit.
• If caching of the page is disabled, then this bit is set to 1 otherwise set to 0.
6. Dirty Bit-
• This bit is also sometimes called as “Modified bit“.
• This bit specifies whether that page has been modified or not.
• If the page has been modified, then this bit is set to 1 otherwise set to 0.

NOTE
In case the page is modified,
• Before replacing the modified page with some other page, it has to be written back
in the secondary memory to avoid losing the data.
• Dirty bit helps to avoid unnecessary writes.
• This is because if the page is not modified, then it can be directly replaced by another
page without any need of writing it back to the disk.

Disadvantage Of Paging-
One major disadvantage of paging is-
• It increases the effective access time due to increased number of memory accesses.
• One memory access is required to get the frame number from the page table.
• Another memory access is required to get the word from the page.
Translation Lookaside Buffer-
• Translation Lookaside Buffer (TLB) is a solution that tries to reduce the effective access time.
• Being hardware, the TLB has a much lower access time than the main memory.
Structure-
Translation Lookaside Buffer (TLB) consists of two columns-
1. Page Number
2. Frame Number

Translating Logical Address into Physical Address-


In a paging scheme using TLB,
The logical address generated by the CPU is translated into the physical address using following
steps-
Step-01:
CPU generates a logical address consisting of two parts-
1. Page Number
2. Page Offset
Step-02:
• TLB is checked to see if it contains an entry for the referenced page number.
• The referenced page number is compared with the TLB entries all at once.
Now, two cases are possible-
Case-01: If there is a TLB hit-
• If TLB contains an entry for the referenced page number, a TLB hit occurs.
• In this case, TLB entry is used to get the corresponding frame number for the referenced
page number.
Case-02: If there is a TLB miss-
• If TLB does not contain an entry for the referenced page number, a TLB miss occurs.
• In this case, page table is used to get the corresponding frame number for the referenced
page number.
• Then, TLB is updated with the page number and frame number for future references.
Step-03:
• After the frame number is obtained, it is combined with the page offset to generate the
physical address.
• Then, physical address is used to read the required word from the main memory
Diagram-
The following diagram illustrates the above steps of translating logical address into physical
address-

Flowchart-
The following flowchart illustrates the above steps of translating logical address into physical
address-
Important Points-
Point-01:
• Unlike page table, there exists only one TLB in the system.
• So, whenever context switching occurs, the entire content of TLB is flushed and deleted.
• TLB is then again updated with the currently running process.
Point-02:
When a new process gets scheduled-
• Initially, TLB is empty, so TLB misses are frequent.
• With every access from the page table, TLB is updated.
• After some time, TLB hits increase and TLB misses reduce.
Point-03:
The time taken to update TLB after getting the frame number from the page table is negligible.
• Also, TLB is updated in parallel while fetching the word from the main memory.
Advantages-
The advantages of using TLB are-
• TLB reduces the effective access time.
• Only one memory access is required when TLB hit occurs.
Disadvantages-
A major disadvantage of using TLB is-
• After the process has run for some time, when TLB hits increase and the process starts to run smoothly, a context switch occurs.
• The entire content of the TLB is flushed.
• Then, TLB is again updated with the currently running process.
This happens again and again.
Other disadvantages are-
• TLB can hold the data of only one process at a time.
• When context switches occur frequently, the performance of TLB degrades due to low hit
ratio.
• As it is a special hardware, it involves additional cost.

Multilevel Paging-

Multilevel paging is a paging scheme where there exists a hierarchy of page tables.

Need –
The need for multilevel paging arises when-
• The size of page table is greater than the frame size.
• As a result, the page table can not be stored in a single frame in main memory.
Working-
In multilevel paging,
• The page table having size greater than the frame size is divided into several parts.
• The size of each part is same as frame size except possibly the last part.
• The pages of page table are then stored in different frames of the main memory.
• To keep track of the frames storing the pages of the divided page table, another page table
is maintained.
• As a result, the hierarchy of page tables get generated.
• Multilevel paging is done till the level is reached where the entire page table can be stored
in a single frame.

Important Points-
• At any level, the page table entry size of any page table is always the same, because each entry points to a frame number.
• When there is only one level of paging, there is only one page table, whose size is less than or equal to the page size.
• All the page tables are completely filled except possibly the last one.

Page Replacement Algorithm

Page replacement algorithms are the techniques by which an operating system decides which memory page to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and no free frame can be used for the allocation, either because no frames are available or because the number of free frames is lower than required.
When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select which pages should be replaced so as to minimize the total number of page faults, while balancing this against the cost in primary storage and processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String
The string of memory references is called the reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things.
• For a given page size, we need to consider only the page number, not the entire address.
• If we have a reference to a page p, then any immediately following references to page p will never cause a page fault: page p will be in memory after the first reference, so the immediately following references do not fault.
• For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96.
• If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.

First In First Out (FIFO) algorithm


• Oldest page in main memory is the one which will be selected for replacement.
• Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
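A small C sketch that counts page faults under FIFO replacement, using the reference string derived above and an illustrative 3 frames:

#include <stdio.h>

int main(void) {
    int ref[] = {1, 2, 6, 12, 0, 0};         /* reference string from above */
    int n     = sizeof ref / sizeof ref[0];
    int frames[3], head = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)       /* is the page already loaded? */
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < 3) {
            frames[used++] = ref[i];         /* a free frame is available */
        } else {
            frames[head] = ref[i];           /* evict the oldest page */
            head = (head + 1) % 3;
        }
    }
    printf("page faults: %d\n", faults);     /* 5 for this string */
    return 0;
}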

Optimal Page algorithm


• An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an algorithm exists and has been called OPT or MIN.
• Replace the page that will not be used for the longest period of time: it uses knowledge of the time when each page is to be used next.
Least Recently Used (LRU) algorithm
• Page which has not been used for the longest time in main memory is the one which will
be selected for replacement.
• Easy to implement, keep a list, replace pages by looking back into time.

Segmentation
• Like Paging, Segmentation is another non-contiguous memory allocation technique.
• In segmentation, process is not divided blindly into fixed size pages.
• Rather, the process is divided into modules for better visualization.
Characteristics-
• Segmentation is a variable size partitioning scheme.
• In segmentation, secondary memory and main memory are divided into partitions of unequal size.
• The size of each partition depends on the length of the corresponding module.
• The partitions of secondary memory are called segments.
Example-
Consider a program is divided into 5 segments as-

Segment Table-
• Segment table is a table that stores the information about each segment of the process.
• It has two columns.
• First column stores the size or length of the segment.
• Second column stores the base address or starting address of the segment in the main
memory.
• Segment table is stored as a separate segment in the main memory.
• Segment table base register (STBR) stores the base address of the segment table.
For the above illustration, consider the segment table is-
Here,
• Limit indicates the length or size of the segment.
• Base indicates the base address or starting address of the segment in the main memory.
In accordance to the above segment table, the segments are stored in the main memory as-

Translating Logical Address into Physical Address-


• CPU always generates a logical address.
• A physical address is needed to access the main memory.
Following steps are followed to translate logical address into physical address-
Step-01:
CPU generates a logical address consisting of two parts-
1. Segment Number
2. Segment Offset
• Segment Number specifies the specific segment of the process from which CPU wants to
read the data.
• Segment Offset specifies the specific word in the segment that CPU wants to read.
Step-02:
• For the generated segment number, corresponding entry is located in the segment table.
• Then, segment offset is compared with the limit (size) of the segment.
Now, two cases are possible-
Case-01: Segment Offset >= Limit
• If the segment offset is found to be greater than or equal to the limit, a trap is generated, since a valid segment offset must always lie in the range [0, limit-1].
Case-02: Segment Offset < Limit
• If the segment offset is found to be smaller than the limit, the request is treated as a valid request.
• The segment offset is then added to the base address of the segment.
• The result obtained after the addition is the address of the memory location storing the required word.
Diagram-
The following diagram illustrates the above steps of translating logical address into physical
address-
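A minimal C sketch of the limit check and translation; the segment table values are illustrative:

#include <stdio.h>
#include <stdlib.h>

struct entry { unsigned limit, base; };           /* one segment table row */

struct entry seg_table[3] = {
    {1500, 1500}, {500, 6300}, {400, 4300}        /* illustrative values */
};

unsigned translate(unsigned seg, unsigned offset) {
    if (offset >= seg_table[seg].limit) {         /* Case-01: trap */
        fprintf(stderr, "trap: offset out of range\n");
        exit(1);
    }
    return seg_table[seg].base + offset;          /* Case-02: base + offset */
}

int main(void) {
    printf("segment 1, offset 53 -> %u\n", translate(1, 53));  /* 6353 */
    return 0;
}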
Advantages-
The advantages of segmentation are-
• It allows the program to be divided into modules, which provides better visualization.
• The segment table consumes less space as compared to the page table in paging.
• It solves the problem of internal fragmentation.
Disadvantages-
The disadvantages of segmentation are-
• There is an overhead of maintaining a segment table for each process.
• The time taken to fetch the instruction increases since now two memory accesses are
required.
• Segments of unequal size are not suited for swapping.
• It suffers from external fragmentation as the free space gets broken down into smaller
pieces with the processes being loaded and removed from the main memory.
Segmented Paging-

Segmented paging is a scheme that implements the combination of segmentation and paging.

Working-
In segmented paging,
• Process is first divided into segments and then each segment is divided into pages.
• These pages are then stored in the frames of main memory.
• A page table exists for each segment that keeps track of the frames storing the pages of that
segment.
• Each page table occupies one frame in the main memory.
• Number of entries in the page table of a segment = number of pages into which that segment is divided.
• A segment table exists that keeps track of the frames storing the page tables of the segments.
• Number of entries in the segment table of a process = number of segments into which that process is divided.
• The base address of the segment table is stored in the segment table base register.
Translating Logical Address into Physical Address-
• CPU always generates a logical address.
• A physical address is needed to access the main memory.
Following steps are followed to translate logical address into physical address-
Step-01:
CPU generates a logical address consisting of three parts-
1. Segment Number
2. Page Number
3. Page Offset
• Segment Number specifies the specific segment from which CPU wants to read the data.
• Page Number specifies the specific page of that segment from which CPU wants to read
the data.
• Page Offset specifies the specific word on that page that CPU wants to read.
Step-02:
• For the generated segment number, corresponding entry is located in the segment table.
• Segment table provides the frame number of the frame storing the page table of the referred
segment.
• The frame containing the page table is located.
Step-03:
• For the generated page number, corresponding entry is located in the page table.
• Page table provides the frame number of the frame storing the required page of the referred
segment.
• The frame containing the required page is located.
Step-04:
• The frame number combined with the page offset forms the required physical address.
• For the generated page offset, corresponding word is located in the page and read.

Diagram-
The following diagram illustrates the above steps of translating logical address into physical
address-
Advantages-
The advantages of segmented paging are-
• Segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of Page Table is limited by the segment size.
• It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of segmented paging are-
• Segmented paging suffers from internal fragmentation.
• The complexity level is much higher as compared to paging.
