OS Material Unit-2

This document covers process synchronization and deadlocks in operating systems, focusing on the critical section problem, semaphores, and classical synchronization problems like the producer-consumer and readers-writers problems. It explains key concepts such as mutual exclusion, progress, and bounded waiting, along with solutions like Peterson's algorithm and monitors. Additionally, it discusses deadlocks, their characterization, and methods for handling them, including prevention, avoidance, detection, and recovery.
UNIT- II

Process Synchronization: The Critical Section Problem, Semaphores, and Classical Problems of Synchronization, Critical Regions, Monitors, Synchronization Examples.

Deadlocks: Principles of Deadlocks, System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Avoidance, Detection & Recovery from Deadlocks.
******************

Introduction:
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of
other processes.
 Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises with cooperative processes, because resources are shared among them.
Process synchronization means coordinating the execution of processes so that no two processes simultaneously access the same shared resources and data.
CRITICAL SECTION PROBLEM:
Consider a system consisting of n processes {P1, P2, …, Pn}. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes execute in their critical sections at the same time.
The critical-section problem is to design a protocol that ensures a race condition among the processes can never arise. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
In the entry section, the process requests permission to enter its critical section. Any solution to the critical-section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing in
their remainder section can participate in deciding which will enter in the critical
section next, and the selection cannot be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
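The entry-section / exit-section structure described above can be sketched in Python, using a lock as a stand-in for an entry/exit protocol (the lock, thread count, and iteration count here are illustrative assumptions, not part of the original text):

```python
import threading

counter = 0                # shared data modified in the critical section
lock = threading.Lock()    # stand-in for an entry/exit-section protocol

def process():
    global counter
    for _ in range(10000):
        lock.acquire()     # entry section: request permission to enter
        counter += 1       # critical section: modify shared data
        lock.release()     # exit section: let another process enter
        # remainder section: unrelated work would go here

threads = [threading.Thread(target=process) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: mutual exclusion kept the updates consistent
```

Without the entry/exit sections, the two threads could interleave their updates and lose increments; with them, mutual exclusion holds and the final count is exact.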

PETERSON’S SOLUTION FOR CRITICAL SECTION PROBLEM


Peterson’s solution provides a good algorithmic description of solving the critical-section
problem and illustrates some of the complexities involved in designing software that
addresses the requirements of mutual exclusion, progress, and bounded waiting.

Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s solution, we have two shared variables:
1. boolean flag[i] - Initialized to FALSE, initially no one is interested in entering the
critical section
2. int turn - The process whose turn is to enter the critical section.
Peterson’s Solution preserves all three conditions:
 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s solution:
 It involves busy waiting. (In Peterson’s solution, the statement “while(flag[j] && turn == j);” is responsible for this. Busy waiting is not favoured because it wastes CPU cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution is not guaranteed to work on modern CPU architectures, because compilers and processors may reorder memory reads and writes.
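A minimal sketch of Peterson’s algorithm for two threads, written in Python for illustration (the shared counter, iteration count, and switch-interval tuning are assumptions added here; on real hardware a faithful version would also need memory barriers, which is exactly the architectural limitation noted above):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # shorten the thread quantum so busy-wait handoffs are quick

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # whose turn it is when both want to enter
counter = 0            # shared data protected by the algorithm

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(5000):
        flag[i] = True
        turn = j                      # politely give the other process priority
        while flag[j] and turn == j:  # busy wait (the loop criticised above)
            pass
        counter += 1                  # critical section
        flag[i] = False               # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 10000 if mutual exclusion held
```

The spinning `while` loop is the busy waiting discussed above: a waiting thread burns CPU time until the other thread leaves its critical section.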
*************
SEMAPHORES
Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, used for process synchronization. For mutual exclusion, the semaphore S is initialized to 1, and the critical section of a process is placed between the wait and signal operations. A process using a semaphore is structured as follows:

Process execution with its critical section:

do
{
// Some Code
wait(S);
// Critical Section
signal(S);
// Remainder Section
} while(true);
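The classical definitions of wait (also called P) and signal (V) can be sketched as follows. This busy-waiting version is a simplification for illustration; the internal lock merely stands in for the hardware atomicity of the real operations, and practical implementations block the caller on a queue instead of spinning:

```python
import threading

class SpinSemaphore:
    """Busy-waiting (spinlock-style) semaphore sketch."""
    def __init__(self, value=1):
        self.value = value
        self._lock = threading.Lock()   # makes test-and-decrement atomic

    def wait(self):
        while True:                     # busy wait while S <= 0
            with self._lock:
                if self.value > 0:
                    self.value -= 1     # S = S - 1
                    return

    def signal(self):
        with self._lock:
            self.value += 1             # S = S + 1

s = SpinSemaphore(1)
s.wait()        # enter the critical section
s.signal()      # leave the critical section
print(s.value)  # back to 1
```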
Types of Semaphores: There are two main types of semaphores i.e. counting semaphores
and binary semaphores. Details about these are given as follows
1. Counting Semaphores: These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. If resources are added, the semaphore count is incremented, and if resources are removed, the count is decremented.
2. Binary Semaphores: Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore's value is 1 (setting it to 0), and the signal operation sets the value back to 1. It is sometimes easier to implement binary semaphores than counting semaphores.
Advantages of Semaphores: Some of the advantages of semaphores are as follows :
 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
 When semaphores block the waiting process instead of spinning, there is no resource wastage due to busy waiting, as processor time is not wasted repeatedly checking whether a condition is fulfilled before allowing a process to access the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So
they are machine independent.
Disadvantages of Semaphores: Some of the disadvantages of semaphores are as follows:
 Semaphores are complicated so the wait and signal operations must be implemented
in the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to loss of modularity.
This happens because the wait and signal operations prevent the creation of a
structured layout for the system.
 Semaphores may lead to a priority inversion where low priority processes may access
the critical section first and high priority processes later.
********************

PRODUCER - CONSUMER PROBLEM


Producer Consumer Problem Statement: We have a buffer of fixed size. A producer can produce an item and place it in the buffer. A consumer can pick items and consume them. We need to ensure that while the producer is placing an item in the buffer, the consumer is not simultaneously consuming an item. In this problem, the buffer is the critical section.
To solve this problem, we need two counting semaphores, Full and Empty, and a binary semaphore mutex.
 Full keeps track of the number of items in the buffer at any given time.
 Empty keeps track of the number of unoccupied slots.
Initialization of semaphores:
mutex = 1
Full = 0 // Initially, all slots are empty; thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer
do
{
// produce an item
wait(empty);
wait(mutex);
// place item in buffer
signal(mutex);
signal(full);
} while(true);

When the producer produces an item, the value of “empty” is reduced by 1 because one slot will now be filled. The value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the item, the value of “full” is increased by 1, and the value of mutex is increased by 1 because the producer's task is complete and the consumer may access the buffer.
Solution for Consumer
do
{
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
// consume item
} while(true);
As the consumer removes an item from the buffer, the value of “full” is reduced by 1 and the value of mutex is also reduced so that the producer cannot access the buffer at this moment. Once the consumer has consumed the item, the value of “empty” is increased by 1, and the value of mutex is increased so that the producer can access the buffer again.
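The two loops above translate almost line-for-line into Python's threading primitives; the buffer size, item count, and names below are illustrative choices, not part of the original text:

```python
import threading
from collections import deque

N = 5                            # buffer size (assumed for the example)
buffer = deque()
mutex = threading.Semaphore(1)   # binary semaphore guarding the buffer
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
consumed = []

def producer():
    for item in range(20):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # place item in buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        item = buffer.popleft()  # remove item from buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)
        consumed.append(item)    # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))  # True: every item arrived, in order
```

Note how `empty`/`full` block the producer on a full buffer and the consumer on an empty one, while `mutex` protects the buffer itself.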
**************
READERS - WRITERS PROBLEM
Readers-Writers Problem Statement: Consider a situation where we have a file shared between
many people.
 If one of the people tries editing the file, no other person should be reading or writing at
the same time, otherwise changes will not be visible to him/her.
 However, if some person is reading the file, then others may read it at the same time.
In operating systems, this situation is called the readers-writers problem. Problem parameters:
 One set of data is shared among a number of processes

 Once a writer is ready, it performs its write. Only one writer may write at a time

 If a process is writing, no other process can read it

 If at least one reader is reading, no other process can write

 Readers may not write and only read

Solution when Reader has the Priority over Writer


Here priority means that no reader should wait if the shared file is currently open for reading.
Three variables are used: mutex, wrt, readcnt to implement solution
1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion when
readcnt is updated i.e. when any reader enters or exit from the critical section and
semaphore wrt is used by both readers and writers

2. int readcnt; // readcnt tells the number of processes performing read in the critical
section, initially 0
Functions for semaphore:
 wait () : decrements the semaphore value.
 signal () : increments the semaphore value.
Writer process:
 Writer requests the entry to critical section.

 If allowed, i.e. wait(wrt) succeeds, it enters and performs the write. If not allowed, it keeps on waiting.

 It exits the critical section.

do
{
// writer requests for critical section
wait(wrt);
// performs the write
// leaves the critical section
signal(wrt);
}
while(true);
Reader process:
1. Reader requests the entry to critical section.

2. If allowed:

 it increments the count of number of readers inside the critical section. If this reader
is the first reader entering, it locks the wrt semaphore to restrict the entry of writers if
any reader is inside.

 It then signals mutex, as other readers are allowed to enter while it is reading.
 After performing reading, it exits the critical section. When exiting, it checks if no
more reader is inside, it signals the semaphore “wrt” as now, writer can enter the
critical section.

3. If not allowed, it keeps on waiting.


do
{
// Reader wants to enter the critical section
wait(mutex);
// The number of readers has now increased by 1
readcnt++;
// there is at least one reader in the critical section
// this ensure no writer can enter if there is even one reader
// thus we give preference to readers here
if (readcnt==1)
wait(wrt);
// other readers can enter while this current reader is inside
// the critical section
signal(mutex);
// current reader performs reading here
wait(mutex); // a reader wants to leave
readcnt--;
// that is, no reader is left in the critical section,
if (readcnt == 0)
signal(wrt); // writers can enter
signal(mutex); // reader leaves
} while(true);
Thus, the semaphore ‘wrt‘ is used by both readers and writers in a manner that gives preference to readers when writers are also waiting. No reader waits simply because a writer has requested to enter the critical section.
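The reader and writer loops above can be run directly with Python semaphores. The thread counts, the `count_lock`, and the `active_readers`/`violations` bookkeeping are illustrative additions used here to confirm that a writer never overlaps a reader:

```python
import threading

mutex = threading.Semaphore(1)      # protects readcnt
wrt = threading.Semaphore(1)        # held by the first reader in, or by a writer
count_lock = threading.Lock()       # keeps the active_readers tally accurate
readcnt = 0
active_readers = 0
violations = 0                      # writer/reader overlaps observed (should stay 0)

def reader():
    global readcnt, active_readers
    for _ in range(50):
        mutex.acquire()
        readcnt += 1
        if readcnt == 1:
            wrt.acquire()           # first reader locks out writers
        mutex.release()
        with count_lock:
            active_readers += 1     # --- reading happens here ---
        with count_lock:
            active_readers -= 1
        mutex.acquire()
        readcnt -= 1
        if readcnt == 0:
            wrt.release()           # last reader lets writers in
        mutex.release()

def writer():
    global violations
    for _ in range(50):
        wrt.acquire()               # --- writing happens here ---
        if active_readers != 0:     # a reader inside now would be a violation
            violations += 1
        wrt.release()

threads = [threading.Thread(target=reader) for _ in range(3)]
threads += [threading.Thread(target=writer) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(violations)  # 0: no writer ever ran while a reader was inside
```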
***************

Critical Regions:
In an operating system, a critical region refers to a section of code or a data structure that must be accessed exclusively by one process or thread at a time. Critical regions are used to prevent concurrent access to shared resources, such as variables, data structures, or devices, in order to maintain data integrity and avoid race conditions.
The concept of critical regions is closely tied to the need for synchronization and mutual exclusion in multi-threaded or multi-process environments. Without proper synchronization mechanisms, concurrent access to shared resources can lead to data inconsistencies, unpredictable behaviour, and errors.
To implement mutual exclusion and protect critical regions, operating systems provide synchronization mechanisms such as locks, semaphores, or monitors. These mechanisms ensure that only one process or thread can access the critical region at any given time, while other processes or threads are prevented from entering until the current occupant releases the lock.

Critical Region Characteristics and Requirements


Following are the characteristics and requirements for critical regions in an operating system.
1. Mutual Exclusion
Only one process or thread can access the critical region at a time. This ensures that concurrent access does not result in data corruption or inconsistent states.
2. Atomicity
The execution of code within a critical region is treated as an indivisible unit of execution. Once a process or thread enters a critical region, it completes its execution without interruption.
3. Synchronization
Processes or threads waiting to enter a critical region are synchronized to prevent simultaneous access. They typically employ synchronization primitives, such as locks or semaphores, to control access and enforce mutual exclusion.
4. Minimal Time Spent in Critical Regions
It is preferable to minimize the time spent inside critical regions, to reduce the potential for contention and improve system performance. Lengthy execution within critical regions increases the waiting time for other processes or threads.
***********
Monitors in Process Synchronization
Monitors are a higher-level synchronization construct that simplifies process
synchronization by providing a high-level abstraction for data access and
synchronization. Monitors are implemented as programming language constructs,
typically in object-oriented languages, and provide mutual exclusion, condition
variables, and data encapsulation in a single construct.

1. A monitor is essentially a module that encapsulates a shared resource and


provides access to that resource through a set of procedures. The
procedures provided by a monitor ensure that only one process can access
the shared resource at any given time, and that processes waiting for the
resource are suspended until it becomes available.
2. Monitors are used to simplify the implementation of concurrent programs
by providing a higher-level abstraction that hides the details of
synchronization. Monitors provide a structured way of sharing data and
synchronization information, and eliminate the need for complex
synchronization primitives such as semaphores and locks.
3. The key advantage of using monitors for process synchronization is that
they provide a simple, high-level abstraction that can be used to
implement complex concurrent systems. Monitors also ensure that
synchronization is encapsulated within the module, making it easier to
reason about the correctness of the system.
However, monitors have some limitations. For example, they can be less efficient
than lower-level synchronization primitives such as semaphores and locks, as they
may involve additional overhead due to their higher-level abstraction. Additionally,
monitors may not be suitable for all types of synchronization problems, and in some
cases, lower-level primitives may be required for optimal performance.
The monitor is one of the ways to achieve Process synchronization. The monitor is
supported by programming languages to achieve mutual exclusion between
processes. For example, Java Synchronized methods. Java provides wait() and
notify() constructs.

1. It is the collection of condition variables and procedures combined


together in a special kind of module or a package.
2. The processes running outside the monitor can’t access the internal
variable of the monitor but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.

Condition Variables: Two different operations are performed on the condition variables of the monitor:
 Wait
 Signal

Let us say we have two condition variables:
condition x, y; // declaring condition variables

Wait operation, x.wait(): A process performing a wait operation on a condition variable is suspended. The suspended process is placed in the block queue of that condition variable. Note: each condition variable has its own block queue.
Signal operation, x.signal(): When a process performs a signal operation on a condition variable, one of the blocked processes is given a chance:

if (x block queue is empty)
// ignore the signal
else
// resume a process from the block queue
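Python's threading.Condition behaves much like a monitor condition variable (a monitor lock plus wait/notify on a block queue). The one-slot buffer below is an illustrative sketch, not taken from the original text:

```python
import threading

class OneSlotBuffer:
    """Monitor-style one-slot buffer: a lock plus a condition variable."""
    def __init__(self):
        self._cond = threading.Condition()  # monitor lock + block queue
        self._item = None

    def put(self, item):
        with self._cond:                    # enter the monitor
            while self._item is not None:
                self._cond.wait()           # x.wait(): suspend in the block queue
            self._item = item
            self._cond.notify_all()         # x.signal(): resume blocked processes

    def get(self):
        with self._cond:                    # enter the monitor
            while self._item is None:
                self._cond.wait()
            item, self._item = self._item, None
            self._cond.notify_all()
            return item

buf = OneSlotBuffer()
results = []
t = threading.Thread(target=lambda: results.append(buf.get()))
t.start()          # consumer blocks on the empty buffer
buf.put("hello")   # producer fills the slot and signals
t.join()
print(results)  # ['hello']
```

The `with self._cond:` blocks enforce the monitor rule that only one process at a time executes inside the module.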

Advantages of Monitor: Monitors have the advantage of making parallel programming easier and less error prone than using techniques such as semaphores.
Disadvantages of Monitor: Monitors have to be implemented as part of the
programming language.
*************
DEADLOCKS
Introduction:
A system consists of a finite number of resources to be distributed among a number of
computing processes. Every process needs some resources to complete its execution. Under
the normal mode of operation, a process may utilize a resource in only the following
sequence.
1. Request: The process requests some resource. The OS grants the resource if it is available; otherwise the process waits.

2. Use: If the OS allocates the resource, the process can operate on it.

3. Release: The process releases the resource on completion.


DEADLOCK CHARACTERIZATION
Definition: A set of processes is in a deadlocked state when every process in the set is waiting for a resource that is held by another process in the set.
In this situation, none of the processes can proceed, since the resource each one needs is held by some other process that is itself waiting for yet another resource to be released.
 Let us assume that there are three processes P1, P2 and P3.
 There are three different resources R1, R2 and R3.
 R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
 After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it cannot complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its execution. P3 then demands R1, which is being used by P1, so P3 also stops its execution.

Necessary conditions for Deadlocks


A Dead lock situation can arise if the following four conditions hold simultaneously in a
system
1. Mutual Exclusion: A resource can only be used in a mutually exclusive manner. That is, two processes cannot use the same resource at the same time.

2. Hold and Wait: A process waits for some resources while holding another resource at the
same time.

3. No pre-emption: A resource cannot be forcibly taken away from a process; it is released only voluntarily by the process holding it, once that process has finished with it.

4. Circular Wait: All the processes must be waiting for the resources in a cyclic manner so
that the last process is waiting for the resource which is being held by the first process.
RESOURCE ALLOCATION GRAPH
The resource allocation graph is a pictorial representation of the state of a system. As its name suggests, the resource allocation graph gives complete information about all the processes that are holding some resources or waiting for some resources.
It also contains the information about all the instances of all the resources whether they are
available or being used by the processes.
In Resource allocation graph, the process is represented by a Circle while the Resource is
represented by a rectangle. A resource can have more than one instance. Each instance will be
represented by a dot inside the rectangle.
Edges in a resource allocation graph are also of two types: one represents assignment, and the other represents a process waiting for a resource. A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the resource and the head is attached to the process. A process is shown as waiting for a resource if the tail of the arrow is attached to the process while the head points towards the resource.
METHODS OF HANDLING DEADLOCKS
Deadlock Prevention: This is done by restraining the ways a request can be made. Since
deadlock occurs when all the above four conditions are met, we try to prevent any one of
them, thus preventing a deadlock.
Deadlock Avoidance: When a process requests a resource, the deadlock avoidance algorithm examines the resource-allocation state. If allocating that resource would send the system into an unsafe state, the request is not granted.
Therefore, it requires additional information, such as how many resources of each type are required by a process. If the system enters an unsafe state, it has to take a step back to avoid deadlock.
Deadlock Detection and Recovery: We let the system fall into a deadlock and if it happens,
we detect it using a detection algorithm and try to recover.
Some ways of recovery are as follows.
 Aborting all the deadlocked processes.

 Abort one process at a time until the system recovers from the deadlock.

 Resource Pre-emption: Resources are taken one by one from a process and assigned
to higher priority processes until the deadlock is resolved.
Deadlock Ignorance: In this method, the system assumes that deadlock never occurs. Since deadlock situations are infrequent, some systems simply ignore them. Operating systems such as UNIX and Windows follow this approach. However, if a deadlock occurs, we can reboot the system and the deadlock is resolved.
DEADLOCK PREVENTION
Let us take the example of a chair, which always stands on its four legs. If any one leg of the chair breaks, it will certainly fall. The situation is the same with deadlock: if we can violate any one of the four conditions and not let them all occur together, then deadlock can be prevented.
1. Eliminate Mutual Exclusion: This condition must hold for non-sharable resources. For
example, a printer cannot be simultaneously shared by several processes. In contrast,
Sharable resources do not require mutually exclusive access and thus cannot be involved in a
deadlock. A good example of a sharable resource is Read-only files because if several
processes attempt to open a read-only file at the same time, then they can be granted
simultaneous access to the file.
2. Eliminate Hold and Wait: Hold and wait condition occurs when a process holds a
resource and is also waiting for some other resource in order to complete its execution. Thus,
if we did not want the occurrence of this condition then we must guarantee that when a process
requests a resource, it does not hold any other resource.

There are some protocols that can be used in order to ensure that the Hold and Wait condition
never occurs:
 First protocol: Each process must request and be allocated all its resources before the beginning of its execution.

 The second protocol allows a process to request resources only when it does not
occupy any resource.

3. Eliminate No Pre-emption: The third necessary condition for deadlocks is that there
should be no pre-emption of resources that have already been allocated. In order to ensure
that this condition does not hold the following protocols can be used:
 First Protocol: "If a process that is already holding some resources requests another
resource and if the requested resources cannot be allocated to it, then it must release
all the resources currently allocated to it."

 Second Protocol: "When a process requests some resources, if they are available, then allocate them. If the requested resource is not available, then we check whether it is allocated to some other process that is itself waiting for further resources. If so, the operating system pre-empts it from the waiting process and allocates it to the requesting process. If the resource is being actively used, the requesting process must wait."

4. Eliminate Circular Wait: The fourth necessary condition for deadlock is circular wait. To violate this condition, we can do the following:
Assign a priority number to each resource, with the rule that a process cannot request a resource with a lower priority number than one it already holds. This ordering ensures that resources are always requested in increasing order, so no cycle of waiting processes can be formed.
Example: Assume that R5 resource is allocated to P1, if next time P1 asks for R4, R3 that are
lesser than R5; then such request will not be granted. Only the request for resources that are
more than R5 will be granted.
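The resource-ordering rule can be sketched by always acquiring locks in increasing order of their assigned number. The three numbered locks and the two opposing threads below are an assumed illustration; without the sorting step, the two threads could deadlock by grabbing resources 1 and 3 in opposite orders:

```python
import threading

# Assign each resource a fixed priority number (the dict key).
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def use_resources(a, b, work):
    # Always request the lower-numbered resource first, so no cycle can form.
    first, second = sorted((a, b))
    with locks[first]:
        with locks[second]:
            work()

done = []
t1 = threading.Thread(target=use_resources, args=(3, 1, lambda: done.append("t1")))
t2 = threading.Thread(target=use_resources, args=(1, 3, lambda: done.append("t2")))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2']: both finish; circular wait is impossible
```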
******************

DEADLOCK AVOIDANCE
The deadlock avoidance method is used by the operating system to check whether the system is in a safe state or an unsafe state. To avoid deadlocks, each process must tell the operating system the maximum number of resources of each type it may request in order to complete its execution.
In this method, the request for any resource will be granted only if the resulting state of the
system doesn't cause any deadlock in the system. This method checks every step performed
by the operating system. Any process continues its execution until the system is in a safe
state. Once the system enters into an unsafe state, the operating system has to take a step
back.
With the help of a deadlock-avoidance algorithm, you can dynamically assess the resource-
allocation state so that there can never be a circular-wait situation.
According to the simplest and most useful approach, each process declares the maximum number of resources of each type it will need. The deadlock-avoidance algorithms then examine resource allocations so that a circular wait condition can never occur.
Deadlock avoidance can mainly be done with the help of Banker's Algorithm.
Safe State and Unsafe State
A state is safe if the system can allocate resources to each process (up to its maximum requirement) in some order and still avoid a deadlock. Formally, a system is in a safe state only if there exists a safe sequence. A safe state is not a deadlocked state; conversely, a deadlocked state is an unsafe state.
In an Unsafe state, the operating system cannot prevent processes from requesting resources
in such a way that any deadlock occurs. It is not necessary that all unsafe states are
deadlocks; an unsafe state may lead to a deadlock.
Deadlock Avoidance Example
Let us consider a system having 12 magnetic tapes and three processes P1, P2, P3. Process P1 requires 10 magnetic tapes, process P2 may need as many as 4 tapes, and process P3 may need up to 9 tapes. Suppose that at time t0, process P1 is holding 5 tapes, process P2 is holding 2 tapes, and process P3 is holding 2 tapes. (There are 3 free magnetic tapes.)

So at time t0, the system is in a safe state. The sequence <P2, P1, P3> satisfies the safety condition: process P2 can immediately be allocated all its tape drives and then return them.
After the return the system will have 5 available tapes, then process P1 can get all its tapes
and return them (the system will then have 10 tapes); finally, process P3 can get all its tapes
and return them (The system will then have 12 available tapes).
A system can go from a safe state to an unsafe state. Suppose at time t1, process P3 requests
and is allocated one more tape. The system is no longer in a safe state. At this point, only
process P2 can be allocated all its tapes. When it returns them the system will then have only
4 available tapes. Since P1 is allocated five tapes but has a maximum of ten so it may request
5 more tapes. If it does so, it will have to wait because they are unavailable. Similarly,
process P3 may request its additional 6 tapes and have to wait which then results in a
deadlock.
The mistake was granting the request from P3 for one more tape. If we had made P3 wait until one of the other processes had finished and released its resources, we could have avoided the deadlock.
Note: In a case, if the system is unable to fulfill the request of all processes then the state of
the system is called unsafe. The banker’s algorithm is a resource allocation and deadlock
avoidance algorithm that tests for safety by simulating the allocation for predetermined
maximum possible amounts of all resources, then makes an “s-state” check to test for
possible activities, before deciding whether allocation should be allowed to continue.
**********

BANKER'S ALGORITHM
Banker's algorithm is a deadlock avoidance algorithm. It is named so because this
algorithm is used in banking systems to determine whether a loan can be granted or not.
Consider a bank with N account holders whose combined balance is S. Whenever a loan is requested, the bank checks whether, after granting it, the cash remaining on hand would still be enough to satisfy the maximum possible withdrawals of all N account holders in some order. Only if this is the case does the bank grant the loan, because only then can it be sure of serving every customer even in the worst case.
The characteristics of Banker's algorithm are as follows:
 If any process requests a resource, it may have to wait.
 The algorithm works with the maximum resource requirement declared in advance by each process.
 There are limited resources in the system.
 If a process is granted all the resources it needs, it must return them within a finite period.
 Resources are maintained so that the needs of at least one process can always be fulfilled.
Banker’s algorithm comprises two algorithms:
1. Safety algorithm: checks whether or not the system is in a safe state, i.e. whether a safe sequence exists.
2. Resource request algorithm: checks how the system will behave when a process makes a resource request, expressed as a request matrix.
EXAMPLE FOR BANKER’S ALGORITHM
Let us consider the following snapshot for understanding the Banker's algorithm:
1. Calculate the content of the Need matrix.
2. Check whether the system is in a safe state.
3. Determine the total amount of each type of resource.
Solution:
1. The content of the Need matrix can be calculated using the formula:
Need = Max – Allocation

2. Let us now check for the safe state.


Safe sequence:
1. For process P0, Need = (3, 2, 1) and Available = (2, 1, 0).
Need <= Available is false, so the system moves to the next process.
2. For process P1, Need = (1, 1, 0) and Available = (2, 1, 0).
Need <= Available is true, so the request of P1 is granted.
Available = Available + Allocation = (2, 1, 0) + (2, 1, 2) = (4, 2, 2) (new Available)
3. For process P2, Need = (5, 0, 1) and Available = (4, 2, 2).
Need <= Available is false, so the system moves to the next process.
4. For process P3, Need = (7, 3, 3) and Available = (4, 2, 2).
Need <= Available is false, so the system moves to the next process.
5. For process P4, Need = (0, 0, 0) and Available = (4, 2, 2).
Need <= Available is true, so the request of P4 is granted.
Available = Available + Allocation = (4, 2, 2) + (1, 1, 2) = (5, 3, 4) (new Available)
6. Checking process P2 again: Need = (5, 0, 1) and Available = (5, 3, 4).
Need <= Available is true, so the request of P2 is granted.
Available = Available + Allocation = (5, 3, 4) + (4, 0, 1) = (9, 3, 5) (new Available)
7. Checking process P3 again: Need = (7, 3, 3) and Available = (9, 3, 5).
Need <= Available is true, so the request of P3 is granted.
Available = Available + Allocation = (9, 3, 5) + (0, 2, 0) = (9, 5, 5)
8. Checking process P0 again: Need = (3, 2, 1) and Available = (9, 5, 5).
Need <= Available is true, so the request of P0 is granted.
Safe sequence: <P1, P4, P2, P3, P0>
 The system can allocate all needed resources to each process.
 So, we can say that the system is in a safe state.

3. The total amount of resources can be calculated by the following formula:
The total amount of resources = sum of the columns of Allocation + Available
= [8 5 7] + [2 1 0] = [10 6 7]
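The safety check above can be automated as a sketch. The snapshot table itself was not reproduced in the text, so the Allocation and Need values below are taken from the worked steps (P0's allocation is inferred from the column totals and should be treated as an assumption). The scan order here is the standard lowest-index-first one, so the safe sequence it finds differs from the one above; both are valid safe sequences:

```python
# Values reconstructed from the worked example (P0's allocation inferred).
alloc = [(1, 1, 2), (2, 1, 2), (4, 0, 1), (0, 2, 0), (1, 1, 2)]
need  = [(3, 2, 1), (1, 1, 0), (5, 0, 1), (7, 3, 3), (0, 0, 0)]
avail = [2, 1, 0]

def is_safe(alloc, need, avail):
    """Banker's safety algorithm: find a safe sequence if one exists."""
    work = list(avail)
    finish = [False] * len(alloc)
    sequence = []
    while len(sequence) < len(alloc):
        for i in range(len(alloc)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and return its resources.
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, sequence  # no runnable process found: unsafe
    return True, sequence

safe, seq = is_safe(alloc, need, avail)
print(safe, ["P%d" % i for i in seq])  # True ['P1', 'P0', 'P2', 'P3', 'P4']
```

After granting P1, the new Available (4, 2, 2) already covers P0's need (3, 2, 1), which is why this scan schedules P0 earlier than the hand-worked trace does.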
*********************

DEADLOCK DETECTION AND RECOVERY


The OS periodically checks the system for deadlock. If it finds one, the OS recovers the system using a recovery technique. The OS can detect deadlocks with the help of the resource allocation graph.
DEADLOCK DETECTION:
1. If resources have a single instance: For single-instance resource types, if a cycle is formed in the resource allocation graph, then there is definitely a deadlock.
For example, suppose resource R1 and resource R2 each have a single instance, and the graph contains the cycle R1 → P1 → R2 → P2 → R1. Deadlock is then confirmed.

2. If there are multiple instances of resources: In a graph with multiple-instance resource types, detecting a cycle is not enough. We have to apply the safety algorithm to the system, by converting the resource allocation graph into an allocation matrix and a request matrix.
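For single-instance resources, deadlock detection reduces to cycle detection in the resource allocation graph. The sketch below runs a depth-first search over the single-instance example's cycle; the edge set is assumed from the description, with the closing edge P2 → R1 included:

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph given as adjacency lists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                 # node is on the current DFS path
        for nxt in graph.get(node, ()):
            if color[nxt] == GRAY:         # back edge: a cycle exists
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK                # fully explored, no cycle through it
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# Request edge: process -> resource; assignment edge: resource -> process.
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))  # True: the cycle R1 -> P1 -> R2 -> P2 -> R1 means deadlock
```

Removing any one edge from `rag` breaks the cycle, and the same function then reports that no deadlock exists.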

DEADLOCK RECOVERY:
In order to recover the system from deadlock, the OS acts on either processes or resources.
 Killing processes:
 Kill all the processes involved in the deadlock, or
 Kill the processes one by one: after killing each process, check for deadlock again, and keep repeating until the system recovers from the deadlock.
 Resource Pre-emption:
Resources are pre-empted from the processes involved in the deadlock and allocated to other processes, so that the system may recover from the deadlock. In this case, the pre-empted processes may suffer starvation.
