Chapter 4 OSY Notes

Chapter 4 discusses CPU scheduling and algorithms, covering scheduling types, objectives, and various algorithms such as FCFS, SJF, and Priority Scheduling. It explains the importance of scheduling in maximizing CPU utilization and minimizing waiting time, as well as the differences between preemptive and non-preemptive scheduling. Additionally, it addresses deadlock conditions and handling methods.


Chapter 4: CPU Scheduling and Algorithms

4.1 Scheduling types: Scheduling objectives, CPU and I/O Burst Cycle, Preemptive, Non-preemptive scheduling, Scheduling criteria
4.2 Types of Scheduling algorithms: FCFS, SJF, SRTN, RR, Priority Scheduling, Multilevel queue scheduling
4.3 Deadlock: System models, Necessary conditions leading to deadlock, Deadlock Handling - prevention, avoidance
---------------------------------------------------------------------------------------------------------------
4.1 Scheduling types:

Why do we need scheduling?

• In a single-processor system, only one process can run at a time; any others must wait
until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The idea is relatively simple. A process is executed until it
must wait, typically for the completion of some I/O request.
• In a simple computer system, the CPU then just sits idle. All this waiting time is wasted;
no useful work is accomplished. With multiprogramming, we try to use this time
productively. Several processes are kept in memory at one time.
• When one process has to wait, the operating system takes the CPU away from that
process and gives the CPU to another process. This pattern continues. Every time one
process has to wait, another process can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function. Almost all
computer resources are scheduled before use. The CPU is, of course, one of the primary
computer resources. Thus, its scheduling is central to operating-system design.

1|Page Sushama Pawar


Objectives of Process Scheduling Algorithm
1. Max CPU utilization [Keep CPU as busy as possible]
2. Fair allocation of CPU.
3. Max throughput [Number of processes that complete their execution per time unit]
4. Min turnaround time [Time taken by a process to finish execution]
5. Min waiting time [Time a process waits in ready queue]
6. Min response time [Time when a process produces first response]

CPU-I/O Burst Cycle

The success of CPU scheduling depends on an observed property of processes:


process execution consists of a cycle of CPU execution and I/0 wait.

CPU Burst: It is the amount of time for which a process uses the CPU, i.e., the time the process needs on the CPU to finish its current stretch of computation. We cannot know this time before running the process, so much of the scheduling problem revolves around the burst time.

Burst Time = Turnaround Time – Waiting Time



I/O Burst: While the process is in the running state, it may request I/O; the process then moves to the blocked (wait) state, where the I/O is serviced, after which the process is sent back to the ready state.

Processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.

---------------------------------------------------------------------------------------------------------------
Preemptive Scheduling

Preemptive scheduling means that even after a process has started its execution, the currently running process can be paused to handle another process of higher priority; that is, control of the CPU can be preempted from one process and given to another when required.

A computer system implementing this supports multitasking, as it gives the user the impression of working on multiple processes at once.

It is practical because if a process of higher priority arrives, the current process can be paused to handle it.

Examples: - SRTF, Priority, Round Robin, etc.

---------------------------------------------------------------------------------------------------------------

Non-Preemptive Scheduling

Non-preemptive scheduling means that once a process starts its execution, or the CPU is processing a specific process, it cannot be halted; in other words, we cannot preempt (take control of) the CPU from the running process and give it to another.

A computer system implementing this cannot execute processes in a multitasking fashion; it executes all the processes sequentially.

It is less practical, because processes are not all of the same priority and are not always known to the system in advance.



Examples:- FCFS, SJF, Priority, etc.

Difference between Preemptive and Non-Preemptive Scheduling in OS

1. Preemptive: The processor can be preempted to execute a different process in the middle of the execution of the current process.
   Non-Preemptive: Once the processor starts to execute a process, it must finish it before executing another; it cannot be paused in the middle.

2. Preemptive: CPU utilization is higher than in non-preemptive scheduling.
   Non-Preemptive: CPU utilization is lower than in preemptive scheduling.

3. Preemptive: Waiting time and response time are shorter.
   Non-Preemptive: Waiting time and response time are longer.

4. Preemptive: Scheduling is prioritized; the highest-priority process should always be the one currently using the CPU.
   Non-Preemptive: When a process enters the running state, it is not removed from the scheduler until it finishes its service time.

5. Preemptive: If high-priority processes keep arriving in the ready queue, a low-priority process may starve.
   Non-Preemptive: If a process with a long burst time is using the CPU, a process with a smaller burst time may starve.

6. Preemptive: Flexible.
   Non-Preemptive: Rigid.

7. Preemptive examples: SRTF, Priority, Round Robin, etc.
   Non-Preemptive examples: FCFS, SJF, Priority, etc.

---------------------------------------------------------------------------------------------------------------

Scheduling criteria:
Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU-scheduling algorithms.
The criteria include the following:



CPU utilization:
• We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range
from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily used system).

Throughput:
• If the CPU is busy executing processes, then work is being done. One measure of work
is the number of processes that are completed per time unit, called throughput.
• For long processes, this rate may be one process per hour; for short transactions, it may
be ten processes per second.

Turnaround time:
• From the point of view of a particular process, the important criterion is how long it
takes to execute that process.
• The interval from the time of submission of a process to the time of completion is the
turnaround time.
• Turnaround time is the sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and doing I/O.

Waiting time:
• The CPU-scheduling algorithm does not affect the amount of time during which a
process executes or does I/O; it affects only the amount of time that a process spends
waiting in the ready queue.
• Waiting time is the sum of the periods spent waiting in the ready queue.
Response time:
• In an interactive system, turnaround time may not be the best criterion.
• Often, a process can produce some output fairly early and can continue computing new
results while previous results are being output to the user.
• Thus, another measure is the time from the submission of a request until the first
response is produced.
• This measure, called response time, is the time it takes to start responding, not the time
it takes to output the response.
• The turnaround time is generally limited by the speed of the output device.



4.2 Types of scheduling algorithms:

FCFS Scheduling-

In FCFS Scheduling,
• The process which arrives first in the ready queue is assigned the CPU first.
• In case of a tie, process with smaller process id is executed first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue.

Advantages-

• It is simple and easy to understand.


• It can be easily implemented using queue data structure.
• It does not lead to starvation.

Disadvantages-

• It does not consider the priority or burst time of the processes.


• It suffers from convoy effect.

Convoy Effect
In the convoy effect,
• Processes with higher burst times arrive before processes with smaller burst times.
• The smaller processes then have to wait a long time for the longer processes to
release the CPU.

Example:

(The process table and Gantt chart appeared as a figure; their values are recovered from the calculations below.)

Process    Arrival Time    Burst Time
P1         0 ms            5 ms
P2         0 ms            24 ms
P3         0 ms            16 ms
P4         0 ms            10 ms
P5         0 ms            3 ms

Gantt chart: | P1 | P2 | P3 | P4 | P5 |
             0    5    29   45   55   58

Consider the above set of processes that arrive at time zero. The length of the CPU burst
time is given in milliseconds.
Now we calculate the average waiting time, average turnaround time and throughput.

Average Turnaround Time

First of all, we have to calculate the turnaround time of each process.

Turnaround Time = Completion Time – Arrival Time

Turnaround time of
P1 = 5 - 0 = 5ms
P2 = 29 - 0 = 29ms
P3 = 45 - 0 = 45ms
P4 = 55 - 0 = 55ms
P5 = 58 - 0 = 58ms

Average Turnaround Time = (Total Turnaround Time / Total Number of Process)

Total Turnaround Time = (5 + 29 + 45 + 55 + 58)ms = 192ms

Therefore, Average Turnaround Time = (192 / 5)ms = 38.4ms



Average Waiting Time

First of all, we have to calculate the waiting time of each process.

Waiting Time = Turnaround Time – Burst Time

Waiting time of
P1 = 5 – 5 = 0 ms
P2 = 29 - 24 = 5 ms
P3 = 45 – 16 = 29 ms
P4 = 55 - 10 = 45 ms
P5 = 58 - 3 = 55 ms

Average Waiting Time = Waiting Time of all Processes / Total Number of Process

Therefore, average waiting time = (0 + 5 + 29 + 45 + 55) / 5 = 134 / 5 = 26.8 ms

Throughput

Here, we have a total of five processes. Processes P1, P2, P3, P4, and P5 take 5 ms, 24 ms, 16 ms,
10 ms, and 3 ms to execute, respectively.

Throughput = (5 + 24 + 16 + 10 + 3) / 5 = 11.6 ms

That is, on average one process completes every 11.6 ms.
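As a sanity check, the FCFS calculation above can be sketched in a few lines of Python (a minimal illustration assuming all processes arrive at time 0, as in this example; the function name `fcfs` is ours):

```python
# FCFS scheduling sketch: processes run to completion in arrival order.
def fcfs(bursts):
    """Return (turnaround_times, waiting_times), assuming all arrivals at t=0."""
    time = 0
    turnaround, waiting = [], []
    for burst in bursts:
        time += burst                 # completion time of this process
        turnaround.append(time)       # arrival is 0, so TAT = completion time
        waiting.append(time - burst)  # WT = TAT - burst time
    return turnaround, waiting

tat, wt = fcfs([5, 24, 16, 10, 3])    # P1..P5 from the example above
print(sum(tat) / len(tat))            # average turnaround time -> 38.4
print(sum(wt) / len(wt))              # average waiting time    -> 26.8
```

Running it reproduces the completion times 5, 29, 45, 55, 58 used in the hand calculation.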

---------------------------------------------------------------------------------------------------------------

SJF Scheduling-

In SJF Scheduling,
• Out of all the available processes, CPU is assigned to the process having smallest burst
time.
• In case of a tie, it is broken by FCFS Scheduling.



SJF Scheduling can be used in both preemptive and non-preemptive mode.
• The preemptive mode of Shortest Job First is called Shortest Remaining Time First
(SRTF).

Advantages-

• SRTF is optimal and guarantees the minimum average waiting time.


• It provides a standard for other algorithms since no other algorithm performs better than
it.

Disadvantages-

• It cannot be implemented practically since burst time of the processes cannot be known
in advance.
• It leads to starvation for processes with larger burst time.
• Priorities cannot be set for the processes.
• Processes with larger burst time have poor response time.
Example: SJF (Shortest Job First) Scheduling

(Same set of processes as the FCFS example: all arrive at time 0, with burst times P1 = 5 ms, P2 = 24 ms, P3 = 16 ms, P4 = 10 ms, P5 = 3 ms. The Gantt chart appeared as a figure; it is recovered from the calculations below.)

Gantt chart: | P5 | P1 | P4 | P3 | P2 |
             0    3    8    18   34   58

Average Turnaround Time

First of all, we have to calculate the turnaround time of each process.

Turnaround Time = Completion Time – Arrival Time

Turnaround Time of

P1 = 8 - 0 = 8ms
P2 = 58 - 0 = 58ms
P3 = 34 - 0 = 34ms
P4 = 18 - 0 = 18ms
P5 = 3 - 0 = 3ms

Average Turnaround Time = (Total Turnaround Time / Total Number of Process)

Therefore, Average Turnaround Time = (8 + 58 + 34 + 18 + 3) / 5 = 24.2ms

Average Waiting Time

Waiting Time = Turnaround Time – Burst Time

Waiting Time for


P1 = 8 - 5 = 3ms
P2 = 58 - 24 = 34ms
P3 = 34 – 16 = 18ms
P4 = 18 - 10 = 8ms
P5 = 3 – 3 = 0ms

Average Waiting Time = Waiting Time of all Processes / Total Number of Process
Therefore, Average Waiting Time = (3 + 34 + 18 + 8 + 0) / 5 = 12.6ms

Throughput
Here, we have a total of five processes. Process P1, P2, P3, P4, and P5 takes 5ms, 24ms,
16ms, 10ms, and 3ms to execute respectively. Therefore, Throughput will be same as
above problem i.e., 11.6ms for each process.
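The non-preemptive SJF computation can be sketched as follows (a minimal illustration assuming all processes arrive at time 0, with ties broken by process index, i.e., FCFS; the function name `sjf` is ours):

```python
# Non-preemptive SJF sketch: serve processes in order of increasing burst time.
def sjf(bursts):
    """Return {pid: (turnaround, waiting)}; pids are 0-based indices."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    time, result = 0, {}
    for i in order:
        time += bursts[i]
        result[i] = (time, time - bursts[i])  # arrival 0 -> TAT = completion
    return result

res = sjf([5, 24, 16, 10, 3])             # P1..P5 as indices 0..4
avg_tat = sum(t for t, _ in res.values()) / len(res)
avg_wt  = sum(w for _, w in res.values()) / len(res)
print(avg_tat, avg_wt)                    # 24.2 12.6
```

Python's `sorted` is stable, so equal burst times keep their arrival (index) order, matching the FCFS tie-break rule stated above.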
---------------------------------------------------------------------------------------------------------------

Shortest Remaining Time First (SRTF)

(The process table and Gantt chart appeared as a figure; their values are recovered from the calculations below.)

Process    Arrival Time    Burst Time
P1         0 ms            3 ms
P2         2 ms            6 ms
P3         4 ms            4 ms
P4         6 ms            5 ms
P5         8 ms            2 ms

Gantt chart: | P1 | P2 | P3 | P5 | P2 | P4 |
             0    3    4    8    10   15   20

Average Turnaround Time

First of all, we have to calculate the turnaround time of each process.

Turnaround Time = Completion Time – Arrival Time

Turnaround Time of

P1 = 3 - 0 = 3ms
P2 = 15 - 2 = 13ms
P3 = 8 - 4 = 4ms
P4 = 20 - 6 = 14ms
P5 = 10 - 8 = 2ms

Average Turnaround Time = (Total Turnaround Time / Total Number of Process)

Therefore, Average Turnaround Time = (3 + 13 + 4 + 14 + 2) / 5 = 7.2ms

Average Waiting Time

Waiting Time = Turnaround Time – Burst Time

Waiting Time for


P1 = 3 - 3 = 0ms
P2 = 13 - 6 = 7ms
P3 = 4 – 4 = 0ms
P4 = 14 - 5 = 9ms
P5 = 2 – 2 = 0ms

Average Waiting Time = Waiting Time of all Processes / Total Number of Process
Therefore, Average Waiting Time = (0 + 7 + 0 + 9 + 0) / 5 = 3.2ms
Throughput

Throughput = (3 + 6 + 4 + 5 + 2) / 5 = 4ms

Therefore, on average one process completes every 4 ms.
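SRTF can be simulated one time unit at a time: at each tick, run the arrived, unfinished process with the smallest remaining time. A minimal sketch (the function name `srtf` is ours; ties go to the lower-indexed process, matching the FCFS tie-break):

```python
# SRTF (preemptive SJF) sketch: step the clock one unit at a time and always
# pick the ready process with the smallest remaining burst.
def srtf(arrival, burst):
    n = len(burst)
    remaining = list(burst)
    completion = [0] * n
    time, done = 0, 0
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            completion[i] = time
            done += 1
    tat = [completion[i] - arrival[i] for i in range(n)]
    wt  = [tat[i] - burst[i] for i in range(n)]
    return tat, wt

tat, wt = srtf([0, 2, 4, 6, 8], [3, 6, 4, 5, 2])  # the SRTF example above
print(sum(tat) / 5, sum(wt) / 5)                  # 7.2 3.2
```

The per-unit loop is simple but O(total burst time); it is meant for checking small examples by hand, not as an efficient scheduler.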

---------------------------------------------------------------------------------------------------------------

Priority Scheduling-
In Priority Scheduling,
• Out of all the available processes, CPU is assigned to the process having the highest
priority.
• In case of a tie, it is broken by FCFS Scheduling.

• Priority Scheduling can be used in both preemptive and non-preemptive mode.

Advantages-

• It considers the priority of the processes and allows the important processes to run first.
• Priority scheduling in preemptive mode is best suited for real time operating system.

Disadvantages-

• Processes with lesser priority may starve for CPU.


• There is no guarantee on the waiting time and response time of lower-priority processes.

Important Notes-

Note-01:

• The waiting time for the process having the highest priority will always be zero in
preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-
preemptive mode.

Note-02:

Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the
following conditions:
• The arrival time of all the processes is the same.
• All the processes become available at the same time.

Priority Scheduling Example (Non-Preemptive Mode)

(The process table and Gantt chart appeared as a figure; their values are recovered from the calculations below. All processes arrive at time 0, with burst times P1 = 6 ms, P2 = 12 ms, P3 = 1 ms, P4 = 3 ms, P5 = 4 ms, and priorities such that P4 is highest, then P1, P5, P2, P3.)

Gantt chart: | P4 | P1 | P5 | P2 | P3 |
             0    3    9    13   25   26

Average Turnaround Time

First of all, we have to calculate the turnaround time of each process.

Turnaround Time = Completion Time – Arrival Time

Turnaround Time of

P1 = 9 - 0 = 9ms
P2 = 25 - 0 = 25ms
P3 = 26 - 0 = 26ms
P4 = 3 - 0 = 3ms
P5 = 13 - 0 = 13ms

Average Turnaround Time = (Total Turnaround Time / Total Number of Process)

Therefore, Average Turnaround Time = (9 + 25 + 26 + 3 + 13) / 5 = 15.2ms

Average Waiting Time

Waiting Time = Turnaround Time – Burst Time

Waiting Time for


P1 = 9 - 6 = 3ms
P2 = 25 - 12 = 15ms
P3 = 26 – 1 = 25ms
P4 = 3 - 3 = 0ms
P5 = 13 – 4 = 9ms

Average Waiting Time = Waiting Time of all Processes / Total Number of Process

Therefore, Average Waiting Time = (3 + 15 + 25 + 0 + 9) / 5 = 10ms

Throughput

Throughput = (6 + 12 + 1 + 3 + 4) / 5 = 5.2ms

Therefore, on average one process completes every 5.2 ms.

---------------------------------------------------------------------------------------------------------------

Priority Scheduling Example (Preemptive Mode)

Process    Priority    Arrival Time    Burst Time
P1         10          0               5
P2         20          1               4
P3         30          2               2
P4         40          4               1

(Here a larger priority number means higher priority.)

Gantt chart: | P1 | P2 | P3 | P4 | P2 | P1 |
             0    1    2    4    5    8    12

Turnaround Time = Completion Time – Arrival Time

Turnaround Time of

P1 = 12 - 0 = 12ms
P2 = 8 - 1 = 7ms
P3 = 4 - 2 = 2ms
P4 = 5 - 4 = 1ms

Average Turnaround Time = (Total Turnaround Time / Total Number of Process)


Therefore, Average Turnaround Time = (12 + 7 + 2 + 1) / 4 = 5.5 ms

Average Waiting Time

Waiting Time = Turnaround Time – Burst Time

Waiting Time for


P1 = 12 - 5 = 7ms
P2 = 7 - 4 = 3ms
P3 = 2 – 2 = 0ms
P4 = 1 - 1 = 0ms

Average Waiting Time = Waiting Time of all Processes / Total Number of Process

Therefore, Average Waiting Time = (7 + 3 + 0 + 0) / 4 = 2.5ms

Throughput

Throughput = (5 + 4 + 2 + 1) / 4 = 3ms

Therefore, on average one process completes every 3 ms.
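Preemptive priority scheduling can be simulated with the same per-time-unit approach as SRTF, except that the highest priority wins instead of the shortest remaining time (a minimal sketch; the function name is ours, and a larger number means higher priority, as in the example above):

```python
# Preemptive priority sketch: each tick, run the arrived, unfinished process
# with the highest priority number.
def preemptive_priority(arrival, burst, priority):
    n = len(burst)
    remaining = list(burst)
    completion = [0] * n
    time, done = 0, 0
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = max(ready, key=lambda j: priority[j])  # highest priority wins
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            completion[i] = time
            done += 1
    tat = [completion[i] - arrival[i] for i in range(n)]
    wt  = [tat[i] - burst[i] for i in range(n)]
    return tat, wt

tat, wt = preemptive_priority([0, 1, 2, 4], [5, 4, 2, 1], [10, 20, 30, 40])
print(sum(tat) / 4, sum(wt) / 4)   # 5.5 2.5
```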

---------------------------------------------------------------------------------------------------------------
Round Robin Scheduling-
In Round Robin Scheduling,
• CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
• This fixed amount of time is called as time quantum or time slice.
• After the time quantum expires, the running process is preempted and sent to the ready
queue.
• Then, the processor is assigned to the next arrived process.
• It is always preemptive in nature.

Round Robin Scheduling is FCFS Scheduling with preemptive mode.

Advantages-
• It gives the best performance in terms of average response time.

• It is best suited for time sharing system, client server architecture and interactive
system.
Disadvantages-

• It leads to starvation for processes with larger burst time as they have to repeat the cycle
many times.
• Its performance heavily depends on time quantum.
• Priorities cannot be set for the processes.

Important Notes-

Note-01:

With decreasing value of time quantum,

• the number of context switches increases,
• response time decreases, and
• the chance of starvation decreases.

Thus, a smaller time quantum is better in terms of response time.

Note-02:

With increasing value of time quantum,

• the number of context switches decreases,
• response time increases, and
• the chance of starvation increases.

Thus, a larger time quantum is better in terms of the number of context switches.

Note-03:

• With increasing value of time quantum, Round Robin Scheduling tends to become
FCFS Scheduling.
• When time quantum tends to infinity, Round Robin Scheduling becomes FCFS
Scheduling.

Note-04:

• The performance of Round Robin scheduling heavily depends on the value of time
quantum.
• The value of time quantum should be such that it is neither too big nor too small.

Round Robin Scheduling Example

Here is the Round Robin scheduling example with gantt chart. Time Quantum is 5ms.

Turnaround Time = Completion Time – Arrival Time

Turnaround Time of

P1 = 44 - 0 = 44ms
P2 = 21 - 0 = 21ms
P3 = 24 - 0 = 24ms

Average Turnaround Time = (Total Turnaround Time / Total Number of Process)

Therefore, Average Turnaround Time = (44 + 21 + 24) / 3 ≈ 29.67ms

Average Waiting Time

Waiting Time = Turnaround Time – Burst Time

Waiting Time for
P1 = 44 - 30 = 14ms
P2 = 21 - 6 = 15ms
P3 = 24 – 8 = 16ms

Average Waiting Time = Waiting Time of all Processes / Total Number of Process

Therefore, Average Waiting Time = (14 + 15 + 16) / 3 = 15ms

Throughput

Throughput = (30 + 6 + 8) / 3 ≈ 14.67ms

Therefore, on average one process completes every 14.67 ms.
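Round Robin is naturally expressed with a FIFO ready queue: dequeue a process, run it for at most one quantum, and requeue it if it is unfinished. A minimal sketch (assuming all processes arrive at time 0, as in this example; the function name is ours):

```python
# Round Robin sketch with a FIFO ready queue and a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum=5):
    remaining = list(bursts)
    completion = [0] * len(bursts)
    queue = deque(range(len(bursts)))   # ready queue in FCFS order
    time = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)             # quantum expired: back to the tail
        else:
            completion[i] = time
    tat = completion[:]                 # arrival 0 -> TAT = completion time
    wt = [tat[i] - bursts[i] for i in range(len(bursts))]
    return tat, wt

tat, wt = round_robin([30, 6, 8])       # P1, P2, P3 with quantum 5
print(tat, wt)                          # [44, 21, 24] [14, 15, 16]
```

Because all arrivals are at time 0 here, new arrivals never need to be interleaved into the queue; a full simulator would insert arriving processes before the preempted one is requeued.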

---------------------------------------------------------------------------------------------------------------

Multiple Queue Scheduling:

Another class of scheduling algorithms has been created for situations in which processes are
easily classified into different groups. For example, a common division is made between
foreground (interactive) processes and background (batch) processes. These two types of
processes have different response-time requirements and so may have different scheduling
needs. In addition, foreground processes may have priority (externally defined) over
background processes.

A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues (refer Figure). The processes are permanently assigned to one queue, generally based
on some property of the process, such as memory size, process priority, or process type. Each
queue has its own scheduling algorithm. For example, separate queues might be used for
foreground and background processes. The foreground queue might be scheduled by an RR
algorithm, while the background queue is scheduled by an FCFS algorithm.

In addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling. For example, the foreground queue may have absolute
priority over the background queue.

Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed
below in order of priority:

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue,
for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the ready
queue while a batch process was running, the batch process would be preempted.
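The fixed-priority selection among queues can be sketched as follows (an illustration only: the queue names follow the example above, with the interactive-editing queue omitted for brevity, and the job names are invented):

```python
# Multilevel queue sketch: one ready queue per class, dispatched in fixed
# priority order (dict insertion order = decreasing priority).
from collections import deque

queues = {
    "system":      deque(),
    "interactive": deque(),
    "batch":       deque(["batch_job"]),
    "student":     deque(["student_job"]),
}

def pick_next():
    """Dispatch from the highest-priority non-empty queue."""
    for name, queue in queues.items():
        if queue:
            return name, queue.popleft()
    return None                         # nothing ready anywhere

print(pick_next())   # ('batch', 'batch_job'): higher-priority queues are empty
queues["system"].append("kernel_task")
print(pick_next())   # ('system', 'kernel_task'): outranks the waiting student job
```

Each queue could still run its own internal policy (RR for interactive, FCFS for batch); this sketch only shows the fixed-priority choice *between* queues.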

4.3 Deadlock:

A process requests resources; if the resources are not available at that time, the process enters
a waiting state. Sometimes, a waiting process is never again able to change state, because the
resources it has requested are held by other waiting processes. This situation is called a
deadlock.

System Model:
A process must request a resource before using it and must release the resource after using it.
A process may request as many resources as it requires to carry out its designated task.
Obviously, the number of resources requested may not exceed the total number of resources
available in the system. In other words, a process cannot request three printers if the system
has only two.

Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
Request: The process requests the resource. If the request cannot be granted immediately (for
example, if the resource is being used by another process), then the requesting process must
wait until it can acquire the resource.

Use: The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).

Release: The process releases the resource.

Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneouslyin a system:
Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.

Hold and wait: A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.

No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.

Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.

Methods for handling deadlocks:

To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a
deadlock-avoidance scheme. Deadlock prevention provides a set of methods for ensuring that
at least one of the necessary conditions cannot hold. These methods prevent deadlocks by
constraining how requests for resources can be made.

Deadlock avoidance requires that the operating system be given in advance additional
information concerning which resources a process will request and use during its lifetime. With
this additional knowledge, it can decide for each request whether or not the process should
wait. To decide whether the current request can be satisfied or must be delayed, the system
must consider the resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process.

Deadlock Prevention:
1. Eliminate Mutual Exclusion:

The mutual-exclusion condition must hold for non-sharable resources. For example, a
printer cannot be simultaneously shared by several processes. Sharable resources, in
contrast, do not require mutually exclusive access and thus cannot be involved in a
deadlock. Read-only files are a good example of a sharable resource. If several
processes attempt to open a read-only file at the same time, they can be granted
simultaneous access to the file. A process never needs to wait for a sharable resource.
In general, however, we cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically non-sharable.
2. Eliminate Hold and Wait
a) Allocate all required resources to the process before the start of its execution; this
way the hold-and-wait condition is eliminated, but it leads to low device utilization.
For example, if a process requires a printer only at a later time and we allocate the
printer before the start of its execution, the printer remains blocked until the process
has completed its execution.

b) The process will make a new request for resources after releasing the current set of
resources. This solution may lead to starvation.

3. Eliminate No Preemption:
a) One protocol is: "If a process that is holding some resources requests another
resource and that resource cannot be allocated to it, then it must release all resources
that are currently allocated to it."
b) Another protocol is: "When a process requests some resources, if they are available,
allocate them. If a requested resource is not available, then we check whether it is
being used or is allocated to some other process that is itself waiting for additional
resources. If the resource is not being used, the OS preempts it from the waiting
process and allocates it to the requesting process. If the resource is being used, the
requesting process must wait." This protocol can be applied to resources whose state
can easily be saved and restored (registers, memory space). It cannot be applied to
resources like printers.

4. Eliminate Circular Wait:

To avoid circular wait, resources may be numbered, and we can ensure that each process
requests resources only in increasing order of these numbers. The ordering itself may add
complexity and may also lead to poor resource utilization.
For example, assign numbers R1 = 1, R2 = 2, R3 = 3, and R4 = 4. With this numbering, if
process P wants to use R1 and R3, it should first request R1, then R3. Another protocol is:
"Whenever a process requests a resource Rj, it must have released all resources Rk with
number(Rk) >= number(Rj)."
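The resource-ordering protocol can be sketched as a small runtime check (a toy illustration; the class name, resource names, and numbering are invented):

```python
# Resource-ordering sketch: a process may only request a resource with a higher
# number than any resource it already holds, which rules out circular wait.
class OrderedAcquirer:
    def __init__(self, numbering):
        self.numbering = numbering   # e.g. {"r1": 1, "r2": 2, "r3": 3}
        self.held = []               # resources held, in acquisition order

    def request(self, resource):
        rank = self.numbering[resource]
        if self.held and rank <= self.numbering[self.held[-1]]:
            raise RuntimeError(f"out-of-order request for {resource}")
        self.held.append(resource)

p = OrderedAcquirer({"r1": 1, "r2": 2, "r3": 3})
p.request("r1")
p.request("r3")          # allowed: number 3 > 1
try:
    p.request("r2")      # rejected: number 2 <= 3
except RuntimeError as e:
    print(e)             # out-of-order request for r2
```

Because every process climbs the numbering, no cycle "Pi waits for a resource held by Pj, ..., back to Pi" can form.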

Deadlock avoidance

In deadlock avoidance, the request for any resource will be granted if the resulting state of the
system doesn't cause deadlock in the system. The state of the system will continuously be
checked for safe and unsafe states.

In order to avoid deadlocks, the process must tell OS, the maximum number of resources a
process can request to complete its execution.

The simplest and most useful approach states that the process should declare the maximum
number of resources of each type it may ever need. The Deadlock avoidance algorithm
examines the resource allocations so that there can never be a circular wait condition.

Banker’s Algorithm

The banker's algorithm is a resource-allocation and deadlock-avoidance algorithm that tests for
safety by simulating the allocation of the predetermined maximum possible amounts of all
resources, then makes a safe-state check to test for possible activities, before deciding whether
the allocation should be allowed to continue.

Following Data structures are used to implement the Banker’s Algorithm:

Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources types.

Available :

• It is a 1-d array of size ‘m’ indicating the number of available resources of each type.

• Available[ j ] = k means there are ‘k’ instances of resource type Rj available.

Max :

• It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
• Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.

Allocation :

• It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
• Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource
type Rj

Need :

• It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each
process.
• Need [ i, j ] = k means process Pi currently need ‘k’ instances of resource type Rj
for its execution.
• Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]

Allocation specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.

Banker’s algorithm consists of Safety algorithm and Resource request algorithm.

Safety Algorithm

The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n

2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)

3) Work = Work + Allocation[i]


Finish[i] = true
goto step (2)

4) if Finish [i] = true for all i


then the system is in a safe state
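Steps 1-4 of the safety algorithm translate almost directly into code. A minimal sketch (the function name is ours; the 5-process, 3-resource-type data is the classic textbook example, used here only to exercise the algorithm, not taken from these notes):

```python
# Safety-algorithm sketch: repeatedly find a process whose Need can be met by
# Work, pretend it finishes, and reclaim its Allocation.
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)              # step 1: Work = Available
    finish = [False] * n                # step 1: Finish[i] = false
    safe_sequence = []
    while len(safe_sequence) < n:
        for i in range(n):              # step 2: find i with Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]  # step 3
                finish[i] = True
                safe_sequence.append(i)
                break
        else:
            return False, []            # step 4: some Finish[i] stays false
    return True, safe_sequence          # step 4: all true -> safe state

allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
available = [3, 3, 2]
print(is_safe(available, allocation, need))   # (True, [1, 3, 0, 2, 4])
```

The returned sequence is a safe sequence: granting the processes their maximum claims in that order can always be satisfied.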

Resource-Request Algorithm

Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:

1) If Requesti<= Needi
Goto step (2) ; otherwise, raise an error condition, since the process has exceeded its
maximum claim.

2) If Requesti<= Available
Goto step (3); otherwise, Pi must wait, since the resources are not available.

3) Have the system pretend to have allocated the requested resources to process Pi by
modifying the state asfollows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti

Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a Resource-
Allocation Graph. This graph consists of a set of vertices V and a set of edges E. The set of
vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set
consisting of all resource types in the system.

A directed edge Pi → Rj is called a request edge;
a directed edge Rj → Pi is called an assignment edge.

The resource-allocation graph shown in Figure 7.2 depicts the following situation.
The sets P, R and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

Resource instances:

• One instance of resource type R1
• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4

Given the definition of a resource-allocation graph, it can be shown that, if the graph contains
no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then
a deadlock may exist.

For example, if P3 requests an instance of R2 (adding a request edge P3 → R2), two cycles result:

P1 → R1 → P2 → R3 → P3 → R2 → P1

P2 → R3 → P3 → R2 → P2
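The cycle check itself can be sketched with a depth-first search over the graph's edges (an illustration; `has_cycle` is our helper, and the edge set is the one from Figure 7.2 above):

```python
# DFS cycle detection on a resource-allocation graph: a back edge to a node
# still on the recursion stack means a cycle (and hence a possible deadlock).
def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    state = {v: 0 for v in graph}       # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(v):
        state[v] = 1
        for w in graph[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True             # back edge found: cycle
        state[v] = 2
        return False
    return any(state[v] == 0 and dfs(v) for v in graph)

E = [("P1","R1"), ("P2","R3"), ("R1","P2"),
     ("R2","P2"), ("R2","P1"), ("R3","P3")]
print(has_cycle(E))                     # False: the Figure 7.2 graph is cycle-free
print(has_cycle(E + [("P3","R2")]))     # True: P3 -> R2 closes the cycles above
```

With single-instance resource types a cycle implies deadlock; with multiple instances (like R2 here) a cycle only means a deadlock *may* exist, matching the statement above.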

