CPU Scheduling

CPU scheduling is a critical operating system function that determines which process uses the CPU, aiming to maximize utilization, fairness, and performance metrics like turnaround and waiting time. It includes long-term, medium-term, and short-term scheduling, with various algorithms such as First Come First Serve, Shortest Job First, Round Robin, and Priority Scheduling, each with distinct advantages and disadvantages. Effective scheduling ensures efficient resource use and responsiveness in multiprogramming environments.


https://www.youtube.com/watch?v=cjZAhWWbQg4

https://www.youtube.com/watch?v=jzk5Xoi6dSQ
https://www.youtube.com/watch?v=BTa7WA9zPXY
https://www.youtube.com/watch?v=iWs8tgYzgA4
https://www.youtube.com/watch?v=2kbOgVezfWg
https://www.youtube.com/watch?v=6PEyXwdxeIc
https://www.youtube.com/watch?v=RCmhGeuj0GA
https://www.youtube.com/watch?v=M1ShhbWqmwQ
https://www.youtube.com/watch?v=7Lhua2z5l18

CPU Scheduling: Concepts and Algorithms

CPU scheduling is a key function of the operating system that determines which process gets to use the CPU when multiple processes are ready to execute. The goal is to maximize CPU utilization, ensure fairness, and optimize performance metrics like turnaround time, waiting time, and response time.

Basic Concepts of CPU Scheduling

CPU scheduling occurs when processes are in the Ready Queue and the CPU must decide which process to execute next. It is essential in multiprogramming systems to ensure efficient CPU usage.

 Fairness: Everyone gets a turn. No one process should hog all the resources while others starve.
 Throughput: Get as much work done as possible. The system
should be efficient and complete many tasks quickly.
 Responsiveness: Make sure the computer feels snappy.
Users shouldn't have to wait too long for things to happen,
especially for interactive tasks.
 Predictability: Things should happen consistently. Users
should have a general idea of how long tasks will take. No
sudden, unexpected delays.
 Balanced Resource Use: Don't let any one part of the
computer (CPU, memory, etc.) get overloaded while others sit
idle. Use everything efficiently.
 No Starvation: No process should be completely ignored.
Even long-running tasks should eventually get their chance to
run.
 Priorities: Some tasks are more important than others. The
scheduler should take these priorities into account. For
example, a critical system process should get preference over
a background task.
 Resource Holders: If a process is holding onto something
important (like a file or a network connection), give it a little
extra attention so it can finish and release those resources.
 Good Behavior: Reward processes that use resources
efficiently and don't cause problems. For example, processes
that don't constantly demand the CPU might get slightly
better treatment.

In short, a good scheduler keeps the computer running smoothly, efficiently, and fairly, so everyone (and every program) gets a reasonable share of the resources and the system feels responsive and predictable.

Types of Scheduling
Long-term scheduling: Decides when to start a new process. It controls how many processes are running at a time.
 Determines which programs are admitted to the system for processing
 Controls the degree of multiprogramming
 The more processes that are created, the smaller the percentage of time that each process can be executed
 May limit the number of admitted processes to provide satisfactory service to the current set of processes

Medium-term scheduling: Decides which processes should stay in memory. If too many processes are running, some may be temporarily removed (swapped out) and brought back later.
 Part of the swapping function
 Swapping-in decisions are based on the need to manage the degree of multiprogramming
 Considers the memory requirements of the swapped-out processes

Short-term scheduling: Decides which process runs next. This happens frequently and is managed by the CPU scheduler (dispatcher).
 Known as the dispatcher
 Executes most frequently (hence the name short-term scheduling)
 Selects from among the processes in the ready queue and allocates the CPU to one of them
 Makes the fine-grained decision of which process to execute next
 Invoked when an event occurs that may lead to the blocking of the current process, or that may provide an opportunity to preempt a currently running process in favor of another

Process State Transition Diagram with Suspend States


Scheduling and Process State Transitions

Levels of Scheduling
Queuing Diagram for Scheduling

Difference Between Short-Term, Medium-Term, and Long-Term Schedulers

CPU and I/O Burst Cycle:

Imagine a program running on your computer. It's not constantly using the CPU. Instead, it goes through a cycle:

1. CPU Burst: The program uses the CPU to do some calculations or processing. Think of it as the program's "thinking time."
2. I/O Burst: The program needs to wait for something outside the CPU, like reading data from a hard drive, sending information over the network, or waiting for you to type something. This is the program's "waiting time."
This cycle repeats: CPU burst, then I/O burst, then CPU burst again, and so on. Finally, the program finishes with one last CPU burst and then it's done.

Short vs. Long Bursts: Some programs spend a lot of time "thinking" (long CPU bursts) and less time "waiting" (short I/O bursts). These are called CPU-bound programs. Think of a program that does complex calculations.

Short vs. Long Bursts: Other programs spend more time "waiting" (long I/O bursts) and less time "thinking" (short CPU bursts). These are called I/O-bound programs. Think of a program that copies files from a hard drive.

Why this matters for scheduling: The operating system's job is to manage the CPU. Knowing whether programs are CPU-bound or I/O-bound helps the operating system choose the best way to share the CPU among different programs. It can try to balance things so that both types of programs run efficiently.

CPU and I/O Burst Cycle:
 Process execution consists of a cycle of CPU execution and I/O wait.
 Processes alternate between these two states.
 Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on.
 The last CPU burst ends with a system request to terminate execution rather than with another I/O burst.
 The durations of these CPU bursts have been measured.
 An I/O-bound program typically has many short CPU bursts; a CPU-bound program might have a few very long CPU bursts.
 This can help in selecting an appropriate CPU-scheduling algorithm.

Preemptive Scheduling:
 Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
 The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining.
 That process stays in the ready queue until it gets its next chance to execute.

Non-Preemptive Scheduling:
 Non-preemptive scheduling is used when a process terminates or switches from the running state to the waiting state.
 In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state.
 Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
 Instead, it waits until the process completes its CPU burst, and only then allocates the CPU to another process.

Dispatcher

The dispatcher module is responsible for handing over control of the CPU to the process chosen by the CPU scheduler. This involves three main steps:

1. Switching context – saving the current process's state and loading the new process's state.
2. Switching to user mode – changing the CPU mode so the new process can run in a normal user environment.
3. Jumping to the proper location – starting the new process from where it left off, or from the beginning.
Scheduling Criteria
User oriented:
 Turnaround time – amount of time to execute a particular process
 Response time – amount of time from when a request was submitted until the first response is produced, not the final output (relevant for time-sharing environments)
 Waiting time – amount of time a process has been waiting in the ready queue
System oriented:
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per
time unit
Types of Scheduling Algorithm
(a) First Come First Serve (FCFS)
In FCFS Scheduling
 The process which arrives first in the ready queue is assigned the CPU first.
 In case of a tie, the process with the smaller process id is executed first.
 It is always non-preemptive in nature.
 Jobs are executed on a first come, first served basis.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average waiting time is high.
Advantages-
 It is simple and easy to understand.
 It can be easily implemented using queue data structure.
 It does not lead to starvation.
Disadvantages-
 It does not consider the priority or burst time of the processes.
 It suffers from the convoy effect, i.e. processes with smaller burst time get stuck waiting behind processes with larger burst time that arrived earlier.
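The FCFS mechanics above can be sketched as a small simulation. This is an illustrative sketch, not code from the notes; the process tuples below are hypothetical.

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples.
    Returns {name: (waiting, turnaround)}."""
    # first come, first served; Python's sort is stable, so ties keep
    # input order (i.e. the smaller process id goes first)
    order = sorted(processes, key=lambda p: p[1])
    time, result = 0, {}
    for name, arrival, burst in order:
        time = max(time, arrival)        # CPU may sit idle until arrival
        time += burst                    # runs to completion (non-preemptive)
        turnaround = time - arrival      # Turn Around time = Exit - Arrival
        result[name] = (turnaround - burst, turnaround)  # Waiting = TAT - Burst
    return result

# Hypothetical workload: P1 runs 0-4, P2 runs 4-7, P3 runs 7-8
stats = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
```

In this toy workload P3 waits 5 units even though its own burst is only 1 unit, which is exactly the convoy effect described above.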

Example 1:
Example 2:
Consider the processes P1, P2, P3 given in the below table,
arrives for execution in the same order, with Arrival Time 0,
and given Burst Time,

Example 3:
Consider the processes P1, P2, P3, P4 given in the below
table, arrives for execution in the same order, with given
Arrival Time and Burst Time

(b) Shortest Job First (SJF)


 Processes which have the shortest burst time are scheduled first.
 If two processes have the same burst time, then FCFS is used to break the tie.
 It can be used in both non-preemptive and preemptive mode.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time the process will take.
 The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).
Advantages-
 SRTF is optimal and guarantees the minimum average waiting time.
 It provides a standard for other algorithms, since no other algorithm performs better than it.
Disadvantages-
 It cannot be implemented practically, since the burst time of the processes cannot be known in advance.
 It leads to starvation for processes with larger burst time.
 Priorities cannot be set for the processes.
 Processes with larger burst time have poor response time.
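Non-preemptive SJF can be sketched as follows. This is an illustrative sketch, not the notes' own code; the workload in the call is hypothetical and not one of the tables below.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)}."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                    # CPU idle: jump to next arrival
            time = pending[0][1]
            ready = [p for p in pending if p[1] <= time]
        # shortest burst first; FCFS (earlier arrival) breaks ties
        job = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove(job)
        time += job[2]                   # run to completion
        tat = time - job[1]              # Turn Around time = Exit - Arrival
        done[job[0]] = (tat - job[2], tat)
    return done

stats = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

Note that once P1 starts, the shorter P3 must wait for it to finish: the preemptive variant (SRTF) would instead interrupt the running process whenever a shorter remaining time arrives.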
Example-01:
Consider the set of 5 processes whose arrival time and burst time
are given below

Solution
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turnaround time.
Gantt Chart

Example-02:
Consider the set of 5 processes whose arrival time and burst time
are given below

If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and average turnaround time.
Solution
Gantt Chart

Now,
 Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit
 Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit
Example-03:
Consider the set of 6 processes whose arrival time and burst time
are given below

If the CPU scheduling policy is shortest remaining time first, calculate the average waiting time and average turnaround time.
Solution
Gantt Chart
Example-05:
Consider the set of 4 processes whose arrival time and burst time
are given below
(c) Round Robin Scheduling
 CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
 This fixed amount of time is called the time quantum or time slice.
 After the time quantum expires, the running process is preempted and sent to the ready queue.
 Then, the processor is assigned to the next arrived process.
 It is always preemptive in nature.

Advantages-
 It gives the best performance in terms of average response time.
 It is best suited for time-sharing systems, client-server architecture and interactive systems.
Disadvantages-
 It leads to starvation for processes with larger burst time, as they have to repeat the cycle many times.
 Its performance heavily depends on the time quantum.
 Priorities cannot be set for the processes.
With decreasing value of time quantum,
 Number of context switches increases
 Response time decreases
 Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.

With increasing value of time quantum,
 Number of context switches decreases
 Response time increases
 Chances of starvation increase
Thus, a higher value of time quantum is better in terms of the number of context switches.

 With increasing value of time quantum, Round Robin Scheduling tends to become FCFS Scheduling.
 When the time quantum tends to infinity, Round Robin Scheduling becomes FCFS Scheduling.
 The performance of Round Robin scheduling heavily depends on the value of the time quantum.
 The value of the time quantum should be such that it is neither too big nor too small.
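The round-robin mechanics above can be sketched as a short simulation. This is an illustrative sketch under one common convention (a process arriving at the same instant a quantum expires joins the ready queue ahead of the preempted process); the workload at the bottom is hypothetical, not one of the examples.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst) tuples.
    Returns {name: (waiting, turnaround)}."""
    procs = sorted(processes, key=lambda p: p[1])
    arrival = {name: a for name, a, _ in procs}
    burst = {name: b for name, _, b in procs}
    remaining = dict(burst)
    ready, i, time, result = deque(), 0, 0, {}
    while len(result) < len(procs):
        # admit every process that has arrived by the current time
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if not ready:                    # CPU idle until the next arrival
            time = procs[i][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # arrivals during this slice enter the queue before the
        # preempted process rejoins it
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if remaining[name]:              # quantum expired: preempt
            ready.append(name)
        else:                            # finished its last burst
            tat = time - arrival[name]   # Turn Around time = Exit - Arrival
            result[name] = (tat - burst[name], tat)
    return result

stats = round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], quantum=2)
```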
Example-01:
Consider the set of 5 processes whose arrival time and burst time
are given below

If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time and average turnaround time.
Solution
Ready Queue-
P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Now,
 Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 =
8.6 unit
 Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
Problem-02:
Consider the set of 6 processes whose arrival time and burst time
are given below

If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average waiting time and average turnaround time.
Solution
Ready Queue-
P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Now,
 Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 unit
 Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit
Problem-03: Consider the set of 6 processes whose arrival time and burst time are given below

If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the average waiting time and average turnaround time.
Solution
Ready Queue- P3, P1, P4, P2, P3, P6, P1, P4, P2, P3, P5, P4
Gantt chart

Now,
 Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
 Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit
(d) Priority Scheduling
 Out of all the available processes, the CPU is assigned to the process having the highest priority.
 In case of a tie, it is broken by FCFS Scheduling.
 Priority Scheduling can be used in both preemptive and non-preemptive mode.
 The waiting time for the process having the highest priority will always be zero in preemptive mode.
 The waiting time for the process having the highest priority may not be zero in non-preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions-
 The arrival time of all the processes is the same
 All the processes become available at the same time
Advantages-
 It considers the priority of the processes and allows the important processes to run first.
 Priority scheduling in pre-emptive mode is best suited for real-time operating systems.
Disadvantages-
 Processes with lower priority may starve for the CPU.
 There is no way to predict response time and waiting time in advance.
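Non-preemptive priority scheduling can be sketched as follows. This is an illustrative sketch, not the notes' code; it uses the notes' convention that a higher number represents a higher priority, and the process tuples in the call are hypothetical.

```python
def priority_np(processes):
    """Non-preemptive priority scheduling. processes: list of
    (name, arrival, burst, priority); higher number = higher priority.
    Returns {name: (waiting, turnaround)}."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                    # CPU idle: jump to next arrival
            time = pending[0][1]
            ready = [p for p in pending if p[1] <= time]
        # highest priority wins; ties broken by FCFS (earlier arrival)
        job = max(ready, key=lambda p: (p[3], -p[1]))
        pending.remove(job)
        time += job[2]                   # run to completion
        tat = time - job[1]              # Turn Around time = Exit - Arrival
        done[job[0]] = (tat - job[2], tat)
    return done

stats = priority_np([("P1", 0, 3, 2), ("P2", 1, 4, 5), ("P3", 2, 2, 1)])
```

A preemptive variant would instead re-evaluate the ready queue on every arrival and switch to a newly arrived process whose priority exceeds that of the running one.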
Problem-01:
Consider the set of 5 processes whose arrival time and burst time
are given below

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time and average turnaround time. (Higher number represents higher priority)

Solution
Gantt Chart

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Now,
 Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 =
8.2 unit
 Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2
unit
Problem-02: Consider the set of 5 processes whose arrival time and burst time are given below

If the CPU scheduling policy is priority preemptive, calculate the average waiting time and average turnaround time. (Higher number represents higher priority).
Solution
Gantt Chart

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Now,
 Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 =
7.6 unit
 Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6
unit
(e) Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
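The absolute-priority rule can be sketched as follows. The queue names follow the five-queue example above; the process names and the FIFO policy inside each queue are illustrative simplifications, not from the notes.

```python
# Queues listed from highest to lowest priority, as in the example above
order = ["system", "interactive", "editing", "batch", "student"]
queues = {name: [] for name in order}

def pick_next():
    """Serve the highest-priority non-empty queue; a lower queue runs
    only when every queue above it is empty."""
    for name in order:
        if queues[name]:
            return queues[name].pop(0)   # FIFO within a queue (simplified)
    return None                          # nothing is ready

queues["batch"].append("B1")             # a batch process is waiting
queues["system"].append("S1")            # a system process arrives later
# pick_next() chooses S1 before B1 even though B1 was enqueued first
```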

Deadlock
 Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
 For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.
Deadlock can arise if the following four necessary conditions hold simultaneously.
1. Mutual Exclusion: At least one resource is non-sharable, meaning only one process can use it at a time.
2. Hold and Wait: A process is holding at least one resource and waiting for additional resources.
3. No Preemption: A resource cannot be taken from a process unless the process releases it voluntarily.
4. Circular Wait: A set of processes are waiting for each other in circular form, i.e. each process is waiting for a resource held by the next, and the last process is waiting for the resource held by the first.
Difference between Starvation and Deadlock

Deadlock Handling
The various strategies for handling deadlock are-
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance
1. Deadlock Prevention
 Deadlocks can be prevented by negating at least one of the four required conditions:
Mutual Exclusion
 Shared resources such as read-only files do not lead to deadlocks.
 Unfortunately, some resources, such as printers and tape drives, require exclusive access by a single process.
Hold and Wait
 To prevent this condition, processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.
No Preemption
 Preemption of process resource allocations can prevent this condition of deadlocks, when it is possible.
Circular Wait
 One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing (or decreasing) order.

When all four conditions are present simultaneously, a deadlock can occur. In such a scenario, the processes involved may remain in a state of deadlock indefinitely, unless intervention from the operating system or other external factors breaks the deadlock.

Deadlock Prevention in Operating System

Deadlock prevention is a proactive strategy employed by operating systems to eliminate or avoid the occurrence of deadlocks altogether. By carefully managing resource allocation and process execution, deadlock prevention techniques aim to break one or more of the necessary conditions for deadlock formation. Here are some commonly used deadlock prevention techniques in operating systems:

Resource Allocation Denial:
One approach to preventing deadlocks is to deny resource allocation requests that may lead to potential deadlocks. The operating system carefully analyzes resource requests from processes and determines if granting a request would result in a deadlock. If a resource allocation request cannot be satisfied without violating the Coffman conditions, the system denies the request, preventing the formation of a deadlock. However, this approach may lead to low resource utilization and can be overly restrictive in certain situations.

Resource Ordering:
By imposing a strict ordering or hierarchy on resource allocation, the
possibility of circular waits can be eliminated. The operating system
assigns a unique numerical identifier to each resource and ensures
that processes can only request resources in increasing order. This
ensures that processes never enter a circular wait state, breaking
the circular wait condition and preventing deadlocks. However, this
technique requires prior knowledge of the resource requirements,
which may not always be feasible or practical.
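The resource-ordering idea can be illustrated with two threads that want the same pair of locks. This is an illustrative sketch; the lock ids and thread bodies are hypothetical.

```python
import threading

# Resources are numbered; every thread must acquire them in strictly
# increasing order, which makes a circular wait impossible.
locks = {1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(ids):
    ordered = sorted(ids)                # enforce the global ordering
    for i in ordered:
        locks[i].acquire()
    return ordered

def release_all(ids):
    for i in reversed(ids):
        locks[i].release()

def worker(name, wants, log):
    held = acquire_in_order(wants)       # both threads take lock 1 first
    log.append(name)
    release_all(held)

log = []
a = threading.Thread(target=worker, args=("A", (1, 2), log))
b = threading.Thread(target=worker, args=("B", (2, 1), log))
a.start(); b.start(); a.join(); b.join()
# Without the ordering, A holding 1 while waiting for 2 and B holding 2
# while waiting for 1 could deadlock; with it, both threads complete.
```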

Hold and Wait Prevention:
The hold and wait condition can be prevented by employing a strategy where a process must request and acquire all required resources simultaneously, rather than acquiring them incrementally. This approach ensures that a process only begins execution when it has obtained all necessary resources, eliminating the possibility of holding a resource while waiting for another. However, this technique may lead to resource underutilization and can be restrictive in scenarios where resources are not immediately available.

Preemptive Resource Allocation:
In certain situations, deadlocks can be prevented by introducing the concept of resource preemption. If a process requests a resource that is currently allocated to another process, the operating system can preempt (temporarily revoke) the resource from the current process and allocate it to the requesting process. Preemption ensures that resources are efficiently utilized and prevents deadlocks caused by the hold and wait condition. However, careful consideration is required to avoid indefinite resource preemption and ensure fairness in resource allocation.

Spooling and I/O Scheduling:
Deadlocks can occur due to resource contention during input/output (I/O) operations. Spooling and I/O scheduling techniques can help prevent I/O-related deadlocks. Spooling involves storing I/O requests in a queue, allowing processes to proceed with other tasks while the I/O operation is being executed. Efficient I/O scheduling algorithms ensure fair access to I/O resources, minimizing the chances of deadlocks caused by resource contention during I/O operations.

By implementing these deadlock prevention techniques, operating systems can significantly reduce the likelihood of deadlocks occurring. However, each technique comes with its own trade-offs and considerations, and the choice of prevention strategy depends on the specific requirements and constraints of the system at hand.

2. Deadlock Avoidance
 In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs.
 The process continues as long as the system remains in a safe state.
 Once the system moves to an unsafe state, the OS has to backtrack one step.
 In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.

In complex systems involving multiple processes and shared resources, the potential for deadlocks arises when processes wait for each other to release resources, causing a standstill. The resulting deadlocks can cause severe issues in computer systems, such as performance degradation and even system crashes. To prevent such problems, the technique of deadlock avoidance is employed. It entails scrutinizing the requests made by processes for resources and evaluating the available resources to determine if the grant of such requests would lead to a deadlock. In cases where granting a request would result in a deadlock, the system denies the request. Deadlock avoidance is a crucial aspect of operating system design and plays an indispensable role in upholding the dependability and steadiness of computer systems.

Safe State and Unsafe State

A safe state refers to a system state where the allocation of resources to each process ensures the avoidance of deadlock. The successful execution of all processes is achievable, and the likelihood of a deadlock is low. The system attains a safe state when a suitable sequence of resource allocation enables the successful completion of all processes.

Conversely, an unsafe state implies a system state where a deadlock may occur. The successful completion of all processes is not assured, and the risk of deadlock is high. The system is unsafe when no sequence of resource allocation ensures the successful execution of all processes.

Deadlock Avoidance Algorithms

When resource categories have only single instances of their resources, the Resource-Allocation Graph Algorithm is used. In this algorithm, a cycle is a necessary and sufficient condition for deadlock.

When resource categories have multiple instances of their resources, the Banker's Algorithm is used. In this algorithm, a cycle is a necessary but not a sufficient condition for deadlock.


Resource-Allocation Graph Algorithm

Resource Allocation Graph (RAG) is a popular technique used for deadlock avoidance. It is a directed graph that represents the processes in the system, the resources available, and the relationships between them. A process node in the RAG has two types of edges: request edges and assignment edges. A request edge represents a request by a process for a resource, while an assignment edge represents the assignment of a resource to a process.

To determine whether the system is in a safe state or not, the RAG is analyzed to check for cycles. If there is a cycle in the graph, it means that the system is in an unsafe state, and granting a resource request can lead to a deadlock. In contrast, if there are no cycles in the graph, it means that the system is in a safe state, and resource allocation can proceed without causing a deadlock.

The RAG technique is straightforward to implement and provides a clear visual representation of the processes and resources in the system. It is also an effective way to identify the cause of a deadlock if one occurs. However, one of the main limitations of the RAG technique is that it assumes that all resources in the system are allocated at the start of the analysis. This assumption can be unrealistic in practice, where resource allocation can change dynamically during system operation. Therefore, other techniques such as the Banker's Algorithm are used to overcome this limitation.
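For single-instance resources, checking a RAG for a cycle is a plain graph-cycle test. A sketch follows; the graph at the bottom is hypothetical, with process→resource edges as requests and resource→process edges as assignments.

```python
def has_cycle(graph):
    """graph: adjacency dict, node -> list of successors. Depth-first
    search; a back edge to a node on the current path means a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2         # unvisited / on path / finished
    color = {}
    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            c = color.get(m, WHITE)
            if c == GRAY:                # back edge: cycle found
                return True
            if c == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)

# Hypothetical single-instance RAG: P1 holds R1 and requests R2,
# P2 holds R2 and requests R1 -> a cycle, hence deadlock.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
```

With multiple instances per resource type a cycle is only necessary, not sufficient, which is why the Banker's algorithm is used in that case.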

Banker's Algorithm

The banker's algorithm is a deadlock avoidance algorithm used in operating systems. It was proposed by Edsger Dijkstra in 1965. The banker's algorithm works on the principle of ensuring that the system has enough resources to allocate to each process so that the system never enters a deadlock state. It works by keeping track of the total number of resources available in the system and the number of resources allocated to each process.

The algorithm is used to prevent deadlocks that can occur when multiple processes are competing for a finite set of resources. The resources can be of different types, such as memory, CPU cycles, or I/O devices. It works by first analysing the current state of the system and determining if granting a resource request from a process will result in a safe state. A state is considered safe if there is at least one sequence of resource allocations that can satisfy all processes without causing a deadlock.

The Banker's algorithm assumes that each process declares its maximum resource requirements upfront. Based on this information, the algorithm allocates resources to each process such that the total number of allocated resources never exceeds the total number of available resources. The algorithm does not grant access to resources that could potentially lead to a deadlock situation. The Banker's algorithm uses a matrix called the "allocation matrix" to keep track of the resources allocated to each process, and a "request matrix" to keep track of the resources requested by each process. It also uses a "need matrix" to represent the resources that each process still needs to complete its execution.

To determine if a request can be granted, the algorithm checks if there are enough available resources to satisfy the request, and then checks if granting the request will still result in a safe state. If the request can be granted safely, the algorithm grants the resources and updates the allocation matrix, request matrix, and need matrix accordingly. If the request cannot be granted safely, the process must wait until sufficient resources become available.

1. Initialize the system
Define the number of processes and resource types.
Define the total number of available resources for each resource type.
Create a matrix called the "allocation matrix" to represent the current resource allocation for each process.
Create a matrix called the "need matrix" to represent the remaining resource needs for each process.
2. Define a request
A process requests a certain number of resources of a particular type.
3. Check if the request can be granted
Check if the requested resources are available.
If the requested resources are not available, the process must wait.
If the requested resources are available, go to the next step.
4. Check if the system is in a safe state
Simulate the allocation of the requested resources to the process.
Check if this allocation results in a safe state, meaning there is a sequence of allocations that can satisfy all processes without leading to a deadlock.
If the state is safe, grant the request by updating the allocation matrix and the need matrix.
If the state is not safe, do not grant the request and let the process wait.
5. Release the resources
When a process has finished its execution, it releases its allocated resources by updating the allocation matrix and the need matrix.

The above steps are repeated for each resource request made by any process in the system. Overall, the Banker's algorithm is an effective way to avoid deadlocks in resource-constrained systems by carefully managing resource allocations and predicting potential conflicts before they arise.
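The steps above can be sketched in Python. This is a minimal illustration, not a production allocator; the function and variable names are my own, and the sketch tracks the need matrix directly (need = max − allocation), as in the common textbook formulation:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return a safe ordering of process
    indices if one exists, else None."""
    n = len(allocation)            # number of processes
    work = list(available)         # resources currently free (simulated)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for p in range(n):
            # A process can run to completion if its remaining need fits in 'work'.
            if not finished[p] and all(need[p][r] <= work[r] for r in range(len(work))):
                # Simulate it finishing and releasing everything it holds.
                for r in range(len(work)):
                    work[r] += allocation[p][r]
                finished[p] = True
                sequence.append(p)
                progress = True
    return sequence if all(finished) else None


def request_resources(pid, request, available, allocation, need):
    """Grant 'request' to process 'pid' only if the resulting state is safe."""
    if any(request[r] > need[pid][r] for r in range(len(request))):
        raise ValueError("process exceeded its declared maximum")
    if any(request[r] > available[r] for r in range(len(request))):
        return False  # not enough free resources: the process must wait
    # Tentatively allocate, then test the resulting state for safety.
    for r in range(len(request)):
        available[r] -= request[r]
        allocation[pid][r] += request[r]
        need[pid][r] -= request[r]
    if is_safe(available, allocation, need) is not None:
        return True
    # Unsafe: roll the tentative allocation back and make the process wait.
    for r in range(len(request)):
        available[r] += request[r]
        allocation[pid][r] -= request[r]
        need[pid][r] += request[r]
    return False
```

Run against the classic five-process, three-resource example, `is_safe` finds the safe sequence P1, P3, P4, P0, P2, and a request that would leave the system unsafe is rolled back rather than granted.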

Conclusion

Deadlock avoidance is an important concept in operating system design that is used to prevent the occurrence of deadlocks, which can result in system crashes and reduced performance. By using techniques such as resource allocation graphs and the Banker's algorithm, operating systems can ensure that resources are allocated in a way that prevents deadlocks from occurring.

While deadlock avoidance can be an effective way to prevent deadlocks, it can also be costly in terms of system resources and may result in reduced system performance. As a result, it is important for operating system designers to carefully balance the benefits of deadlock avoidance with the costs and potential drawbacks.

Overall, deadlock avoidance is an essential aspect of modern operating system design that helps to ensure the stability and reliability of computer systems. By understanding the principles and techniques of deadlock avoidance, system administrators and developers can build more robust and resilient systems that can better withstand the challenges of modern computing environments.

3. Deadlock detection and recovery


 This strategy involves waiting until a deadlock occurs.
 After deadlock occurs, the system state is recovered.
 The main challenge with this approach is detecting the deadlock.
Comparison of Deadlock Prevention and Deadlock Detection and Recovery in OS, aspect by aspect:

 Objective: Prevention stops deadlocks from occurring at all; detection and recovery allows deadlocks to occur, then detects and recovers from them.
 Resource Allocation: Prevention requires careful resource allocation to avoid the circular-wait, hold-and-wait, no-preemption, and mutual-exclusion conditions; detection and recovery allocates resources without stringent conditions, letting processes request resources as needed.
 Complexity: Prevention is typically more complex to implement, as it requires stricter allocation policies and resource management; detection and recovery is less complex, focusing on detecting and resolving deadlocks after they occur.
 Performance Impact: Prevention can affect system performance through resource allocation constraints and potential delays in acquiring resources; detection adds some monitoring overhead, and recovery may involve terminating processes, which can impact performance.
 Resource Utilization: Prevention may lead to underutilization of resources, since processes might not get all the resources they need; detection and recovery strives to maximize resource utilization, potentially at the cost of occasional deadlocks.
 Process Blocking: Under prevention, processes may be blocked from starting due to resource unavailability, leading to possible delays; under detection and recovery, processes may be allowed to start even without all their resources, though they might later face deadlock situations.
 Preemption: Prevention generally involves resource preemption, which can be complex and might lead to inefficiencies; detection and recovery rarely requires preemption, allowing normal process execution.
 Proactive vs. Reactive: Prevention proactively blocks deadlocks before they occur by imposing constraints on resource allocation; detection and recovery reactively detects deadlocks after they happen and takes action to resolve them.
 System Responsiveness: Prevention can improve responsiveness by reducing the chances of deadlocks and the need for recovery; detection and recovery may require system downtime or process termination, affecting overall responsiveness.
 Compatibility: Prevention may not suit all systems or applications, especially those with dynamic resource needs; detection and recovery suits a wide range of systems and applications, since it handles deadlocks reactively.

Deadlock Detection

Deadlock detection in OS is a critical concept in computer science and operating systems that deals with the potential problem of deadlocks in concurrent systems. A deadlock occurs when two or more processes are unable to proceed with their execution because each process is waiting for a resource that is held by another process within the same system. This situation results in a standstill where no process can make progress, effectively halting the system's functionality.

Deadlock Detection in OS Strategies:


 Resource Allocation Graph:
This method represents the resources and processes as nodes
in a graph, with edges representing resource requests and
allocations. Deadlocks can be detected by identifying cycles in
the graph.
 Wait-Die and Wound-Wait Schemes:
These timestamp-based schemes are used in resource management systems. In Wound-Wait, an older process may preempt (wound) a younger process that holds a resource it needs, while in Wait-Die, an older process waits and a younger process requesting a resource held by an older one is aborted (dies). These strategies help to prevent potential deadlocks.
 Banker's Algorithm:
This algorithm is used to determine if a resource allocation will
lead to a safe state, where a safe state ensures that all
processes can complete their execution without getting stuck
in a deadlock.
 Timeout Mechanisms:
Processes are given a certain time limit to complete their
execution. If they fail to complete within the time limit, their
allocated resources are released, preventing potential
deadlocks.
 Periodic Checking:
In this approach, the system periodically checks for potential
deadlocks by analyzing the state of resources and processes.
If a deadlock is detected, appropriate actions are taken.
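For single-instance resources, the resource-allocation-graph strategy above reduces to finding a cycle in a wait-for graph: each edge says "this process is waiting on that one," and a cycle means deadlock. A minimal Python sketch (the process names and dictionary representation are illustrative, not any particular OS's data structures):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: set of processes it is waiting on}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: deadlocked.
print(has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))   # True
# Break one edge and the deadlock disappears.
print(has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": set()}))    # False
```

A periodic-checking detector would rebuild this graph from the current allocation and request state each time it runs and invoke the cycle test.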

Advantages of Deadlock Detection in OS:

 Prevents Hangs:
Detects and resolves deadlocks, preventing system freezes.
 Optimizes Allocation:
Identifies inefficient resource usage and reallocates for
efficiency.
 Dynamic Allocation:
Responds to real-time demands, improving responsiveness.
 Minimizes Impact:
Reduces disruptions, enhancing user experience.
 Predictable Behavior:
Enables better management and predictability.
 Monitors Health:
Provides insights, aids performance optimization.
 Reclaims Resources:
Frees locked resources, increasing availability.

Limitations of Deadlock Detection:

 Detection Overhead:
Adds computational burden due to continuous monitoring.
 Delayed Resolution:
Only responds after deadlock occurs, not preventing it.
 Resource Usage:
Consumes additional system resources for monitoring.
 Complexity:
Requires careful design and tuning, adding complexity.
 Potential False Positives:
Can detect non-deadlock situations as deadlocks.

Deadlock Recovery

Deadlock recovery is a process in computer science and operating systems that aims to resolve or mitigate the effects of a deadlock after it has been detected. Deadlocks are situations where multiple processes are stuck and unable to proceed because each process is waiting for a resource held by another process. Recovery strategies are designed to break this deadlock and allow the system to continue functioning.

Recovery Strategies:

 Process Termination:
One way to recover from a deadlock is to terminate one or
more of the processes involved in the deadlock. By releasing
the resources held by these terminated processes, the
remaining processes may be able to continue executing.
However, this approach should be used cautiously, as
terminating processes could lead to loss of data or incomplete
transactions.
 Resource Preemption:
Resources can be forcibly taken away from one or more
processes and allocated to the waiting processes. This
approach can break the circular wait condition and allow the
system to proceed. However, resource preemption can be
complex and needs careful consideration to avoid disrupting
the execution of processes.
 Process Rollback:
In situations where processes have checkpoints or states
saved at various intervals, a process can be rolled back to a
previously saved state. This means that the process will
release all the resources acquired after the saved state, which
can then be allocated to other waiting processes. Rollback,
though, can be resource-intensive and may not be feasible for
all types of applications.
 Wait-Die and Wound-Wait Schemes:
As mentioned in the Deadlock Detection in OS section, these
schemes can also be used for recovery. Older processes can
preempt resources from younger processes (Wound-Wait),
or younger processes can be terminated if they try to access
resources held by older processes (Wait-Die).
 Kill the Deadlock:
In some cases, it might be possible to identify a specific
process that is causing the deadlock and terminate it. This is
typically a last resort option, as it directly terminates a
process without any complex recovery mechanisms.
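The process-termination strategy above can be sketched as a victim-selection loop: kill the cheapest deadlocked process, re-run detection, and repeat until the cycle is broken. This is a hypothetical illustration; the cost values and the `still_deadlocked` detector below are assumptions standing in for whatever termination-cost estimate and detection routine a real system would use:

```python
def recover_by_termination(deadlocked, still_deadlocked):
    """Terminate deadlocked processes one at a time, cheapest victim
    first, until the deadlock is broken. 'deadlocked' is a list of
    (pid, termination_cost) pairs; 'still_deadlocked' re-runs detection
    on the surviving set and reports whether a deadlock remains."""
    survivors = dict(deadlocked)
    terminated = []
    # Re-check after every kill: terminating one process often frees
    # enough resources to unblock all the others.
    while survivors and still_deadlocked(set(survivors)):
        victim = min(survivors, key=survivors.get)   # lowest-cost process
        terminated.append(victim)
        del survivors[victim]                        # its resources are released
    return terminated

# Illustrative wait-for cycle P1 -> P2 -> P3 -> P1: killing any one
# member breaks it, so this toy detector reports deadlock only while
# all three are still alive.
def still_deadlocked(alive):
    return {"P1", "P2", "P3"} <= alive

print(recover_by_termination([("P1", 9), ("P2", 3), ("P3", 5)], still_deadlocked))
# ['P2']  (P2 is the cheapest victim, and one kill breaks the cycle)
```

Real systems weigh the cost function carefully (priority, computation already done, resources held, interactive vs. batch), since the choice of victim determines how much work is lost.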

Advantages of Deadlock Recovery:

 Resumes Processes:
Terminates deadlock-involved processes, allowing others to
continue.
 Reclaims Resources:
Releases resources, improving allocation.
 Minimizes Downtime:
Swiftly resolves deadlocks, reducing system downtime.
 User Transparent:
Shields users from deadlock complexities.
 Ensures Stability:
Prevents prolonged hangs, enhances system stability.
 Optimizes Utilization:
Redistributes resources for efficient use.
 Automated Resolution:
Swift, automated recovery from deadlocks.
 Boosts Fault Tolerance:
Contributes to system reliability and fault tolerance.

Limitations of Deadlock Recovery:

 Process Termination:
May terminate processes, affecting user tasks.
 Resource Waste:
Terminated processes release resources, causing waste.
 User Impact:
Interruption to users due to process termination.
 Unfairness:
Selecting processes to terminate may seem arbitrary.
 Automated Risks:
Automated recovery decisions might not be optimal.

4. Deadlock Ignorance
 This strategy involves ignoring the concept of deadlock and assuming that it does not exist.
 This strategy helps to avoid the extra overhead of handling deadlock.
 Windows and Linux use this strategy, and it is the most widely used method.
