Unit 4
CPU Scheduling & Algorithms
[14 Marks]
Teaching and Examination Scheme
Course Outcome

CO 4: Apply scheduling algorithms to calculate turnaround time and
average waiting time.
Unit Outcomes

• Justify the need and objectives of job scheduling criteria
• Explain the process of allocating the CPU to a given process
• Calculate turnaround time and average waiting time for a given
scheduling algorithm
• Explain the conditions leading to deadlock
What will we Learn

▪ Scheduling types – scheduling objectives, CPU and I/O burst cycle,
preemptive and non-preemptive scheduling, scheduling criteria.
▪ Types of scheduling algorithms – First Come First Served, Shortest Job
First, Shortest Remaining Time, Round Robin, priority scheduling,
multilevel queue scheduling.
▪ Deadlock – system model, necessary conditions leading to deadlock,
deadlock handling (prevention and avoidance).
What is CPU Scheduling?
• CPU scheduling is the process of allowing one process to use the CPU
while the execution of another process is on hold (in the waiting state)
due to the unavailability of a resource such as I/O, thereby making full
use of the CPU.
• The aim of CPU scheduling is to make the system efficient, fast and fair.
• Whenever the CPU becomes idle, the operating system must select one
of the processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler (or
CPU scheduler).
• The scheduler selects from among the processes in memory that are
ready to execute, and allocates the CPU to one of them.
• Another component involved in CPU scheduling function is the
dispatcher.
• Dispatcher module gives control of the CPU to the process selected by
the CPU scheduler
CPU–I/O Burst Cycle
• Maximum CPU utilization obtained
with multiprogramming
• CPU–I/O Burst Cycle – Process
execution consists of a cycle of CPU
execution and I/O wait
• Process alternates between these
two states.
• Process execution begins with a CPU
burst
• Each CPU burst is followed by an I/O burst
• The CPU burst distribution is of main
concern
CPU Scheduler

• The CPU scheduler selects from among the processes in the ready
queue, and allocates a CPU core to one of them
• Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• For situations 1 and 4, there is no choice in terms of scheduling. A
new process (if one exists in the ready queue) must be selected for
execution.
• For situations 2 and 3, however, there is a choice.
Preemptive and Nonpreemptive
Scheduling
• When scheduling takes place only under circumstances 1 and 4, the
scheduling scheme is nonpreemptive.
• Otherwise, it is preemptive.
• Under Nonpreemptive scheduling, once the CPU has been allocated
to a process, the process keeps the CPU until it releases it either by
terminating or by switching to the waiting state.
• Virtually all modern operating systems, including Windows and macOS,
use preemptive scheduling algorithms.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – if the CPU is busy executing processes, then work is being
done. A measure of work is the number of processes that complete their
execution per time unit
• Turnaround time – amount of time to execute a particular process.
Turnaround time is the interval from the time of submission of a process
to the time of its completion.
• Waiting time – amount of time a process has been waiting in the
ready queue.
• Response time – amount of time it takes from when a request was
submitted until the first response is produced.
Scheduling Algorithm
Optimization Criteria
• For an algorithm to be considered good, the following optimization
criteria are used:

• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
CPU Scheduling Algorithms
1. FCFS (First Come First Served) – Non-preemptive
2. SJF (Shortest Job First) – Non-preemptive
3. SRTN (Shortest Remaining Time) – Preemptive SJF
4. RR (Round Robin)
5. Priority Scheduling
6. Multilevel Queue Scheduling
Scheduling algorithm Basics
• Below are the different times associated with a process that need to be
calculated for every algorithm.

• Arrival Time: time at which the process arrives in the ready queue.

• Completion Time: time at which the process completes its execution.

• Burst Time: time required by a process for CPU execution.

• Turnaround Time: difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time

• Waiting Time: time a process spends waiting in the ready queue.
Waiting Time = Turn Around Time – Burst Time
Scheduling algorithm Basics
• For all algorithms, to solve a sum:

• Step 1: Plot the Gantt chart
• Step 2: Find the completion time of each process from the chart
• Step 3: Calculate Turnaround Time = Completion Time – Arrival Time
(arrival time is given in the sum)
• Step 4: Calculate Waiting Time = Turnaround Time – Burst Time
(burst time is again given in the sum)
First- Come, First-Served (FCFS)
Scheduling
(Non-preemptive)
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |

• Waiting time: P1 = 0ms; P2 = 24ms; P3 = 27ms
• Average waiting time: (0 + 24 + 27)/3 = 17ms
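Illustrative sketch (not from the original slides): a minimal Python helper for
non-preemptive FCFS with all processes available at time 0. The function name
fcfs and the tuple layout are assumptions chosen for this example.

def fcfs(processes):
    """Non-preemptive FCFS sketch. processes: list of (name, burst_time),
    taken in arrival order; all are assumed to arrive at time 0."""
    t = 0
    waiting = {}
    for name, burst in processes:
        waiting[name] = t        # waiting time = time spent before first getting the CPU
        t += burst               # this process completes at the new value of t
    return waiting

w = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(w, sum(w.values()) / len(w))   # {'P1': 0, 'P2': 24, 'P3': 27}  average = 17.0

# Reordering the arrivals, as on the next slide, avoids the convoy effect:
w2 = fcfs([("P2", 3), ("P3", 3), ("P1", 24)])
print(sum(w2.values()) / len(w2))    # 3.0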
Calculate Average Waiting Time
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
• The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |

• Waiting time: P1 = 6ms; P2 = 0ms; P3 = 3ms
• Average waiting time: (6 + 0 + 3)/3 = 3ms
• Much better than previous case
• Convoy effect - short process behind long process
• Consider one CPU-bound and many I/O-bound processes
Shortest-Job-First (SJF) Scheduling

• Associate with each process the length of its next CPU burst
• Use these lengths to schedule the process with the shortest time
• SJF is optimal – gives minimum average waiting time for a given set
of processes
• Preemptive version called shortest-remaining-time-first
• How do we determine the length of the next CPU burst?
• Could ask the user
• Estimate
Example of SJF

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

• SJF scheduling chart

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
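Illustrative sketch (not from the original slides): non-preemptive SJF in
Python. For the demonstration the four processes are assumed to be available
at time 0, which is the case the stated average of 7 corresponds to; the
function name sjf is an assumption for this example.

def sjf(processes):
    """Non-preemptive SJF sketch. processes: list of (name, arrival, burst)."""
    remaining = list(processes)
    t = 0
    waiting = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= t]
        if not ready:
            t = min(p[1] for p in remaining)       # CPU idle until the next arrival
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst first
        waiting[name] = t - arrival
        t += burst
        remaining.remove((name, arrival, burst))
    return waiting

w = sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)])
print(w, sum(w.values()) / 4)   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}  average = 7.0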


Example of Shortest-remaining-time-first
• Now we add the concepts of varying arrival times and preemption to the analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
• Preemptive SJF Gantt chart:
| P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |

• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5
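Illustrative sketch (not from the original slides): shortest-remaining-time-first
simulated in 1 ms ticks, reproducing the 6.5 ms average above. The function name
srtf and the tuple layout are assumptions for this example.

def srtf(processes):
    """Preemptive SJF (SRTF) sketch. processes: list of (name, arrival, burst)."""
    arrival = {name: at for name, at, _ in processes}
    remaining = {name: bt for name, _, bt in processes}
    completion = {}
    t = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining time wins
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:
            completion[current] = t
            del remaining[current]
    return completion

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
done = srtf(procs)
waits = [done[n] - at - bt for n, at, bt in procs]   # waiting = turnaround - burst
print(done, sum(waits) / len(waits))   # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}  6.5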


Round Robin (RR)

• Each process gets a small unit of CPU time (time quantum q), usually
10-100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is
q, then each process gets 1/n of the CPU time in chunks of at most q
time units at once. No process waits more than (n-1)q time units.
• Timer interrupts every quantum to schedule next process
• Performance
• q large ⇒ RR behaves like FIFO (FCFS)
• q small ⇒ context-switch overhead becomes too high; q must be large
with respect to the context-switch time
Example of RR with Time Quantum

Process Burst Time


P1 24
P2 3
P3 3
• The Gantt chart is:

• Typically, higher average turnaround than SJF, but better response


• q should be large compared to context switch time
• q usually 10 milliseconds to 100 milliseconds,
• Context switch < 10 microseconds
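Illustrative sketch (not from the original slides): Round Robin for the three
bursts above. The quantum value q = 4 is an assumption for illustration, since
the slide's Gantt chart is not reproduced here; the function name round_robin
is likewise mine.

from collections import deque

def round_robin(processes, q=4):
    """RR sketch. processes: list of (name, burst_time), all arriving at time 0."""
    queue = deque(processes)
    t = 0
    completion = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)                     # one quantum, or until the burst ends
        t += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: back of the ready queue
        else:
            completion[name] = t
    return completion

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)]))
# with q = 4: {'P2': 7, 'P3': 10, 'P1': 30}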
Time Quantum and Context Switch Time
Priority Scheduling

• A priority number (integer) is associated with each process

• The CPU is allocated to the process with the highest priority (smallest
integer ≡ highest priority)
• Preemptive
• Nonpreemptive

• SJF is priority scheduling where priority is the inverse of predicted


next CPU burst time

• Problem ≡ Starvation – low priority processes may never execute

• Solution ≡ Aging – as time progresses increase the priority of the


process
Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

• Priority scheduling Gantt chart:
| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |

• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
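Illustrative sketch (not from the original slides): non-preemptive priority
scheduling with all processes arriving at time 0, reproducing the 8.2 average.
The function name priority_schedule is an assumption for this example.

def priority_schedule(processes):
    """processes: list of (name, burst_time, priority); a smaller priority
    number means higher priority; all processes arrive at time 0."""
    t = 0
    waiting = {}
    for name, burst, _ in sorted(processes, key=lambda p: p[2]):
        waiting[name] = t          # waits until all higher-priority processes finish
        t += burst
    return waiting

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
w = priority_schedule(procs)
print(w, sum(w.values()) / len(w))
# {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}  average = 8.2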


Priority Scheduling with Round-Robin
Process   Burst Time   Priority
P1        4            3
P2        5            2
P3        8            2
P4        7            1
P5        3            3
• Run the process with the highest priority. Processes with the same priority
run round-robin
• Time quantum = 2ms
• Gantt chart with time quantum = 2
Multilevel Queue
• With priority scheduling, have separate queues for each priority.
• Schedule the process in the highest-priority queue!
Multilevel Queue

• Prioritization based upon process type


Example of Multilevel Feedback Queue
• Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS
• Scheduling
• A new process enters queue Q0 which is
served RR
• When it gains CPU, the process receives 8
milliseconds
• If it does not finish in 8 milliseconds, it is
moved to queue Q1
• At Q1 the process is again served RR and
receives 16 additional milliseconds
• If it still does not complete, it is preempted
and moved to queue Q2
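Illustrative sketch (not from the original slides): the three-queue feedback
scheme above, simplified so that all processes are available at time 0 and new
arrivals do not preempt a running process. The function name mlfq and the
sample workload are assumptions for this example.

from collections import deque

def mlfq(processes, quanta=(8, 16)):
    """Q0 = RR with quantum 8, Q1 = RR with quantum 16, Q2 = FCFS.
    processes: list of (name, burst); returns the order in which slices run."""
    queues = [deque(processes), deque(), deque()]
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = remaining if level == 2 else min(quanta[level], remaining)
        trace.append((name, level, run))                     # who ran, where, how long
        remaining -= run
        if remaining > 0:
            queues[min(level + 1, 2)].append((name, remaining))  # unfinished: demote
    return trace

# Hypothetical workload: one long CPU-bound job and two shorter ones
print(mlfq([("A", 30), ("B", 6), ("C", 20)]))
# [('A', 0, 8), ('B', 0, 6), ('C', 0, 8), ('A', 1, 16), ('C', 1, 12), ('A', 2, 6)]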
System Model

• System consists of resources


• Resource types R1, R2, . . ., Rm
• CPU cycles, memory space, I/O devices
• Each resource type Ri has Wi instances.
• Each process utilizes a resource as follows:
• request
• use
• release
Necessary Conditions for Deadlock
Deadlock can arise if four conditions hold simultaneously.

• Mutual exclusion: only one process at a time can use a resource


• Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes
• No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0
is waiting for a resource that is held by P1, P1 is waiting for a resource that is
held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting
for a resource that is held by P0.
Resource-Allocation Graph

A set of vertices V and a set of edges E.

• V is partitioned into two types:


• P = {P1, P2, …, Pn}, the set consisting of all the processes in the system

• R = {R1, R2, …, Rm}, the set consisting of all resource types in the system

• request edge – directed edge Pi → Rj

• assignment edge – directed edge Rj → Pi


Resource Allocation Graph Example
• One instance of R1
• Two instances of R2
• One instance of R3
• Three instances of R4
• T1 holds one instance of R2 and is waiting
for an instance of R1
• T2 holds one instance of R1, one instance
of R2, and is waiting for an instance of R3
• T3 holds one instance of R3
Resource Allocation Graph with a Deadlock
Methods for Handling Deadlocks

• Ensure that the system will never enter a deadlock state:


• Deadlock prevention
• Deadlock avoidance
• Allow the system to enter a deadlock state and then recover
• Ignore the problem and pretend that deadlocks never occur in the
system.
Deadlock Prevention

Eliminate one of the four necessary conditions for deadlock:

• Mutual Exclusion – not required for sharable resources (e.g.,


read-only files); must hold for non-sharable resources
• Hold and Wait – must guarantee that whenever a process requests a
resource, it does not hold any other resources
• Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has
none allocated to it.
• Low resource utilization; starvation possible
Deadlock Prevention (Cont.)

• No Preemption:
• If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are released
• Preempted resources are added to the list of resources for which the process
is waiting
• Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting
• Circular Wait:
• Impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration
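Illustrative sketch (not from the original slides) of breaking circular wait by
ordering resources: every process acquires locks in increasing rank order. The
resource names, ranks and helper names are hypothetical.

import threading

RESOURCE_ORDER = {"disk": 1, "printer": 2, "scanner": 3}   # hypothetical total order
locks = {name: threading.Lock() for name in RESOURCE_ORDER}

def acquire_in_order(names):
    """Acquire all requested resources in increasing rank order, so two
    processes can never hold resources while waiting on each other in a cycle."""
    ordered = sorted(names, key=RESOURCE_ORDER.__getitem__)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    for name in reversed(names):
        locks[name].release()

held = acquire_in_order(["printer", "disk"])   # disk is always locked before printer
# ... use the resources ...
release_all(held)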
Deadlock Avoidance

Requires that the system has some additional a priori information


available

• Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
• The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-wait
condition
• Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
Safe State

• When a process requests an available resource, system must decide


if immediate allocation leaves the system in a safe state
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL
the processes in the system such that for each Pi, the resources
that Pi can still request can be satisfied by the currently available
resources plus the resources held by all Pj, with j < i
• That is:
• If Pi resource needs are not immediately available, then Pi can wait until all Pj
have finished
• When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate
• When Pi terminates, Pi +1 can obtain its needed resources, and so on
Basic Facts

• If a system is in safe state ⇒ no deadlocks

• If a system is in unsafe state ⇒ possibility of deadlock

• Avoidance ⇒ ensure that a system will never enter an unsafe state.


Safe, Unsafe, Deadlock State
Avoidance Algorithms

•Single instance of a resource type


•Use a modified resource-allocation graph

•Multiple instances of a resource type


• Use the Banker’s Algorithm
Modified Resource-Allocation Graph
Scheme
• Claim edge Pi --> Rj indicates that process Pi may request resource Rj
• Request edge Pi → Rj indicates that process Pi requests resource Rj
• Claim edge converts to request edge when a process requests a resource
• Assignment edge Rj → Pi indicates that resource Rj was allocated to
process Pi
• Request edge converts to an assignment edge when the resource is
allocated to the process
• When a resource is released by a process, assignment edge
reconverts to a claim edge
• Resources must be claimed a priori in the system
Resource-Allocation Graph
Resource-Allocation Graph Algorithm

• Suppose that process Pi requests a resource Rj


• The request can be granted only if converting the request edge to an
assignment edge does not result in the formation of a cycle in the
resource allocation graph
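Illustrative sketch (not from the original slides): cycle detection by depth-first
search on a single-instance resource-allocation graph. The function name
has_cycle and the sample graph are assumptions for this example.

def has_cycle(graph):
    """graph: adjacency dict over process and resource vertices, holding both
    request edges (Pi -> Rj) and assignment edges (Rj -> Pi)."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GREY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GREY:
                return True                      # back edge found: a cycle exists
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in list(graph))

# Hypothetical graph: P1 requests R1, R1 assigned to P2, P2 requests R2,
# R2 assigned to P1 -- granting P1's request would close a cycle.
g = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(g))   # True, so the request should not be granted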
Banker’s Algorithm

• Multiple instances of resources


• Each process must claim its maximum resource use a priori
• When a process requests a resource it may have to wait
• When a process gets all its resources it must return them in a finite
amount of time

• Watch the video: https://www.youtube.com/watch?v=bYFVbzLLxfY
Data Structures for the Banker’s
Algorithm
• Let n = number of processes, and m = number of resources types.
• Available: Vector of length m. If available [j] = k, there are k
instances of resource type Rj available

• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at


most k instances of resource type Rj

• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently


allocated k instances of Rj

• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more


instances of Rj to complete its task
Need [i,j] = Max[i,j] – Allocation [i,j]
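A small sketch (not from the original slides) of building the Need matrix from
Max and Allocation; the values are taken from the worked example later in this
unit, and the variable names are assumptions.

max_claim  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

# Need[i][j] = Max[i][j] - Allocation[i][j]
need = [[m - a for m, a in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(max_claim, allocation)]
print(need)   # [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]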
Safety Algorithm

1. Let Work and Finish be vectors of length m and n, respectively.


Initialize:
Work = Available
Finish [i] = false for i = 0, 1, …, n- 1

2. Find an i such that both:


(a) Finish [i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4

3. Work = Work + Allocationi


Finish[i] = true
go to step 2

4. If Finish [i] == true for all i, then the system is in a safe state
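Illustrative sketch (not from the original slides): the safety algorithm in
Python, applied to the snapshot from the worked example that follows. The
function name is_safe is an assumption; the sequence it prints is one valid
safe sequence, and the slides' <P1, P3, P4, P2, P0> is another.

def is_safe(available, allocation, need):
    """Safety algorithm sketch. Returns one safe sequence of process indices,
    or None if the state is unsafe."""
    work = list(available)                    # Step 1: Work = Available
    finish = [False] * len(allocation)        #         Finish[i] = false for all i
    sequence = []
    progress = True
    while progress:                           # Steps 2-3: repeat while some Pi can run
        progress = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # Pi returns resources
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None  # Step 4

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # [1, 3, 4, 0, 2]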
Resource-Request Algorithm for Process
Pi
Requesti = request vector for process Pi. If Requesti [j] = k then
process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi go to step 2. Otherwise, raise error condition,
since process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait,
since resources are not available
3. Pretend to allocate requested resources to Pi by modifying the
state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is
restored
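Illustrative sketch (not from the original slides): the resource-request
algorithm in Python, building on the is_safe sketch above. The function name
request_resources is an assumption; it commits the allocation only when the
pretended state passes the safety check.

def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm sketch for process Pi."""
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process has exceeded its maximum claim")   # step 1
    if any(r > a for r, a in zip(request, available)):
        return False                                                  # step 2: Pi waits
    # Step 3: pretend to allocate, then run the safety algorithm on the new state
    new_available  = [a - r for a, r in zip(available, request)]
    new_allocation = [row[:] for row in allocation]
    new_need       = [row[:] for row in need]
    new_allocation[i] = [a + r for a, r in zip(allocation[i], request)]
    new_need[i]       = [n - r for n, r in zip(need[i], request)]
    if is_safe(new_available, new_allocation, new_need) is None:
        return False                          # unsafe: keep the old state, Pi must wait
    available[:]     = new_available          # safe: commit the allocation
    allocation[i][:] = new_allocation[i]
    need[i][:]       = new_need[i]
    return True

# With the example state above, P1's request (1, 0, 2) is granted:
# request_resources(1, [1, 0, 2], available, allocation, need)  -> True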
Example of Banker’s Algorithm

• 5 processes P0 through P4; 3 resource types:

A (10 instances), B (5 instances), and C (7 instances)


• Snapshot at time T0:

            Allocation   Max      Available
            A B C        A B C    A B C
      P0    0 1 0        7 5 3    3 3 2
      P1    2 0 0        3 2 2
      P2    3 0 2        9 0 2
      P3    2 1 1        2 2 2
      P4    0 0 2        4 3 3
Example (Cont.)

• The content of the matrix Need is defined to be Max – Allocation

            Need
            A B C
      P0    7 4 3
      P1    1 2 2
      P2    6 0 0
      P3    0 1 1
      P4    4 3 1
Example (Cont.)

• Snapshot at time T0:

            Allocation   Max      Available   Need
            A B C        A B C    A B C       A B C
      P0    0 1 0        7 5 3    3 3 2       7 4 3
      P1    2 0 0        3 2 2                1 2 2
      P2    3 0 2        9 0 2                6 0 0
      P3    2 1 1        2 2 2                0 1 1
      P4    0 0 2        4 3 3                4 3 1

• The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies
the safety criteria
Example: P1 Request (1,0,2)
• Check that Request ≤ Available, that is, (1,0,2) ≤ (3,3,2) ⇒ true

            Allocation   Need     Available
            A B C        A B C    A B C
      P0    0 1 0        7 4 3    2 3 0
      P1    3 0 2        0 2 0
      P2    3 0 2        6 0 0
      P3    2 1 1        0 1 1
      P4    0 0 2        4 3 1

• Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2>
satisfies the safety requirement

• Can a request for (3,3,0) by P4 be granted?
• Can a request for (0,2,0) by P0 be granted?


THANK YOU
