Unit II – CPU Scheduling
Outline
Scheduling Objectives
Levels of Scheduling
Scheduling Criteria
Scheduling Algorithms
FCFS, Shortest Job First, Priority, Round Robin, Multilevel Queue
Multiple Processor Scheduling
Real-time Scheduling
Algorithm Evaluation
Scheduling Objectives
Enforcement of fairness
in allocating resources to processes
Enforcement of priorities
Make best use of available system resources
Give preference to processes holding key
resources.
Give preference to processes exhibiting good
behavior.
Degrade gracefully under heavy loads.
Program Behavior Issues
I/O boundedness
short burst of CPU before blocking for I/O
CPU boundedness
extensive use of CPU before blocking for I/O
Urgency and Priorities
Frequency of preemption
Process execution time
Time sharing
amount of execution time process has already received.
Basic Concepts
Maximum CPU utilization obtained with
multiprogramming.
CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU execution
and I/O wait.
CPU Burst Distribution
Levels of Scheduling
High Level Scheduling or Job Scheduling
Selects jobs allowed to compete for CPU and other
system resources.
Intermediate Level Scheduling or Medium
Term Scheduling
Selects which jobs to temporarily suspend/resume to
smooth fluctuations in system load.
Low Level (CPU) Scheduling or Dispatching
Selects the ready process that will be assigned the CPU.
Ready Queue contains PCBs of processes.
Levels of Scheduling (cont.)
CPU Scheduler
Selects from among the processes in
memory that are ready to execute, and
allocates the CPU to one of them.
Non-preemptive Scheduling
Once CPU has been allocated to a process, the process
keeps the CPU until
Process exits OR
Process switches to waiting state
Preemptive Scheduling
Process can be interrupted and must release the CPU.
Need to coordinate access to shared data
CPU Scheduling Decisions
CPU scheduling decisions may take place
when a process:
1. switches from running state to waiting state
2. switches from running state to ready state
3. switches from waiting state to ready state
4. terminates
Scheduling under 1 and 4 is non-preemptive.
All other scheduling is preemptive.
CPU Scheduling Decisions
[Process state diagram: new -(admitted)-> ready -(scheduler dispatch)-> running -(exit)-> terminated; running -(I/O or event wait)-> waiting -(I/O or event completion)-> ready; running -(interrupt)-> ready]
Dispatcher
Dispatcher module gives control of the CPU
to the process selected by the short-term
scheduler. This involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart
that program
Dispatch Latency:
time it takes for the dispatcher to stop one process and
start another running.
Dispatcher must be fast.
Scheduling Criteria
CPU Utilization
Keep the CPU and other resources as busy as possible
Throughput
# of processes that complete their execution per time
unit.
Turnaround time
total time from submission of a process until its completion.
Scheduling Criteria (cont.)
Waiting time
amount of time a process has been waiting in the ready
queue.
Response Time (in a time-sharing
environment)
amount of time it takes from when a request was
submitted until the first response is produced, NOT
output.
Optimization Criteria
Max CPU Utilization
Max Throughput
Min Turnaround time
Min Waiting time
Min response time
First Come First Serve (FCFS)
Scheduling
Policy: Process that requests the CPU FIRST
is allocated the CPU FIRST.
FCFS is a non-preemptive algorithm.
Implementation - using FIFO queues
incoming process is added to the tail of the queue.
Process selected for execution is taken from head of
queue.
Performance metric - Average waiting time in
queue.
Gantt Charts are used to visualize schedules.
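As a minimal sketch (not from the original slides), the following C program computes FCFS waiting times for the three-process example on the next slide; the burst times and arrival order are taken from that example, and all processes are assumed to arrive at time 0.

#include <stdio.h>

/* Minimal FCFS sketch: processes run in arrival order, so the waiting time
   of each process is the sum of the burst times of all earlier processes.
   Burst times follow the example below (P1, P2, P3 arriving in that order). */
int main(void) {
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];   /* every later process also waits for this burst */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Running the same sketch with the order P2, P3, P1 (burst array {3, 3, 24}) reproduces the second example's average of 3.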
First-Come, First-Served (FCFS) Scheduling
Example: Suppose the processes arrive in the order P1, P2, P3.
Process   Burst Time
P1        24
P2        3
P3        3
Gantt Chart for Schedule:
| P1          | P2  | P3  |
0             24    27    30
Waiting time: P1 = 0; P2 = 24; P3 = 27
Average waiting time = (0 + 24 + 27)/3 = 17
FCFS Scheduling (cont.)
Example: Suppose the processes arrive in the order P2, P3, P1.
Process   Burst Time
P1        24
P2        3
P3        3
Gantt Chart for Schedule:
| P2  | P3  | P1          |
0     3     6             30
Waiting time: P1 = 6; P2 = 0; P3 = 3
Average waiting time = (6 + 0 + 3)/3 = 3, much better.
Convoy Effect:
short processes stuck behind a long process, e.g. one CPU-bound process followed by many I/O-bound processes.
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next
CPU burst. Use these lengths to schedule the
process with the shortest time.
Two Schemes:
Scheme 1: Non-preemptive
Once CPU is given to the process it cannot be preempted
until it completes its CPU burst.
Scheme 2: Preemptive
If a new process arrives with a CPU burst length less than the
remaining time of the currently executing process, preempt.
Also called Shortest-Remaining-Time-First (SRTF).
SJF is optimal - gives minimum average waiting time for
a given set of processes.
Non-Preemptive SJF Scheduling
Example
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4
Gantt Chart for Schedule
P1 P3 P2 P4
0 7 8 12 16
Average waiting time =
(0+6+3+7)/4 = 4
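A minimal C sketch of non-preemptive SJF for this workload (not from the original slides): at every scheduling point it picks the arrived, unfinished process with the shortest burst; breaking ties by process index is an assumption.

#include <stdio.h>

/* Non-preemptive SJF sketch for the example above: at each scheduling point,
   pick the arrived, unfinished process with the shortest burst. */
int main(void) {
    int arrival[] = {0, 2, 4, 5}, burst[] = {7, 4, 1, 4};
    int n = 4, done[4] = {0}, finished = 0, time = 0;
    double total_wait = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { time++; continue; }   /* CPU idle until next arrival */
        total_wait += time - arrival[pick];   /* time spent in ready queue */
        printf("P%d runs %d-%d\n", pick + 1, time, time + burst[pick]);
        time += burst[pick];
        done[pick] = 1;
        finished++;
    }
    printf("Average waiting time = %.2f\n", total_wait / n);
    return 0;
}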
Preemptive SJF Scheduling (SRTF)
Example
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4
Gantt Chart for Schedule
P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16
Average waiting time =
(9+1+0+2)/4 = 3
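A minimal C sketch of SRTF for the same workload (not from the original slides); it simulates one time unit at a time, which is a simplification chosen for clarity rather than efficiency.

#include <stdio.h>

/* Preemptive SJF (SRTF) sketch for the example above: every time unit,
   run the arrived process with the least remaining time. */
int main(void) {
    int arrival[] = {0, 2, 4, 5}, burst[] = {7, 4, 1, 4};
    int n = 4, remaining[4], finished = 0;
    double total_wait = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    for (int time = 0; finished < n; time++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= time && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) continue;               /* CPU idle this time unit */
        if (--remaining[pick] == 0) {
            finished++;
            /* waiting time = completion - arrival - burst */
            total_wait += (time + 1) - arrival[pick] - burst[pick];
        }
    }
    printf("Average waiting time = %.2f\n", total_wait / n);
    return 0;
}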
Determining Length of Next CPU
Burst
One can only estimate the length of burst.
Use the length of previous CPU bursts and
perform exponential averaging.
t_n = actual length of the nth CPU burst
τ_{n+1} = predicted value for the next CPU burst
α, where 0 ≤ α ≤ 1
Define:
τ_{n+1} = α t_n + (1 − α) τ_n
Exponential Averaging (cont.)
α = 0
τ_{n+1} = τ_n; recent history does not count.
α = 1
τ_{n+1} = t_n; only the actual last CPU burst counts.
Expanding the formula:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^(n+1) τ_0
Each successive term has less weight than its predecessor.
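A minimal C sketch of the exponential-averaging predictor; α = 0.5, the initial guess τ_0 = 10, and the burst history are illustrative values, not from the slides.

#include <stdio.h>

/* Exponential averaging sketch: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
   alpha, tau_0, and the burst history below are illustrative values. */
int main(void) {
    double alpha = 0.5, tau = 10.0;            /* tau_0: initial prediction */
    double actual[] = {6, 4, 6, 4, 13, 13, 13};
    int m = sizeof actual / sizeof actual[0];

    for (int n = 0; n < m; n++) {
        printf("predicted %.2f, actual %.0f\n", tau, actual[n]);
        tau = alpha * actual[n] + (1 - alpha) * tau;   /* next prediction */
    }
    return 0;
}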
Priority Scheduling
A priority value (integer) is associated with
each process. Can be based on
Cost to user
Importance to user
Aging
%CPU time used in last X hours.
CPU is allocated to process with the highest
priority.
Preemptive
Nonpreemptive
Priority Scheduling (cont.)
SJF is a priority scheme in which the priority is the predicted
next CPU burst time (shorter burst = higher priority).
Problem
Starvation!! - Low priority processes may never execute.
Solution
Aging - as time progresses increase the priority of the
process.
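A minimal C sketch of non-preemptive priority scheduling with a simple aging rule (not from the slides); the priorities, burst lengths, and AGE_STEP value are illustrative assumptions, all processes are assumed to arrive at time 0, and a lower number means a higher priority.

#include <stdio.h>

/* Non-preemptive priority scheduling sketch with simple aging.
   Lower number = higher priority; a waiting process's effective priority
   improves by one for every AGE_STEP time units spent waiting.
   Burst and priority values are illustrative. */
#define AGE_STEP 10

int main(void) {
    int burst[]    = {10, 1, 2, 1, 5};
    int priority[] = { 3, 1, 4, 5, 2};
    int n = 5, done[5] = {0}, waited[5] = {0}, time = 0;

    for (int scheduled = 0; scheduled < n; scheduled++) {
        int pick = -1, pick_eff = 0;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            int eff = priority[i] - waited[i] / AGE_STEP;  /* aged priority */
            if (pick < 0 || eff < pick_eff) { pick = i; pick_eff = eff; }
        }
        printf("P%d (priority %d) runs %d-%d\n",
               pick + 1, priority[pick], time, time + burst[pick]);
        for (int i = 0; i < n; i++)               /* everyone else ages */
            if (!done[i] && i != pick) waited[i] += burst[pick];
        time += burst[pick];
        done[pick] = 1;
    }
    return 0;
}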
Round Robin (RR)
Each process gets a small unit of CPU time
Time quantum usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and
added to the end of the ready queue.
n processes, time quantum = q
Each process gets 1/n of the CPU time, in chunks of at most q
time units at a time.
No process waits more than (n-1)q time units.
Performance
Time slice q too large - FIFO behavior
Time slice q too small - Overhead of context switch is
too expensive.
Heuristic - choose q so that 70-80% of CPU bursts finish (block for I/O) within one time slice.
Round Robin Example
Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
Gantt Chart for Schedule
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3
0 20 37 57 77 97 117 121 134 154 162
Typically, higher average turnaround time than SRTF, but better response time.
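A minimal C sketch that reproduces the schedule above; because all four processes arrive at time 0, a simple circular scan over the unfinished processes yields the same order as a FIFO ready queue with re-queueing at the tail.

#include <stdio.h>

/* Round-robin sketch for the example above (quantum = 20): a circular scan
   over the processes, each getting at most one quantum per turn. */
int main(void) {
    int burst[] = {53, 17, 68, 24}, n = 4, q = 20;
    int remaining[4], left = n, time = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            printf("P%d runs %d-%d\n", i + 1, time, time + slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) left--;
        }
    }
    return 0;
}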
Multilevel Queue
Ready Queue partitioned into separate queues
Example: system processes, foreground (interactive), background
(batch), student processes….
Each queue has its own scheduling algorithm
Example: foreground (RR), background(FCFS)
Processes assigned to one queue permanently.
Scheduling must be done between the queues
Fixed priority - serve all from foreground, then from background.
Possibility of starvation.
Time slice - Each queue gets some CPU time that it schedules -
e.g. 80% foreground(RR), 20% background (FCFS)
Multilevel Queues
Multilevel Feedback Queue
Multilevel Queue with priorities
A process can move between the queues.
Aging can be implemented this way.
Parameters for a multilevel feedback queue
scheduler:
number of queues.
scheduling algorithm for each queue.
method used to determine when to upgrade a process.
method used to determine when to demote a process.
method used to determine which queue a process will enter
when that process needs service.
Multilevel Feedback Queues
Example: Three Queues -
Q0 - time quantum 8 milliseconds (RR)
Q1 - time quantum 16 milliseconds (RR)
Q2 - FCFS
Scheduling
New job enters Q0 - When it gains CPU, it receives 8
milliseconds. If job does not finish, move it to Q1.
At Q1, when job gains CPU, it receives 16 more
milliseconds. If job does not complete, it is preempted and
moved to queue Q2.
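A minimal C sketch of the three-queue example (burst lengths are illustrative); all jobs are assumed to arrive at time 0, so the sketch simply drains Q0, then Q1, then Q2. A real scheduler would also let new arrivals in Q0 preempt work running in Q1 and Q2.

#include <stdio.h>

/* Three-level feedback queue sketch matching the example above:
   Q0 (RR, q=8) -> Q1 (RR, q=16) -> Q2 (FCFS).
   Burst lengths are illustrative; all jobs arrive at time 0. */
#define N 3
int main(void) {
    int burst[N] = {5, 30, 60}, level[N] = {0}, remaining[N];
    int quantum[3] = {8, 16, 1 << 30};   /* "infinite" quantum models FCFS */
    int time = 0, left = N;
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    for (int lvl = 0; lvl < 3 && left > 0; ) {
        int ran = 0;
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0 || level[i] != lvl) continue;
            int slice = remaining[i] < quantum[lvl] ? remaining[i] : quantum[lvl];
            printf("P%d runs %d-%d in Q%d\n", i + 1, time, time + slice, lvl);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) left--;
            else level[i] = lvl + 1;      /* used full quantum: demote */
            ran = 1;
        }
        if (!ran) lvl++;                  /* this level is empty, go lower */
    }
    return 0;
}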
Multilevel Feedback Queues
Multiple-Processor Scheduling
CPU scheduling becomes more complex
when multiple CPUs are available.
Have one ready queue accessed by each CPU.
Self scheduled - each CPU dispatches a job from ready Q
Master-Slave - one CPU schedules the other CPUs
Homogeneous processors within
multiprocessor.
Permits Load Sharing
Asymmetric multiprocessing
only 1 CPU runs kernel, others run user programs
alleviates need for data sharing
Real-Time Scheduling
Hard Real-time Computing -
required to complete a critical task within a guaranteed
amount of time.
Soft Real-time Computing -
requires that critical processes receive priority over less
fortunate ones.
Types of real-time Schedulers
Periodic Schedulers - Fixed Arrival Rate
Demand-Driven Schedulers - Variable Arrival Rate
Deadline Schedulers - Priority determined by deadline
…..
Issues in Real-time Scheduling
Dispatch Latency
Problem - need to keep dispatch latency small, but the OS may force the
newly ready process to wait until a system call or I/O completes before the switch can occur.
Solution - Make system calls preemptible, determine safe
criteria such that kernel can be interrupted.
Priority Inversion and Inheritance
Problem: Priority Inversion
A higher-priority process needs a kernel resource currently held by a
lower-priority process, so the higher-priority process must wait.
Solution: Priority Inheritance
Low priority process now inherits high priority until it has
completed use of the resource in question.
Real-time Scheduling - Dispatch
Latency
Algorithm Evaluation
Deterministic Modeling
Takes a particular predetermined workload and defines the
performance of each algorithm for that workload. Too specific,
requires exact knowledge to be useful.
Queuing Models and Queuing Theory
Use distributions of CPU and I/O bursts. Knowing arrival and
service rates - can compute utilization, average queue length,
average wait time etc…
Little’s formula - n = λW, where n is the average queue length,
λ is the average arrival rate, and W is the average waiting time in the queue.
Other techniques: Simulations, Implementation
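Worked example (illustrative numbers): if processes arrive at an average rate of λ = 7 per second and the average waiting time in the queue is W = 2 seconds, Little's formula gives an average queue length of n = λW = 14 processes.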