Chapter 5: CPU Scheduling
Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne
Outline
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multi-Processor Scheduling
Objectives
Describe various CPU scheduling algorithms
Assess CPU scheduling algorithms based on scheduling criteria
Explain the issues related to multiprocessor and multicore scheduling
Basic Concepts
Maximum CPU utilization is obtained with multiprogramming
CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
A CPU burst is followed by an I/O burst
CPU burst distribution is of main concern
Histogram of CPU-burst Times
Large number of short bursts
Small number of longer bursts
CPU Scheduler
The CPU scheduler selects from among the processes in the ready queue and allocates a CPU core to one of them
• Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state (e.g., an I/O request)
2. Switches from the running to the ready state (e.g., an interrupt)
3. Switches from the waiting to the ready state (e.g., completion of I/O)
4. Terminates
For situations 1 and 4, there is no choice in terms of
scheduling. A new process (if one exists in the ready
queue) must be selected for execution.
For situations 2 and 3, however, there is a choice.
Preemptive and Nonpreemptive Scheduling
When scheduling takes place only under
circumstances 1 and 4, the scheduling scheme is
nonpreemptive.
Otherwise, it is preemptive.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
Virtually all modern operating systems including
Windows, MacOS, Linux, and UNIX use preemptive
scheduling algorithms.
Preemptive Scheduling and Race Conditions
Preemptive scheduling can result in race
conditions when data are shared among
several processes.
Consider the case of two processes that
share data. While one process is updating the
data, it is preempted so that the second
process can run. The second process then
tries to read the data, which are in an
inconsistent state.
Dispatcher
Dispatcher module gives control
of the CPU to the process
selected by the CPU scheduler;
this involves:
• Switching context
• Switching to user mode
• Jumping to the proper
location in the user program
to restart that program
Dispatch latency – time it takes for
the dispatcher to stop one
process and start another
running
Scheduling Criteria
CPU utilization – keep the CPU as busy as
possible
Throughput – # of processes that complete their
execution per time unit
Turnaround time – amount of time to execute a
particular process
Waiting time – amount of time a process has
been waiting in the ready queue
Response time – amount of time it takes from
when a request was submitted until the first
response is produced.
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt chart for the schedule is:
P1 (0–24) | P2 (24–27) | P3 (27–30)
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
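Since each process under FCFS simply waits for all the bursts ahead of it, the two averages on these slides can be checked with a short sketch (the function name is illustrative):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # waits for everything that arrived before it
        elapsed += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))           # [0, 24, 27]
print(sum(fcfs_waiting_times([24, 3, 3])) / 3)  # 17.0
print(sum(fcfs_waiting_times([3, 3, 24])) / 3)  # 3.0  (order P2, P3, P1)
```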
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:
P2 (0–3) | P3 (3–6) | P1 (6–30)
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
• Consider one CPU-bound and many I/O-bound
processes
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its
next CPU burst
• Use these lengths to schedule the process
with the shortest time
SJF is optimal – gives minimum average waiting
time for a given set of processes
Preemptive version called shortest-remaining-time-
first
How do we determine the length of the next CPU
burst?
• Could ask the user
• Estimate
Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
SJF scheduling chart:
P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24)
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
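A minimal sketch of nonpreemptive SJF for this example, assuming all processes arrive at time 0 (the function name is illustrative):

```python
def sjf_waiting_times(bursts):
    """Waiting times under nonpreemptive SJF; all processes arrive at t = 0."""
    order = sorted(bursts.items(), key=lambda kv: kv[1])  # shortest burst first
    waits, elapsed = {}, 0
    for name, burst in order:
        waits[name] = elapsed   # waits for all shorter jobs scheduled before it
        elapsed += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                    # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / 4)  # 7.0
```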
Determining Length of Next CPU Burst
Can only estimate the length – should be similar to the
previous one
• Then pick process with shortest predicted next CPU
burst
Can be done by using the length of previous CPU bursts, using exponential averaging:
τ_{n+1} = α t_n + (1 − α) τ_n
where t_n is the length of the nth CPU burst, τ_{n+1} is the predicted value for the next burst, and 0 ≤ α ≤ 1
Commonly, α is set to ½
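The recurrence τ_{n+1} = α·t_n + (1 − α)·τ_n can be sketched directly; the initial guess τ_0 = 10 and the burst sequence below are illustrative sample values:

```python
def predict_next_burst(observed, tau0=10.0, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0
    for t in observed:
        tau = alpha * t + (1 - alpha) * tau  # fold in each measured burst
    return tau

print(predict_next_burst([6]))                        # 8.0
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # 12.0
```

With α = ½ the prediction drifts toward the recent bursts but never forgets older history entirely.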
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
α = 0
• τ_{n+1} = τ_n
• Recent history does not count
α = 1
• τ_{n+1} = t_n
• Only the actual last CPU burst counts
If we expand the formula, we get:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
Shortest Remaining Time First Scheduling
Preemptive version of SJN
Whenever a new process arrives in the ready
queue, the decision on which process to
schedule next is redone using the SJN algorithm.
Is SRT more “optimal” than SJN in terms of the
minimum average waiting time for a given set of
processes?
Example of Shortest-remaining-time-first
Now we add the concepts of varying arrival times and
preemption to the analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Preemptive SJF Gantt chart:
P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26)
Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5
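The preemptive schedule above can be reproduced with a unit-time simulation; this is a sketch for checking the arithmetic, not an efficient scheduler:

```python
def srtf(procs):
    """Shortest-remaining-time-first. procs = {name: (arrival, burst)}.
    Returns each process's waiting time (completion - arrival - burst)."""
    remaining = {name: burst for name, (arr, burst) in procs.items()}
    waits, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1                                    # CPU idle, clock ticks
            continue
        run = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[run] -= 1                           # run for one time unit
        t += 1
        if remaining[run] == 0:
            del remaining[run]
            waits[run] = t - procs[run][0] - procs[run][1]
    return waits

waits = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits)                    # {'P2': 0, 'P4': 2, 'P1': 9, 'P3': 15}
print(sum(waits.values()) / 4)  # 6.5
```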
Round Robin (RR)
Each process gets a small unit of CPU time (time
quantum q), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and
added to the end of the ready queue.
If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once.
No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule next
process
Performance:
• q large ⇒ behaves the same as FCFS (FIFO)
• q small ⇒ frequent context switches; q must be large with respect to the context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process   Burst Time
P1        24
P2        3
P3        3
The Gantt chart is:
P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30)
Typically, higher average turnaround than SJF, but
better response
q should be large compared to context-switch time
• q is usually 10 to 100 milliseconds
• Context-switch time is typically < 10 microseconds
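The chart above can be checked with a short round-robin sketch, assuming all processes arrive at time 0 (the function name is illustrative):

```python
from collections import deque

def round_robin(bursts, q):
    """Round robin with time quantum q; all processes arrive at t = 0.
    Returns each process's completion time."""
    queue = deque(bursts.items())
    t, finish = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(q, left)                    # run for one quantum at most
        t += run
        if left - run > 0:
            queue.append((name, left - run))  # preempted: back of the queue
        else:
            finish[name] = t
    return finish

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4))
# {'P2': 7, 'P3': 10, 'P1': 30}
```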
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
80% of CPU bursts should be shorter than q
Priority Scheduling
A priority number (integer) is associated with each
process
The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
• Preemptive
• Nonpreemptive
SJF is priority scheduling where priority is the inverse of
predicted next CPU burst time
Problem ≡ Starvation – low-priority processes may never execute
Solution ≡ Aging – as time progresses, increase the priority of the process
Example of Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart:
P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19)
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
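The 8.2 average can be checked with a sketch of nonpreemptive priority scheduling (smaller number = higher priority; all processes assumed to arrive at time 0):

```python
def priority_schedule(procs):
    """Nonpreemptive priority scheduling. procs = {name: (burst, priority)};
    a smaller priority number means higher priority."""
    order = sorted(procs, key=lambda n: procs[n][1])  # highest priority first
    waits, t = {}, 0
    for name in order:
        waits[name] = t
        t += procs[name][0]
    return waits

waits = priority_schedule({"P1": (10, 3), "P2": (1, 1),
                           "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)})
print(waits)                    # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / 5)  # 8.2
```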
Priority Scheduling w/ Round-Robin
Run the process with the highest priority. Processes
with the same priority run round-robin
Example:
Process   Burst Time   Priority
P1        4            3
P2        5            2
P3        8            2
P4        7            1
P5        3            3
Gantt chart with time quantum = 2:
P4 (0–7) | P2 (7–9) | P3 (9–11) | P2 (11–13) | P3 (13–15) | P2 (15–16) | P3 (16–18) | P3 (18–20) | P1 (20–22) | P5 (22–24) | P1 (24–26) | P5 (26–27)
Multilevel Queue
The ready queue consists of multiple queues
Multilevel queue scheduler defined by the following
parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine which queue a
process will enter when that process needs
service
• Scheduling among the queues
Multilevel Queue
With priority scheduling, have separate queues for each
priority.
Schedule the process in the highest-priority queue!
Multilevel Queue
Prioritization based upon process type
Multilevel Feedback Queue
A process can move between the various queues.
Multilevel-feedback-queue scheduler defined by the
following parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine when to upgrade a
process
• Method used to determine when to demote a
process
• Method used to determine which queue a process
will enter when that process needs service
Aging can be implemented using multilevel feedback
queue
Example of Multilevel Feedback Queue
Three queues:
• Q0 – RR with time quantum 8
milliseconds
• Q1 – RR time quantum 16
milliseconds
• Q2 – FCFS
Scheduling
• A new process enters queue Q0
which is served in RR
When it gains CPU, the process
receives 8 milliseconds
If it does not finish in 8
milliseconds, the process is moved
to queue Q1
• At Q1 job is again served in RR
and receives 16 additional
milliseconds
If it still does not complete, it is
preempted and moved to queue Q2
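The three-queue example can be sketched as follows. This simplified model assumes all processes arrive at time 0 and ignores preemption by later arrivals; the function name is illustrative:

```python
from collections import deque

def mlfq(bursts):
    """Three-queue MLFQ: Q0 is RR with q=8, Q1 is RR with q=16, Q2 is FCFS.
    New processes enter Q0 and are demoted when their quantum expires."""
    quanta = [8, 16, None]                    # None = run to completion (FCFS)
    queues = [deque(bursts.items()), deque(), deque()]
    t, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, left = queues[level].popleft()
        run = left if quanta[level] is None else min(quanta[level], left)
        t += run
        if left - run > 0:
            queues[min(level + 1, 2)].append((name, left - run))  # demote
        else:
            finish[name] = t
    return finish

# A 5 ms job finishes in Q0; a 30 ms job uses 8 ms in Q0, 16 ms in Q1,
# and its final 6 ms in Q2:
print(mlfq({"A": 5, "B": 30}))   # {'A': 5, 'B': 35}
```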
Motivation- Threads
Most modern applications are multithreaded
Threads run within application
Multiple tasks within the application can be implemented by separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
Process creation is heavy-weight while thread
creation is light-weight
Can simplify code, increase efficiency
Kernels are generally multithreaded
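As an illustration of the point above, separate tasks of one application can each run as a thread; Python's threading module is shown, and the task names are made up:

```python
import threading

results = []

def fetch_data():
    results.append("data fetched")        # e.g., answering a network request

def spell_check():
    results.append("document checked")    # e.g., background spell checking

# Two tasks of one application, each implemented as a separate thread:
tasks = [threading.Thread(target=fetch_data),
         threading.Thread(target=spell_check)]
for t in tasks:
    t.start()        # threads are scheduled independently by the OS
for t in tasks:
    t.join()         # wait for both tasks to finish
print(sorted(results))   # ['data fetched', 'document checked']
```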
Single and Multithreaded Processes
Benefits
Responsiveness – may allow continued execution if
part of process is blocked, especially important for
user interfaces
Resource Sharing – threads share resources of
process, easier than shared memory or message
passing
Economy – cheaper than process creation, thread
switching lower overhead than context switching
Scalability – process can take advantage of
multicore architectures
Multicore Programming
Multicore or multiprocessor systems put pressure on programmers; challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
Parallelism implies a system can perform more than one
task simultaneously
Concurrency supports more than one task making
progress
• Single processor / core, scheduler providing
concurrency
Concurrency vs. Parallelism
Concurrent execution on single-core system:
Parallelism on a multi-core system:
Multicore Programming
Types of parallelism
• Data parallelism – distributes subsets of the same
data across multiple cores, same operation on
each
• Task parallelism – distributing threads across
cores, each thread performing unique operation
Data and Task Parallelism
Amdahl’s Law
Identifies performance gains from adding additional cores
to an application that has both serial and parallel
components
speedup ≤ 1 / (S + (1 − S)/N)
where S is the serial portion and N is the number of processing cores
That is, if application is 75% parallel / 25% serial, moving
from 1 to 2 cores results in speedup of 1.6 times
As N approaches infinity, speedup approaches 1 / S
Serial portion of an application has disproportionate
effect on performance gained by adding additional cores
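The bound 1 / (S + (1 − S)/N) can be checked numerically; this is a sketch of the formula, not a measurement:

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup from Amdahl's Law: 1 / (S + (1 - S)/N)."""
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

# 75% parallel / 25% serial, moving from 1 to 2 cores:
print(amdahl_speedup(0.25, 2))       # 1.6
# As N grows, the speedup approaches 1/S = 4:
print(amdahl_speedup(0.25, 10**6))
```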
User Threads and Kernel Threads
User threads - management done by user-level threads
library
Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
Kernel threads - Supported by the Kernel
Examples – virtually all general-purpose operating
systems, including:
• Windows
• Linux
• Mac OS X
• iOS
• Android
User and Kernel Threads
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on multicore
system because only one may be in kernel at a time
Few systems currently use this model
Examples:
• Solaris Green Threads
• GNU Portable Threads
One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted
due to overhead
Examples
• Windows
• Linux
Many-to-Many Model
Allows many user level threads to be mapped to many
kernel threads
Allows the operating system to create a sufficient
number of kernel threads
Windows with the ThreadFiber package
Otherwise not very common
Two-level Model
Similar to M:M, except that it allows a user thread to be bound to a kernel thread
Multiple-Processor Scheduling
CPU scheduling more complex when multiple CPUs are
available
The multiprocessor may be any one of the following architectures:
• Multicore CPUs
• Multithreaded cores
• NUMA systems
• Heterogeneous multiprocessing
Multiple-Processor Scheduling
Symmetric multiprocessing (SMP) is where each
processor is self scheduling.
All threads may be in a common ready queue (a)
Each processor may have its own private queue of
threads (b)
Multicore Processors
Recent trend to place multiple processor cores on same
physical chip
Faster and consumes less power
Multiple threads per core also growing
• Takes advantage of memory stall to make progress
on another thread while memory retrieve happens
Multithreaded Multicore System
Each core has more than one hardware thread.
If one thread has a memory stall, switch to another
thread!
Multithreaded Multicore System
Chip-multithreading
(CMT) assigns each core
multiple hardware
threads. (Intel refers to
this as hyperthreading.)
On a quad-core system
with 2 hardware threads
per core, the operating
system sees 8 logical
processors.
Multithreaded Multicore System
Two levels of
scheduling:
1. The operating
system deciding
which software
thread to run on a
logical CPU
2. How each core
decides which
hardware thread
to run on the
physical core.
Multiple-Processor Scheduling – Load Balancing
If SMP, need to keep all CPUs loaded for efficiency
Load balancing attempts to keep workload evenly
distributed
Push migration – a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs
Pull migration – an idle processor pulls a waiting task from a busy processor
Multiple-Processor Scheduling – Processor Affinity
When a thread has been running on one processor, that processor's cache stores the memory accesses made by the thread.
We refer to this as a thread having affinity for a processor (i.e., "processor affinity").
Load balancing may work against processor affinity: a thread may be moved from one processor to another to balance loads, but the migrated thread then loses the contents it had built up in the cache of the processor it was moved off of.
Soft affinity – the operating system attempts to keep a
thread running on the same processor, but no
guarantees.
Hard affinity – allows a process to specify a set of
processors it may run on.
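On Linux, hard affinity can be requested from user space; a sketch using Python's os.sched_setaffinity, which is Linux-only (hence the guard):

```python
import os

# Hard affinity on Linux: restrict this process to an explicit set of cores.
if hasattr(os, "sched_setaffinity"):
    allowed = os.sched_getaffinity(0)   # pid 0 means the calling process
    one_core = {min(allowed)}           # pick one core we are permitted to use
    os.sched_setaffinity(0, one_core)   # hard-pin: scheduler keeps us there
    print(os.sched_getaffinity(0))      # now just the single pinned core
    os.sched_setaffinity(0, allowed)    # restore the original mask
```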
NUMA and CPU Scheduling
If the operating system is NUMA-aware, it will assign memory close to the CPU on which the thread is running.