Chapter 6: CPU Scheduling
Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Objectives
To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
To examine the scheduling algorithms of several operating systems
Basic Concepts
Maximum CPU utilization is obtained with multiprogramming
CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
Each CPU burst is followed by an I/O burst
The distribution of CPU-burst lengths is of main concern
Histogram of CPU-burst Times
CPU Scheduler
The short-term scheduler selects from among the processes in the ready queue and allocates the CPU to one of them
The queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
CPU Scheduler
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
Dispatcher
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
1. CPU utilization – keep the CPU as busy as possible
2. Throughput – # of processes that complete their execution per time unit
3. Turnaround time – amount of time to execute a particular process
4. Waiting time – amount of time a process has been waiting in the ready queue
5. Response time – amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environments)
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
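The waiting-time arithmetic above can be sketched in a few lines of Python (an illustrative helper, not from the text; the function name is chosen here):

```python
# FCFS: each process waits for the sum of the bursts ahead of it.
def fcfs_waiting_times(bursts):
    """Return the waiting time of each process, given bursts in arrival order."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)  # everything run so far is this process's wait
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])  # P1, P2, P3 in arrival order
print(waits)                            # [0, 24, 27]
print(sum(waits) / len(waits))          # 17.0
```

Reordering the input list to [3, 3, 24] (P2, P3, P1) reproduces the average of 3 on the next slide.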
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2, P3, P1
The Gantt chart for the schedule is:
P2 P3 P1
0 3 6 30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
Consider one CPU-bound and many I/O-bound processes
FCFS Scheduling (Cont.)
There is a convoy effect as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
Note also that the FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
Shortest-Job-First (SJF) Scheduling
When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
This algorithm therefore associates with each process the length of its next CPU burst
These lengths are used to schedule the process with the shortest time
SJF is optimal – it gives the minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
Example of SJF
Process  Arrival Time  Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
SJF scheduling chart
P4 P1 P3 P2
0 3 9 16 24
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
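The chart above schedules the shortest burst first among all four processes, which a short Python sketch can reproduce (an illustrative helper; the chart effectively treats all four bursts as available when scheduling begins):

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF with all processes available at time 0.

    bursts: {name: burst length}. Returns {name: waiting time},
    running the shortest next CPU burst first.
    """
    order = sorted(bursts, key=lambda name: bursts[name])
    waits, elapsed = {}, 0
    for name in order:
        waits[name] = elapsed
        elapsed += bursts[name]
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                     # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / 4)   # 7.0
```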
Determining Length of Next CPU Burst
The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
With short-term scheduling, there is no way to know the length of the next CPU burst, but we may be able to predict its value.
We expect that the next CPU burst will be similar in length to the previous ones. By computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.
This can be done by using the lengths of previous CPU bursts, with exponential averaging.
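Exponential averaging predicts the next burst as tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the measured length of the nth burst, tau(n) is its prediction, and alpha (commonly 1/2) weights recent history against the past. A minimal sketch (function name and the sample burst history are illustrative):

```python
def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0  # initial guess for the first burst
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Measured bursts 6, 4, 6, 4 with an initial guess of 10:
print(predict_next_burst([6, 4, 6, 4], tau0=10.0))  # 5.0
```

With alpha = 1/2, recent and past history are equally weighted; alpha = 1 would use only the most recent burst, and alpha = 0 would never update the initial guess.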
Shortest-remaining-time-first
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.
Now we add the concepts of varying arrival times and preemption to the analysis.
Process  Arrival Time  Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
P1 P2 P4 P1 P3
0 1 5 10 17 26
Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 msec
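The schedule above can be reproduced with a small tick-by-tick simulation (an illustrative sketch; the helper name and data layout are assumptions, not from the text):

```python
def srtf_waiting_times(procs):
    """Preemptive SJF (shortest remaining time first), simulated in 1 ms ticks.

    procs: {name: (arrival, burst)}. Returns {name: waiting time},
    where waiting = completion - arrival - burst.
    """
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:           # no process has arrived yet: idle one tick
            t += 1
            continue
        run = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[run] -= 1     # run for one tick, then re-evaluate (preemption)
        t += 1
        if remaining[run] == 0:
            finish[run] = t
            del remaining[run]
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

waits = srtf_waiting_times({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits)                    # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(waits.values()) / 4)  # 6.5
```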
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
Preemptive
Nonpreemptive
SJF is priority scheduling where the priority is the inverse of the predicted next CPU burst time, i.e., the larger the CPU burst, the lower the priority, and vice versa.
There is no general agreement on whether 0 is the highest or lowest priority.
We assume that low numbers represent high priority.
Example of Priority Scheduling
Process  Burst Time  Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt Chart
P2 P5 P1 P3 P4
0 1 6 16 18 19
Average waiting time = 8.2 msec
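The Gantt chart above follows from sorting by priority (a minimal sketch with all processes available at time 0; helper name and data layout are chosen here):

```python
def priority_waiting_times(procs):
    """Nonpreemptive priority scheduling, all processes at time 0.

    procs: {name: (burst, priority)}; a smaller number means higher priority.
    """
    order = sorted(procs, key=lambda n: procs[n][1])  # highest priority first
    waits, elapsed = {}, 0
    for name in order:
        waits[name] = elapsed
        elapsed += procs[name][0]
    return waits

waits = priority_waiting_times(
    {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)})
print(waits)                    # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / 5)  # 8.2
```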
Factors Affecting the Priority of a Process
Priorities can be defined either internally or externally.
Internally defined priorities use measurable quantities, for example:
1. Time limits
2. Memory requirements
3. The number of open files, etc.
External priorities are set by criteria outside the operating system, such as:
1. The importance of the process
2. The type and amount of funds being paid for computer use
3. Other, often political, factors
Major Problem with Priority Scheduling Algorithms
A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
A priority scheduling algorithm can leave some low-priority processes waiting indefinitely. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
Rumor has it that when they shut down the IBM 7094 at MIT in 1973, they found a low-priority process that had been submitted in 1967 and had not yet been run.
This needs a solution.
Aging
The solution to the indefinite blockage of low-priority processes is aging.
Aging involves gradually increasing the priority of processes that wait in the system for a long time.
For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes.
Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would be executed.
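The 127-to-0 example above can be sketched directly (an illustrative helper; the function name and input layout are assumptions):

```python
def age_priorities(priorities, minutes_waited, step=15):
    """Aging sketch: raise priority by 1 (toward 0) per 15 minutes waited.

    priorities: {name: priority} with 127 = lowest, 0 = highest.
    minutes_waited: {name: minutes spent waiting so far}.
    """
    return {name: max(0, prio - minutes_waited.get(name, 0) // step)
            for name, prio in priorities.items()}

# P1 has waited 45 minutes (3 aging steps), P2 only 15 (1 step):
print(age_priorities({"P1": 127, "P2": 60}, {"P1": 45, "P2": 15}))
# {'P1': 124, 'P2': 59}
```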
Round Robin (RR)
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes.
Each process gets a small unit of CPU time (a time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
To implement RR scheduling, we again treat the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
Round Robin (RR)
One of two things will then happen.
The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will release the CPU voluntarily.
If the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue.
Example of RR with Time Quantum = 4
Consider the following set of processes that arrive at time 0, with the length
of the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Typically, higher average turnaround than SJF, but better response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 usec
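The q = 4 example above can be checked with a small FIFO-queue simulation (an illustrative sketch; helper name and layout are chosen here):

```python
from collections import deque

def rr_waiting_times(procs, quantum):
    """Round robin with all processes in the ready queue at time 0.

    procs: {name: burst}. Returns {name: waiting time}.
    """
    remaining = dict(procs)
    queue = deque(procs)              # FIFO ready queue, in arrival order
    finish, t = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t          # burst shorter than q: releases CPU
        else:
            queue.append(name)        # preempted: back to the tail
    return {n: finish[n] - procs[n] for n in procs}

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits)  # {'P1': 6, 'P2': 4, 'P3': 7}
```

With a quantum of 30 or more, the same input reproduces the FCFS schedule, matching the observation that a very large quantum degenerates RR into FCFS.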
Facts about Round Robin (RR)
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row (unless it is the only runnable process).
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units.
Each process must wait no longer than (n − 1) × q time units until its next time quantum.
The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy. In contrast, if the time quantum is extremely small (say, 1 millisecond), the RR approach can result in a large number of context switches.
Time Quantum and Context Switch Time
Turnaround Time Varies With
The Time Quantum
Process Burst Time (time quantum q = 2)
P1 10
P2 10
P3 10
P1 P2 P3 P1 P2 P3 P1 P2 P3 P1 P2 P3 P1 P2 P3
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30
Waiting time P1 = 16 msec; turnaround time P1 = 16 + 10 = 26 msec
Waiting time P2 = 18 msec; turnaround time P2 = 18 + 10 = 28 msec
Waiting time P3 = 20 msec; turnaround time P3 = 20 + 10 = 30 msec
Average waiting time = 18 msec; average turnaround time = 28 msec
Turnaround Time Varies With
The Time Quantum
Process Burst Time (time quantum q = 10)
P1 10
P2 10
P3 10
P1 P2 P3
0 10 20 30
Waiting time P1 = 0 msec; turnaround time P1 = 0 + 10 = 10 msec
Waiting time P2 = 10 msec; turnaround time P2 = 10 + 10 = 20 msec
Waiting time P3 = 20 msec; turnaround time P3 = 20 + 10 = 30 msec
Average waiting time = 10 msec; average turnaround time = 20 msec
Turnaround Time Varies With
The Time Quantum
If context-switch time is added in, the average turnaround time increases even more for a smaller time quantum, since more context switches are required.
Although the time quantum should be large compared with the context-switch time, it should not be too large. As we pointed out earlier, if the time quantum is too large, RR scheduling degenerates to an FCFS policy.
Multilevel Queue
Ready queue is partitioned into separate queues, e.g.:
foreground (interactive)
background (batch)
Processes permanently reside in a given queue, generally based on some property of the process, such as memory size, process priority, or process type.
Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
Multilevel Queue Scheduling
Example: a multilevel queue scheduling algorithm with five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Multilevel Queue
Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way
The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0, which is served FCFS
When it gains the CPU, the job receives 8 milliseconds
If it does not finish in 8 milliseconds, the job is moved to queue Q1
At Q1 the job is again served FCFS and receives 16 additional milliseconds
If it still does not complete, it is preempted and moved to queue Q2
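The three-queue example can be sketched as a small simulation (an illustrative sketch under the assumption that all jobs are present at time 0; helper name and structure are chosen here, and within each queue jobs are served in FIFO order as the slide describes):

```python
from collections import deque

def mlfq_run(procs):
    """Sketch of the example: Q0 (quantum 8), Q1 (quantum 16), Q2 (FCFS).

    procs: {name: burst}, all arriving at time 0. Returns completion times.
    """
    quanta = [8, 16, None]                    # None = run to completion (FCFS)
    queues = [deque(procs), deque(), deque()] # new jobs start in Q0
    remaining, finish, t = dict(procs), {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name = queues[level].popleft()
        q = quanta[level]
        run = remaining[name] if q is None else min(q, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t
        else:
            queues[level + 1].append(name)    # used up its quantum: demote
    return finish

# A 5 ms job finishes in Q0; a 30 ms job uses 8 ms in Q0, 16 ms in Q1,
# and its last 6 ms in Q2:
print(mlfq_run({"A": 5, "B": 30}))  # {'A': 5, 'B': 35}
```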
Thread Scheduling
On operating systems that support threads, it is kernel-level threads, not processes, that are scheduled by the operating system.
User-level threads are managed by a thread library, and the kernel is unaware of them.
Homework
Write answers to exercise questions 6.1 to 6.27.
If the last digit of your roll number is odd, write answers to the odd-numbered questions; if it is even, write answers to the even-numbered questions.
Only handwritten, neat, and clean homework will be accepted.
Page size and page quality should remain the same throughout the course.
End of Chapter 6