
Contents

1. Program and process


2. Process state diagram
3. Operations of process
4. Interprocess Communication
5. Concurrency
6. Multithreading
7. Process scheduling

2
Process
A process is a program in execution

Process execution must progress in a sequential fashion

A program is a passive entity stored on disk (an executable file); a process is an active entity

A program becomes a process when its executable file is loaded into memory

An executable program can be loaded into memory

1. By a mouse click
2. From the command line

One program can be several processes

- Consider a program being executed by multiple users

3
Process

Processor RAM HDD

● A program is written using a programming language like Java, Python, C, etc.


● The program is also referred to as source code
● After compiling the source code, an executable (e.g., byte code or machine code) is created
● The executable is initially placed on disk (HDD/SSD)

4
Process

Processor RAM HDD

● When the user runs the executable file via the mouse or the command-line interface,
the executable and the data related to the program are brought into RAM
● Memory gets allocated to the code and data in RAM

5
Process

Processor RAM HDD

● The processor reads data and instructions from memory and performs
computations.
● Reads/writes happen on main memory.

6
Process

Processor RAM HDD

● After the program finishes execution, its code and the data related to it are
removed from memory
● The executable remains on HDD/SSD

7
A C program execution
C program (source code on HDD - a passive entity) → compiled into an executable →
executable and its data loaded into RAM → executed by the processor as a process (an active entity)

8
Process structure
A process is a program under execution
A process is more than the program code; it also includes
● Process ID
● program counter
● registers
● state
● open file pointers
● children
● current directory, etc.

Example program:

#include <stdio.h>

int a, b, c, result;

int call_func(int a, int b)
{
    int x = a * b;
    int y = a - b;
    return (y + x);
}

int main()
{
    scanf("%d %d %d", &a, &b, &c);
    result = call_func(a, b);
    printf("%d\n", result);
    return 0;
}
9
Process in Memory
The process address space (from the maximum address down to 0) is laid out as:

Max →  Stack
       Heap
       Data
0   →  Text

● The process stack contains temporary data such as function parameters, return addresses, and local variables
● The heap is memory that is dynamically allocated during process runtime
● The data section contains global variables
● The program code is stored in the text section
● The program counter register keeps track of the current instruction being executed, and the processor registers hold the variable values

(The example program from the previous slide is shown mapped onto these regions.)
10
Memory layout of a C program
Memory regions (high to low address): argc/argv, stack, heap, uninitialized data, initialized data, text

#include <stdio.h>
#include <stdlib.h>

int x;          /* uninitialized data segment */
int y = 15;     /* initialized data segment   */

int main(int argc, char *argv[])    /* argc, argv stored above the stack */
{
    int i;                          /* stack                             */
    int *values;                    /* stack (the pointer itself)        */

    values = (int *)malloc(sizeof(int) * 5);    /* allocated on the heap */
    for (i = 0; i < 5; i++)
    {
        values[i] = i;
    }
    free(values);
    return 0;
}
11
Process states

New - The process is being created

Ready - The process is waiting to be assigned to a processor

Running - Instructions are being executed

Waiting - The process is waiting for some event to occur

Terminated - The process has completed its execution


12
Process state diagram
New --(admitted)--> Ready --(scheduler dispatch)--> Running --(exit)--> Terminated
Running --(interrupt)--> Ready
Running --(I/O or event wait)--> Waiting --(I/O or event completion)--> Ready
13
Process state diagram
New

● When the user clicks on or runs the executable, the program's code and data are
copied from disk into main memory after memory gets allocated
● The process has just been created and is not yet assigned any CPU resources

14
Process state diagram
Admitted
New

Ready

● After allocation of memory, the process is ready to execute


● The process is placed into the ready queue by the long-term scheduler
● The process gets a chance at the CPU based on the short-term scheduler's decision
● The process may not get the CPU for execution immediately, as another process may be
running on the CPU 15
Process state diagram
Admitted
New

Ready Running

Scheduler dispatch

● When the (short-term) scheduler decides to execute the process, the process


moves to the running state
● In the running state the process has the CPU and is executing
16
Process state diagram
Admitted
New

Ready Running

Scheduler dispatch
IO or event wait
Waiting

● During execution, the process may need an I/O operation, such as


reading input from the user's keyboard
● Users are much slower than the processor
● The process should be kept in the waiting state so that other processes get a
chance at the CPU
17
Process state diagram
Admitted
New

Ready Running

Scheduler dispatch
IO event
IO or event wait
completion
Waiting

● On completion of the I/O, the process is again placed in the ready state


● The short-term scheduler then decides when to move the process into the running
state again.
18
Process state diagram
Admitted Interrupt
New

Ready Running

Scheduler dispatch
IO event
IO or event wait
completion
Waiting

● Sometimes the process is given a time quantum to execute on the CPU


● After the time quantum expires, the process gets interrupted and is moved
from the running state back to the ready state
19
Process state diagram
Admitted Interrupt
New Terminated

Ready Running
Exit

Scheduler dispatch
IO event
IO or event wait
completion
Waiting

● After completing its execution, the process moves to the terminated state

20
Process Control Block (PCB)
● Every process has Process Control Block (PCB) and Information
associated with a process is stored in PCB
1. Process state - Ready, running, terminated, etc.
2. Process ID - Unique identification number assigned when process gets
created.
3. Program counter - Location of the next instruction to be executed
4. CPU registers - Contents of all process centric registers.
5. CPU scheduling information - Priorities, scheduling queue pointers, etc.
6. Memory management information - Memory allocation to process in
code, data, stack and heap segment and their limits.
7. Accounting information - CPU used, clock time elapsed since start, time
limits.
8. I/O status information - I/O devices allocated to process, list of open
files, etc.
21
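To make these fields concrete, here is a minimal sketch of how a PCB could be declared in C. The structure and field names are illustrative only, chosen to mirror the list above; a real kernel uses its own definition (Linux's task_struct is shown on the next slide).

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;              /* unique process ID                      */
    enum proc_state  state;            /* ready, running, ...                    */
    unsigned long    program_counter;  /* location of next instruction           */
    unsigned long    registers[16];    /* saved contents of CPU registers        */
    int              priority;         /* CPU scheduling information             */
    void            *mem_base;         /* memory management info: base and       */
    unsigned long    mem_limit;        /*   limit of the allocated segments      */
    unsigned long    cpu_time_used;    /* accounting information                 */
    int              open_files[16];   /* I/O status: open file descriptors      */
    struct pcb      *next;             /* link for the scheduling queues         */
};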
Process representation in Linux
● Represented by C structure task_struct
long state;                      /* state of the process           */

struct sched_entity se;          /* scheduling information         */

struct task_struct *parent;      /* this process's parent          */

struct list_head children;       /* this process's children        */

struct files_struct *files;      /* list of open files             */

struct mm_struct *mm;            /* address space of this process  */

struct task_struct *p_opptr, *p_pptr, *p_cptr, *p_ysptr, *p_osptr;

/* op = original parent, p = parent, c = youngest child,
   ys = youngest sibling, os = older sibling */

22
Process representation in Linux
Process Table

Figure: the process table maps each PID (e.g., 3, 4, ...) to its PCB; each PCB holds the
PID, program counter (PC), status, and the other fields listed earlier.
23
Process Scheduling
● At any point in time, a number of processes are ready for
execution
● The operating system has to decide which process is to be executed
next
● The main objective of process scheduling is to maximize CPU
utilization.
● A process "gives up" the CPU under two conditions: an I/O request, or
after N units of time have elapsed (time slice expiry)
● Once a process "gives up" the CPU it is added to the ready queue
● The process scheduler selects among the available processes in the ready
queue for the next execution on the CPU.

24
Process Scheduling Queues
● OS maintains scheduling queues of process

1. Job queue - Set of all processes in the system


2. Ready queue - set of processes residing in the main memory,
ready and waiting to execute.
3. Device queue - Set of processes waiting for an IO device

Processes migrate among various queues

25
Process Scheduling Queues
Queuing Diagram
Queueing diagram: processes wait in the ready queue until dispatched to the CPU. A running
process leaves the CPU when it issues an I/O request (it joins an I/O queue and returns to
the ready queue when the I/O completes), when its time slice expires, when it forks a child
and waits for the child to execute, or when it waits for an interrupt to occur; in each case
it eventually re-enters the ready queue.
26
Process - Context switching
● When CPU switches to another process, the system must save the
state of the old process and load the saved state of new process
via a context switch
● Context of the process is represented by PCB
● Context switch time is pure overhead, the system does no useful
work while switching
● More complex OS and PCB implies more overheads of switch
● Time depends on hardware support - some hardware provides
multiple set of registers so that multiple process contexts
loaded/stored

27
Process - Context switching
Process P0 Operating Process P1
System
Executing Interrupt or system call

Save state into PCB0 Idle

Reload state from PCB1


Idle
Interrupt or system call Executing

Save state into PCB1

Idle
Reload state from PCB0
Executing
28
CPU bound vs IO bound process
CPU-bound process:
● Requires more CPU time
● Spends more time in the running state
● CPU bound means the program is bottlenecked by the CPU
● Spends more time doing computations
● In a CPU-bound environment, most of the time the processor is the only component
being used for execution; long CPU bursts and short I/O bursts

I/O-bound process:
● Requires less CPU time
● Spends more time in the waiting state
● I/O bound means the program is bottlenecked by I/O operations
● Spends more time on I/O
● Characterized by many short CPU bursts and long I/O operations

29


Schedulers
1. Selects which process to be executed next
and allocates CPU
2. Sometimes only scheduler
3. Short term scheduler is invoked frequently
(in milliseconds), must be invoked very fast
Short term 4. The choices of the short term scheduler
Scheduler are very important. If it selects a process
with a long burst time, then all the
processes after that will have to wait for a
long time in the ready queue.
5. This is known as starvation and it may
happen if a wrong decision is made by the
short-term scheduler.

30
Schedulers
1. Selects processes from the storage
pool in secondary memory and
loads them into the ready queue in
main memory for execution.
2. The long-term scheduler controls the
Long term degree of multiprogramming.
Scheduler 3. It must select a careful mixture of I/O
bound and CPU bound processes to
yield optimum system throughput.
4. If it selects too many CPU bound
processes then the I/O devices are idle
and if it selects too many I/O bound
processes then the processor has
nothing to do.

31
Schedulers

1. Medium-term scheduling involves


swapping out a process from main
memory.
Medium term 2. The process can be swapped in later
Scheduler from the point it stopped executing.
3. This can also be called as
suspending and resuming the
process and is done by the
medium-term scheduler.

32
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 1
int w = z/2;

User User process


executable running
mode

Kernel
mode

33
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 1
int w = z/2;

User User process


executable running
mode

Kernel
mode

34
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 1
int w = z/2;

User User process


executable running
mode

Kernel
mode

35
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 1
int w = z/2;

User User process Calls system call


executable running write()
mode

Kernel
mode

36
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 0
int w = z/2;

User User process Calls system call


executable running write()
mode

Set mode bit = 0


Kernel
mode

37
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 0
int w = z/2;

User User process Calls system call


executable running write()
mode

Set mode bit = 0


Kernel
mode Execute system call
write()
38
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 1
int w = z/2;

User User process Calls system call


executable running write()
mode

Set mode bit = 0 Set mode bit = 1


Kernel
mode Execute system call
write()
39
User mode and kernel mode of execution

int x = 5;
int y = 7;
int z = x+y;
printf(“z = %d”,z); Mode = 1
int w = z/2;

User User process Calls system call User process


executable running write() executable running
mode

Set mode bit = 0 Set mode bit = 1


Kernel
mode Execute system call
write()
40
System Call
● A system call is the programmatic way in which a computer program
requests a service from the kernel of the operating system it is
executed on.
● A system call is a way for programs to interact with the operating
system.
● A computer program makes a system call when it makes a request to
the operating system’s kernel.
● System call provides the services of the operating system to the user
programs via Application Program Interface(API).

41
System Call
● For example, the C programming language gives you printf() that lets you
write data in many different formats.
● So, printf() is a function that converts data into a formatted sequence of
bytes.
● It then calls write() to write those bytes to the output device.
● Application programs don't invoke kernel services directly; they request the
OS to perform the work on their behalf through system calls.

printf() in C
cout in C++                              write()
System.out.println() in Java
print() in Python
Program                                  OS
42
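As a small illustration of the printf() → write() relationship, the sketch below prints a message twice: once through the C library's printf() and once by invoking the POSIX write() system call directly on standard output. It assumes a POSIX system; the message text is arbitrary.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main()
{
    const char *msg = "hello via write()\n";

    printf("hello via printf()\n");          /* library call; internally ends up in write() */
    write(STDOUT_FILENO, msg, strlen(msg));  /* direct system call into the kernel          */
    return 0;
}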
Process Creation
● A process may create other processes
● Parent process creates children processes, which in turn create other
processes, forming a tree of processes
● Generally a process is identified and managed via process identifier (PID)
init (pid = 1)
 ├─ login (pid = 1234)
 │   └─ bash (pid = 1434)
 └─ sshd (pid = 3451)
     └─ sshd (pid = 8976)


43
Process Creation
● Fork system call is used for creating a new process, which is called child
process, which runs concurrently with the process that makes the fork()
call (parent process).
● After a new child process is created, both processes will execute the next
instruction following the fork() system call.
● The child process gets the same PC (program counter), CPU registers, and
open files that are in use in the parent process (as copies).
● It takes no parameters and returns an integer value. Below are different
values returned by fork().
Negative Value: creation of a child process was unsuccessful.
Zero: Returned to the newly created child process.
Positive value: Returned to parent or caller. The value contains process ID
of newly created child process. 44
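A minimal sketch showing how the three return values of fork() are usually handled; the error handling and the printed messages are illustrative only.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();

    if (pid < 0) {                    /* negative value: creation failed        */
        perror("fork failed");
        return 1;
    } else if (pid == 0) {            /* zero: we are in the child              */
        printf("child : my pid = %d\n", (int)getpid());
    } else {                          /* positive: parent; pid = child's PID    */
        printf("parent: created child with pid = %d\n", (int)pid);
    }
    return 0;
}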
Process Creation - fork()

fork()
P0 P1

Stack
Stack
After fork(), P1 gets a copy of
P0's address space (stack, heap,
data, text) and also executes the
same program.
Heap
Heap
Data
Data
COPY Text
Text

Parent process Child process


address space address space
45
Process Creation - fork() program
Program 1:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
    fork();
    printf("Hello world!\n");
    return 0;
}

Output:
Hello world!
Hello world!

Program 2:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
    fork();
    // make two processes which run the same
    // program after this instruction
    fork();
    printf("hello\n");
    return 0;
}

Output?
46
Process Creation - fork() and exec()
● fork() system call is used to create a new process
● exec() system call is used after a fork() replaces the process’s memory
space with a new program
● Using fork(), the child is a duplicate of the parent's address space
● Using exec(), the child loads a new program into its address space

Typical pattern: the parent calls fork() and then wait(); the child calls exec() and
eventually exit().
47
Process Creation - fork() and exec()
fork() then exec()
P0 P1

Parent process P0 address space: Stack, Heap, Data, Text
Child process P1 address space:  Stack, Heap, Data, Text (initially copied from P0)

Both processes P0 and P1 have separate address spaces, and after exec() P1 executes a
completely different program: exec() overwrites the address space that was copied from P0.
48
Process Creation - Exec() program
EXEC.c:

//EXEC.c
#include <stdio.h>
#include <unistd.h>
int main()
{
    printf("I am EXEC.c called by execvp() ");
    printf("\n");
    return 0;
}

COMPILE AND CREATE EXECUTABLE
gcc EXEC.c -o EXEC

execDemo.c:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main()
{
    // A null terminated array of character pointers
    char *args[] = {"./EXEC", NULL};
    execvp(args[0], args);
    printf("Ending-----");   // reached only if execvp() fails
    return 0;
}

Compile and run this program
gcc execDemo.c -o execDemo
49
Process Termination
● A process terminates when it executes its final statement and asks the
OS to delete it by using the exit() system call
● All the resources allocated to process gets deallocated by the
operating system
● A parent may terminate the execution of child process using
abort system call, some reasons to do so are
1. Child has exceeded allocated resources
2. Task assigned to child is no longer required
3. The parent is exiting and OS does not allow a child to continue if
the parent terminates.

50
Process Termination
1. A parent process may wait till child process terminates using
wait() system call. The call returns status and pid of terminated
process.

pid = wait(&status);

2. A process that has finished execution but still has an entry in
the process table is called a zombie process
3. An orphan process is a computer process whose parent process
has finished or terminated, though it (child process) remains
running itself.

51
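A small sketch that produces a zombie for a few seconds: the child exits immediately while the parent deliberately delays its wait() call, so the child stays in the process table as a zombie until it is reaped (contrast this with the wait() example on the next slide, which reaps immediately). The 10-second sleep is an arbitrary choice for observation.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(0);                 /* child terminates immediately                 */
    } else {
        sleep(10);               /* child is a zombie during these 10 seconds    */
                                 /* (observe it with: ps -el | grep defunct)     */
        wait(NULL);              /* parent reaps the child; the zombie is gone   */
    }
    return 0;
}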
Process Termination - wait() example
// C program to demonstrate working of wait()
#include<stdio.h>
#include<sys/wait.h>
#include<unistd.h>
int main() This will never get printed before
{ HC
if (fork()== 0)
printf("HC: hello from child\n");
else
{
printf("HP: hello from parent\n");
wait(NULL);
printf("CT: child has terminated\n");
}

printf("Bye\n");
return 0;
}

52
Inter Process Communication (IPC)
1. Processes in the system may be independent or cooperating.
2. Cooperating processes may affect or be affected by other
processes including sharing data.
3. Independent processes cannot affect other processes.

Reasons for having cooperating processes:-

1. Information sharing
2. Computation speedup (multiple processes running in parallel)
3. Modularity
4. Convenience

53
Two models of IPC
(a) Shared memory: processes A and B communicate through a shared memory region
    mapped into both of their address spaces; the kernel sets up the region.
(b) Message passing: processes A and B exchange messages M0, M1, M2, ..., Mn through a
    message queue maintained by the kernel.


54
Two models of IPC
Shared memory model:

● Processes communicate by reading and writing a shared memory region
● Multiple processes writing to and reading from the same memory location may
result in data inconsistency
● To ensure data consistency, proper synchronization is required
55
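A minimal sketch of the shared-memory model using the POSIX shm_open()/mmap() interface (writer side only). The object name "/demo_shm" and the message are made up for illustration; a second process would shm_open() and mmap() the same name to read the data, and real code would add the synchronization noted above plus error checks (the calls return -1 or MAP_FAILED on failure; some systems need linking with -lrt).

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    const char *name = "/demo_shm";           /* illustrative object name       */
    const int   size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);   /* create shared object  */
    ftruncate(fd, size);                                /* set its size          */

    /* map the object into this process's address space */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(ptr, "Hello from process A");       /* write into shared memory      */
    /* another process that maps "/demo_shm" can now read this string */

    munmap(ptr, size);
    close(fd);
    return 0;
}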
Two models of IPC

Process A
● Processes communicate by sending or
Process B receiving messages
● IPC facility provide two operations
1. send (message)
2. receive (message)
Message Queue
● Message size either fixed or variable
● If process P and Q wants to communicate
M0 M1 M2 M3 … Mn
- Establish a communication link between
them
- Exchange messages via send/receive
Kernel

56
Message Passing
Pipe
● A pipe is a connection between two processes, such that the standard
output from one process becomes the standard input of the other
process.
● In UNIX Operating System, Pipes are useful for communication between
related processes (inter-process communication).
● A pipe is one-way communication only, i.e., we use a pipe such that one
process writes to the pipe and the other process reads from the pipe.
● If a process tries to read before something is written to the pipe, the
process is suspended until something is written.

57
Pipe
Process Refer pipe1.c from programs
folder

write() P[ 1 ]
int pipe(int fds[2]);

Parameters :
fd[0] will be the fd(file descriptor) for the
read end of pipe.
fd[1] will be the fd for the write end of
pipe.
Returns : 0 on Success. -1 on error.
read() P[ 0 ]

58
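The referenced pipe1.c is not reproduced here; the following is a comparable sketch in which a parent writes a message into a pipe and its child reads it. The message text and buffer size are arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int  fd[2];                    /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) {          /* create the pipe before forking      */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {             /* child: reads from the pipe          */
        close(fd[1]);              /* close the unused write end          */
        int n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                       /* parent: writes into the pipe        */
        close(fd[0]);              /* close the unused read end           */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                /* wait for the child to finish        */
    }
    return 0;
}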
Threads

59
Web server example

● If we handle one user at a time, it


may lead to starvation for the other users
● If a separate process is created for
each user:
● each process contains its own
data, text, stack and heap regions
● if many such processes get created,
this becomes very heavy to operate
60
Web server
Motivation for threads

Let's say there are multiple tasks with a


computer application
1. Update display
2. Fetch data from web
3. Spell check
4. Answer to a network request

61
Motivation for threads

P1 P2

Fetch data Spell Checking

● Two separate processes created for each


operation
● Process creation is heavyweight - separate code, data, and
stack segments - hence switching between processes is also slow.
● If there is a parent-child relation between the processes, shared memory must be
set up for them to communicate, and shared memory is limited 62
Single vs multi threaded process

Single-threaded process: one set of code, data, and files; one set of registers and one stack.

Multi-threaded process: code, data, and files are shared by all threads, but each thread has
its own registers and its own stack.

Single threaded process vs. Multi-threaded process 63


Multi-threaded server architecture

Create new thread to service Thread for client 1


request
SERVER

Thread for client 2

Thread for client 3

Client 1 Client 2 Client 3 64


Benefits of multi-threading

1. Responsiveness - may allow continued execution if part of the process is
blocked; especially important for user interfaces

2. Resource sharing - threads share the resources of their process, which is easier
than shared memory or message passing IPC

3. Economy - cheaper than process creation; thread-switching overhead is lower
than the overhead of a process context switch

4. Scalability - a process can take advantage of multiprocessor
architectures
65
Multi-core vs Multi-CPU

Core 0 Core 1

Local Local
Memory Memory
Shared Memory

Storage

Multi-CPU: multiple CPUs placed in a computer; each processor has its own local memory.

Multi-core: multiple cores placed on a single processor chip; each core appears as a
separate CPU to the OS.
66
Serial vs concurrency vs parallelism

Serial

● Host is talking with one guest at a time


● While host talking with one guest other
guests wait till his/her chance
● Host communicates with a guest till it
finishes, it doesn’t switch to another
guest in between
Host

67
Serial vs concurrency vs parallelism

Concurrent

● Host is talking with one guest at a time


for one minute
● While host talking with one guest other
guests wait till his/her chance
● Host switch to another guest after one
minute
Host ● Each guest gets some time to communicate with
the host

68
Serial vs concurrency vs parallelism

Parallel

● Multiple host copies (ghost :-) ) are


available
● Each guest is communicating with host
without any intervention

Host

69
Serial vs concurrency vs parallelism

Concurrency +
Parallelism

● Multiple host copies (ghost :-) ) are


available
● Each guest is communicating with two
host concurrently

Host

70
Types of parallelism

Data Parallel:- Distributes subset of same data across multiple


cores and perform same operation on each

for (int i = 0; i < 100; i++)
{
    A[i] = A[i] + 5;      // all 100 iterations run on Core 0
}

Sequential:- all computations are performed on core 0

71
Types of parallelism

Data Parallel:- Distributes subset of same data across multiple


cores and perform same operation on each

Core 0:  for (int i = 0;  i < 25;  i++) { A[i] = A[i] + 5; }
Core 1:  for (int i = 25; i < 50;  i++) { A[i] = A[i] + 5; }
Core 2:  for (int i = 50; i < 75;  i++) { A[i] = A[i] + 5; }
Core 3:  for (int i = 75; i < 100; i++) { A[i] = A[i] + 5; }
72
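The same data-parallel split can be expressed without partitioning the loop by hand, for example with OpenMP, which divides the iterations among the available cores. This is a sketch assuming an OpenMP-capable compiler (e.g., gcc -fopenmp); the array contents are arbitrary.

#include <stdio.h>
#include <omp.h>

#define N 100

int main()
{
    int A[N];
    for (int i = 0; i < N; i++)      /* initialize the data            */
        A[i] = i;

    /* iterations are distributed across the cores,
       each performing A[i] = A[i] + 5 on its subset */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        A[i] = A[i] + 5;

    printf("A[0] = %d, A[99] = %d\n", A[0], A[N - 1]);
    return 0;
}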
Types of parallelism

Task Parallel:- Distributes threads across cores, each core


performing unique operation.

Each core performs a different operation, e.g.:

Addition_floats();    → Core 0
Division_floats();    → Core 1
Subtract_floats();    → Core 2
Multiply_floats();    → Core 3

73
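A sketch of task parallelism with Pthreads: two threads run different operations on the same data at the same time. The function names echo the slide, but the bodies and data are made up for illustration (compile with -pthread).

#include <pthread.h>
#include <stdio.h>

static float data[4] = {8.0f, 4.0f, 2.0f, 1.0f};

static void *addition_floats(void *arg)      /* task 1: sum the values      */
{
    (void)arg;
    float sum = 0.0f;
    for (int i = 0; i < 4; i++)
        sum += data[i];
    printf("sum     = %f\n", sum);
    return NULL;
}

static void *multiply_floats(void *arg)      /* task 2: multiply the values */
{
    (void)arg;
    float prod = 1.0f;
    for (int i = 0; i < 4; i++)
        prod *= data[i];
    printf("product = %f\n", prod);
    return NULL;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, addition_floats, NULL);   /* may run on one core     */
    pthread_create(&t2, NULL, multiply_floats, NULL);   /* may run on another core */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}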
Multicore and multiprocessor programming Challenges on
programmer

● Dividing Activities:- Divide the tasks among threads/cores


● Balancing:- Each thread/core should get equal work
● Data splitting:- Each core gets only required data
● Data dependency:- Output of one thread’s execution is
input to other thread. Serializes the application.
● Testing and debugging:- Compared to sequential
programming, testing and debugging are more difficult

74
User threads and kernel threads

Support for threads also provided at two levels


● User threads - Supported above the kernel and are managed
without support of the kernel.
Generally handled by a programmer or by a threads library
● Kernel threads - Supported and managed directly by OS

75
Relationship between user and kernel threads

● Many-to-one
● One-to-one
● Many-to-many

76
Relationship between user and kernel threads

● Many-to-one
● One-to-one User threads
● Many-to-many User space

● Many user level


threads mapped to
one kernel thread
● One thread blocking
cause all to block Kernel space
● Multiple threads may
not run in parallel
because only one
may be in kernel at a
time Kernel threads

77
Relationship between user and kernel threads

● Many-to-one
● One-to-one User threads
● Many-to-many User space

● Whenever a new
thread is created in
user space, a new
thread is created in
kernel space Kernel space
● Each user thread,
kernel thread
handles system calls
● Less blocking at OS
level Kernel threads

78
Relationship between user and kernel threads

● Many-to-one
● One-to-one User threads
● Many-to-many User space

● Allows many user


threads mapped to
many kernel threads
● Allows OS to create
sufficient number of Kernel space
kernel threads

Kernel threads

79
Thread Libraries

● A thread library provides the programmer with an API for creating


and managing threads
● Two primary ways of implementing a thread library:
1. Library entirely in user space
2. Kernel-level library supported by the OS
● Primary thread libraries:
1. POSIX Pthreads
2. Windows threads
3. Java threads
4. OpenMP
(Refer to the programs in the "pthread and openMP" folder)

80
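The folder's programs are not reproduced here; the following is a representative Pthreads sketch of the usual pattern: the main thread creates a worker, passes it an argument, waits for it with pthread_join(), and then reads the shared result (compile with -pthread).

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static long sum = 0;                    /* shared with the worker thread */

static void *runner(void *param)        /* the thread's start routine    */
{
    long upper = atol(param);
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(NULL);
}

int main(int argc, char *argv[])
{
    pthread_t tid;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <n>\n", argv[0]);
        return 1;
    }

    pthread_create(&tid, NULL, runner, argv[1]);   /* create the worker thread */
    pthread_join(tid, NULL);                       /* wait for it to finish    */
    printf("sum = %ld\n", sum);
    return 0;
}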
CPU Scheduling

81
CPU Scheduling

Admitted Interrupt
New Terminated

Ready Running
Exit
Short term
Scheduler Scheduler dispatch
IO event
IO or event wait
completion
Waiting

82
Process execution

● The goal of a scheduler is to maximize CPU utilization
● A process continuously switches between CPU bursts and I/O bursts
● The CPU burst is the main concern
● The scheduler is an overhead, hence it must be as simple as possible
● If a scheduler is complex, it consumes more of the CPU time itself

Example burst cycle of a process:
load, store, add, store, read from file        (CPU burst)
wait for I/O                                   (I/O burst)
store, increment index, write to file          (CPU burst)
wait for I/O                                   (I/O burst)
load, store, add, store, read from file        (CPU burst)
wait for I/O                                   (I/O burst)
83
CPU Scheduler
● Whenever CPU becomes idle, the OS must select one of the processes in
the ready queue to be executed.
● The selection process is carried out by the CPU scheduler.
● The ready queue may be ordered in various ways.
● CPU scheduling decisions may take place when a process
1. Process switches from running state to waiting state
2. Process switches from running state to ready state
3. Process switches from waiting state to ready state
4. When a process terminates
● For situations 1 and 4, there is no choice in terms of scheduling.
● There is a choice however for situations 2 and 3.

84
CPU Scheduling types
Preemptive Scheduling

1. The CPU can be taken away from a process before it finishes
   - the process can be moved from the running state to the ready state
   - it is preempted when a higher-priority process arrives
2. Overheads of switching process states
3. No starvation

Non-preemptive Scheduling

1. Once the CPU has been allocated to a process, the process keeps it
   until it releases the CPU:
   - either by terminating
   - or by switching to the waiting state
   - there is no other way the CPU can be taken away from the process
2. No overheads of switching process states
3. May lead to starvation
85
Preemptive scheduling may result into race conditions

P1 (writing array A):                     P2 (reading array A):
for (int i = 0; i < 1000; i++)            for (int i = 0; i < 1000; i++)
{                                         {
    A[i] = A[i] + 5;                          B[i] = A[i];
}                                         }

Array A resides in RAM (shared by P1 and P2); initially the CPU is idle.
86
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …
Reading array A
Writing array A
P2

CPU
P1

P1 is
scheduled
on CPU
87
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …
Reading array A

Writing array A
P2

CPU
P1

P1 is
scheduled
on CPU
88
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …
Reading array A

Writing array A
P2

CPU
P1 completes writing on P1
array A till i = 255

P1 is
scheduled
on CPU
89
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …
Reading array A
P1

P2

CPU
P1 preempted from CPU

Array is partly updated


by P1 CPU is idle

90
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …
Reading array A
P1

CPU
P2 Scheduled on CPU
P2

P2 is
scheduled
on CPU
91
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …

P1
Reading array A

CPU
P2 starts reading partly
P2
updated array A by P1

P2 is
scheduled
on CPU
92
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …

P1
Reading array A

CPU
P2 starts reading partly
P2
updated array A by P1

P2 is
scheduled
on CPU
93
Preemptive scheduling may result into race conditions

… …
… …
for(int i = 0; i < 1000; i++)
{
RAM for(int i = 0; i < 1000; i++)
{
A[i] = A[i] + 5; B[i] = A[i];
} }
… Array A …
… …

P1
Reading array A

CPU
P2 starts reading partly
P2 Data
updated array A by P1
Inconsistency
P2 is
scheduled
on CPU
94
Dispatcher
P0 executing → save state into PCB0 → (dispatch latency) → restore state from PCB1 → P1 executing

The dispatcher module gives control of the CPU to the process selected by the CPU
scheduler. This involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to resume that program

Dispatch latency:- the time it takes for the dispatcher to stop one process and start
another running. Dispatch latency is expected to be as low as possible.
95
Scheduling criteria

● CPU utilization:- Amount of time where CPU is busy. Ideally 100 %


CPU utilization is expected but it is less due to context switching
overheads.
● Throughput:- Number of processes that complete their execution
per unit time.
● Turnaround time:- Amount of time to execute a process
● Waiting time:- total amount of time a process has been waiting in
the ready queue.
● Response time:- Amount of time from when a request
was submitted until the first response is produced (not the time to
output the full response); used for time-sharing environments.

96
Optimization criteria

● Maximize CPU utilization


● Maximize throughput
● Minimize turnaround time
● Minimize waiting time
● Minimize response time

97
Scheduling Algorithms

● First-come-first serve (FCFS)


● Shortest job first (SJF)
● Round robin scheduling (RR)
● Priority Scheduling (PR)
● Multi-level queue scheduling

98
First Come First Serve 99
First come first serve scheduling Scenario 1

Consider following three processes and their burst time


Consider that the processes arrive in the order: P1, P2, P3

P1 P2 P3
Process Burst time

P1 24

P2 3

P3 3
0 24 27 30

Waiting time for P1 :- 0 Waiting time for P2 :- 24 Waiting time for P3 :- 27

Average Waiting time = (0 + 24 + 27)/3 = 17

100
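The average-waiting-time arithmetic above can be reproduced with a short program. This is a sketch assuming all processes arrive at time 0 and are served in the given order; the burst times come from the table on this slide.

#include <stdio.h>

int main()
{
    int burst[] = {24, 3, 3};              /* P1, P2, P3 in arrival order               */
    int n = 3;
    int waiting = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, waiting);
        total_waiting += waiting;
        waiting += burst[i];               /* the next process waits until this one ends */
    }
    printf("average waiting time = %.2f\n", (double)total_waiting / n);
    return 0;
}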
First come first serve scheduling Scenario 2

Consider following three processes and their burst time


Consider that the processes arrive in the order: P2, P3, P1

P2 P3 P1 (arrival order is changed)


Process Burst time

P1 24

P2 3

P3 3 0 3 6 30

Waiting time for P1 :- 6 Waiting time for P2 :- 0 Waiting time for P3 :- 3

Average Waiting time = (0 + 6 + 3)/3 = 3


101
Convoy effect in FCFS
● FCFS algorithm is non-preemptive in nature, that is, once CPU time has
been allocated to a process, other processes can get CPU time only after
the current process has finished.
● This property of FCFS scheduling leads to the situation called Convoy
Effect.

Shorter Jobs Longer Job

102
Convoy effect in FCFS
Suppose there is one CPU intensive (large burst time) process in the ready queue, and several other processes
with relatively less burst times but are Input/Output (I/O) bound (Need I/O operations frequently).

Steps are as following below:

● The I/O-bound processes are first allocated CPU time. As they are less CPU intensive, they quickly get
executed and go to the I/O queues.
● Now, the CPU intensive process is allocated CPU time. As its burst time is high, it takes time to complete.
● While the CPU intensive process is being executed, the I/O bound processes complete their I/O
operations and are moved back to ready queue.
● However, the I/O bound processes are made to wait as the CPU intensive process still hasn’t finished.
This leads to I/O devices being idle.
● When the CPU intensive process gets over, it is sent to the I/O queue so that it can access an I/O device.
● Meanwhile, the I/O bound processes get their required CPU time and move back to I/O queue.
● However, they are made to wait because the CPU intensive process is still accessing an I/O device. As a
result, the CPU is sitting idle now. 103
Advantages of FCFS
● Simple to implement
● Eventually, every process will get a chance to run, so starvation
doesn't occur
● Scheduling overheads are less as no preemption

104
Disadvantages of FCFS
● There is no option for pre-emption of a process. If a process is started,
then CPU executes the process until it ends.
● The process with less execution time suffer i.e. waiting time is often quite
long.
● May suffer from Convoy effect, This effect results in lower CPU and device
utilization.
● FCFS algorithm is particularly troublesome for time-sharing systems,
where it is important that each user get a share of the CPU at regular
intervals.

105
Assignment 1

Process Burst Time Arrival Time

P1 6 2

P2 2 4

P3 8 1

P4 3 0

P5 4 5

Using FCFS Scheduling calculate average waiting time for the processes

106
Shortest Job First Scheduling
107
Shortest Job First Scheduling
● Shortest job first (SJF) or Shortest Job Next (SJN), is a scheduling policy that selects the waiting
process with the smallest execution time to execute next. SJN is a non-preemptive algorithm.
● Shortest Job first has the advantage of having a minimum average waiting time among all
scheduling algorithms.
● It is a Greedy Algorithm.
● It may cause starvation if shorter processes keep coming. This problem can be solved using the
concept of ageing.
● It is practically infeasible as Operating System may not know burst time and therefore may not sort
them. While it is not possible to predict execution time, several methods can be used to estimate
the execution time for a job, such as a weighted average of previous execution times.
● SJF can be used in specialized environments where accurate estimates of running time are
available.

108
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 01

2 3 3

P1
3 6 2

4 7 10
Ready Queue
CPU
5 9 8

109
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 3
01

2 3 3

P2
3 6 2 P1

4 7 10
Ready Queue
CPU
5 9 8

P1

1 3
110
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 6
01

2 3 3

P3 P2
3 6 2 P1

4 7 10
Ready Queue
CPU
5 9 8

P1

1 6
111
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 0
71

2 3 3

P4 P3 P2
3 6 2 P1

4 7 10
Ready Queue
CPU
5 9 8

P1

1 7
112
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 0
71

2 3 3

P4 P3 P2
3 6 2

4 7 10
Ready Queue
CPU
5 9 8

P3 has shortest burst time in ready queue

P1

1 7
113
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 9
01

2 3 3

P5 P4 P2 P3
3 6 2

4 7 10
Ready Queue
CPU
5 9 8

P1 P3

1 7 9
114
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 9
01

2 3 3

P5 P4 P2 P3
3 6 2

4 7 10
Ready Queue
CPU
5 9 8

P2 has shortest burst time in ready queue

P1 P3

1 7 9
115
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 12
01

2 3 3

P5 P4 P2
3 6 2

4 7 10
Ready Queue
CPU
5 9 8

P1 P3 P2

1 7 9 12
116
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 12
01

2 3 3

P5 P4
3 6 2

4 7 10
Ready Queue
CPU
5 9 8

P5 has shortest burst time in ready queue

P1 P3 P2

1 7 9 12
117
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 20
12

2 3 3

P4
3 6 2 P5

4 7 10
Ready Queue
CPU
5 9 8

P1 P3 P2 P5

1 7 9 12 20
118
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2 P4

4 7 10
Ready Queue
CPU
5 9 8

P1 P3 P2 P5 P4

1 7 9 12 20 30
119
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2

4 7 10
Ready Queue
CPU
5 9 8

0
waiting time =
P1 P3 P2 P5 P4
start time - arrival time

P1 1-1=0 1 7 9 12 20 30
120
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2

4 7 10
Ready Queue
CPU
5 9 8

0 6
waiting time =
P1 P3 P2 P5 P4
start time - arrival time

P2 9-3=6 1 7 9 12 20 30
121
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2

4 7 10
Ready Queue
CPU
5 9 8

0 1 6
waiting time =
P1 P3 P2 P5 P4
start time - arrival time

P3 7-6=1 1 7 9 12 20 30
122
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2

4 7 10
Ready Queue
CPU
5 9 8

0 1 6 13
waiting time =
P1 P3 P2 P5 P4
start time - arrival time

P4 20 - 7 = 13 1 7 9 12 20 30
123
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2

4 7 10
Ready Queue
CPU
5 9 8

0 1 6 3 13
waiting time =
P1 P3 P2 P5 P4
start time - arrival time

P5 12 - 9 = 3 1 7 9 12 20 30
124
Shortest Job First Scheduling
PID Arrival Time Burst Time

1 1 7
Current Time 30
20

2 3 3

3 6 2

4 7 10
Ready Queue
CPU
5 9 8

0 1 6 3 13
Average waiting Time
P1 P3 P2 P5 P4
(0 + 6 + 1 + 13 + 3)/5
= 23/5 1 7 9 12 20 30
= 4.6
125
Assignment 2

Process Arrival Time Burst Time

P1 0.0 6

P2 2.0 8

P3 4.0 7

P4 5.0 3

Find average waiting time for the given processes considering SJF scheduling

126
Problem with SJF
● Not possible to know next CPU burst time in advance
● While it is not possible to predict next CPU burst accurately, some
approximation methods can be used.
● The next CPU burst of a process can be predicted from previous burst
● But this is only an estimate, it may or may not work
● Exponential averaging of previous CPU bursts can be used to predict the next
CPU burst:

τ(n+1) = α·t(n) + (1 − α)·τ(n)

where α is the smoothing factor, 0 <= α <= 1,


t(n) = actual length of the nth CPU burst,
τ(n) = predicted length of the nth CPU burst.

127
Problem with SJF
● Not possible to know next CPU burst time in advance
● While it is not possible to predict next CPU burst accurately, some
approximation methods can be used.
● The next CPU burst of a process can be predicted from previous burst
● But this is only an estimate, it may or may not work
● Exponential averaging of previous CPU burst can be used to predict next
CPU burst

τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + (1 − α)²·α·t(n−2) + ... + (1 − α)^j·α·t(n−j) + ... + (1 − α)^(n+1)·τ(0)

128
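A small helper showing the recurrence in code; the burst history and α = 0.5 are example values only.

#include <stdio.h>

/* tau_next = alpha * t_actual + (1 - alpha) * tau_prev */
static double predict_next(double alpha, double t_actual, double tau_prev)
{
    return alpha * t_actual + (1.0 - alpha) * tau_prev;
}

int main()
{
    double alpha = 0.5;                       /* smoothing factor           */
    double tau   = 10.0;                      /* initial guess, tau(0)      */
    double bursts[] = {6.0, 4.0, 6.0, 4.0};   /* actual CPU bursts t(n)     */

    for (int n = 0; n < 4; n++) {
        tau = predict_next(alpha, bursts[n], tau);
        printf("prediction after burst %d: %.2f\n", n + 1, tau);
    }
    return 0;
}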
Advantages of SJF
● Shortest jobs are favored.
● It is optimal; it gives the minimum average waiting time for a given set
of processes and is used as a benchmark for other scheduling
algorithms.
● The throughput is increased because more processes can be
executed in less amount of time.

129
Disadvantages of SJF
● SJF may cause starvation, if shorter processes keep coming.
● Longer processes will have more waiting time, eventually they'll suffer
starvation.
● The time taken by a process must be known by the CPU beforehand,
which is not possible.

130
Shortest Remaining Time First
131
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 0

2 1 4

P1
3 2 9

4 3 5
Ready Queue
CPU

P1 arrives in ready queue

132
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 0

2 1 4

3 2 9 P1

4 3 5
Ready Queue
CPU

P1 starts execution

133
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 01

2 1 4

P2
3 2 9 P1

4 3 5
Ready Queue
CPU
P2 arrives at time 1.

Remaining time of P1 is 7 and remaining time of P2 is 4

P1

0 1
134
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 01

2 1 4

P2
3 2 9 P1

4 3 5
Ready Queue
CPU

Preempt P1 and keep in ready queue, schedule P2 for


execution

P1

0 1
135
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 01

2 1 4

P1
3 2 9 P2

4 3 5
Ready Queue
CPU

Preempt P1 and keep in ready queue, schedule P2 for


execution

P1

0 1
136
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 2
01

2 1 4

P3 P1
3 2 9 P2

4 3 5
Ready Queue
CPU
Execute P2 for one time unit
P3 arrives at time 2 and placed in ready queue

P1 P2

0 1 2
137
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 2
01

2 1 4

P3 P1
3 2 9 P2

4 3 5
Ready Queue
CPU

In ready queue remaining time for P1 is 7, P3 is 9 and for P2 is 3


so continue execution of P2

P1 P2

0 1 2
138
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 3
01

2 1 4

P4 P3 P1
3 2 9 P2

4 3 5
Ready Queue
CPU

In ready queue remaining time for P1 is 7, P3 is 9, P4 is 5 and for P2 is


2 so continue execution of P2 and P4 arrives in ready queue

P1 P2

0 1 3
139
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 4
01

2 1 4

P4 P3 P1
3 2 9 P2

4 3 5
Ready Queue
CPU

In ready queue remaining time for P1 is 7, P3 is 9, P4 is 5 and for P2 is


2 so continue execution of P2

P1 P2

0 1 4
140
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 5
01

2 1 4

P4 P3 P1
3 2 9 P2

4 3 5
Ready Queue
CPU

In ready queue remaining time for P1 is 7, P3 is 9, P4 is 5 and for P2 is


1 so continue execution of P2

P1 P2

0 1 5
141
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 5
01

2 1 4

P4 P3 P1
3 2 9 P2

4 3 5
Ready Queue
CPU

P2 completed its execution

P1 P2

0 1 5
142
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 5
01

2 1 4

P4 P3 P1
3 2 9

4 3 5
Ready Queue
CPU

The remaining time of P1 is 7 P3 is 9 and P4 is 5 so P4 gets CPU

P1 P2

0 1 5
143
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 5
01

2 1 4

P3 P1
3 2 9 P4

4 3 5
Ready Queue
CPU

The remaining time of P1 is 7 P3 is 9 and P4 is 5 so P4 gets CPU

P1 P2

0 1 5
144
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 6
01

2 1 4

P3 P1
3 2 9 P4

4 3 5
Ready Queue
CPU

P4 executes for one unit time. The remaining time of P1 is 7 P3 is


9 and P4 is 4 so P4 continue execution

P1 P2 P4

0 1 5 6
145
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 10
01

2 1 4

P3 P1
3 2 9 P4

4 3 5
Ready Queue
CPU

P4 executes for one unit time. The remaining time of P1 is 7 P3 is


9 and P4 is 4 so P4 continue execution till it completes

P1 P2 P4

0 1 5 10
146
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 10
01

2 1 4

P3 P1
3 2 9

4 3 5
Ready Queue
CPU

The remaining time of P1 is 7 P3 is 9. So P1 gets scheduled

P1 P2 P4

0 1 5 10
147
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 10
01

2 1 4

P3
3 2 9 P1

4 3 5
Ready Queue
CPU

The remaining time of P1 is 7 P3 is 9. So P1 gets scheduled

P1 P2 P4

0 1 5 10
148
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 17
9
0

2 1 4

P3
3 2 9 P1

4 3 5
Ready Queue
CPU

P1 completes execution

P1 P2 P4 P1

0 1 5 10 17
149
Shortest Remaining Time First (SRTF) - Preemptive SJF
PID Arrival Time Burst Time

1 0 8
Current Time 26
90

2 1 4

3 2 9 P3

4 3 5
Ready Queue
CPU

Only P3 in ready queue so it completes the execution

P1 P2 P4 P1 P3

0 1 5 10 17 26
150
Shortest Remaining Time First (SRTF) - Preemptive SJF

PID Arrival Time Burst Time Completion Turn Around Time Waiting Time (WT)
(AT) (BT) Time (TAT) = TAT - BT
(CT) = CT - AT

1 0 8 17 17 9

2 1 4 5 4 0

3 2 9 26 24 15

4 3 5 10 7 2

Average waiting time = (9 + 0 + 15 + 2)/4 = 6.5

P1 P2 P4 P1 P3

0 1 5 10 17 26 151
Assignment 3

PID Arrival Time Burst Time


(AT) (BT)

1 0 8

2 1 4

3 2 2

4 3 1

5 4 3

6 5 2

Consider above processes with Arrival time and Burst time.


Calculate CT, TAT, WT and average wait time using SRTF algorithm
152
Round Robin Scheduling
153
Round Robin Scheduling

● Each process gets small unit of CPU time (time quantum q).
● After time ‘q’ elapsed, the process is preempted and added to the
end of the ready queue.
● If there are ‘N’ processes in the ready queue and the time
quantum is ‘q’, then each process gets 1/N of the CPU time in
chunks of at most ‘q’ time units at once.
● No process waits for more than (N-1)*q time units.
● Timer interrupts every quantum to schedule process
● If a process needs less time than ‘q’, it releases the CPU early and the next process
gets scheduled

154
Round Robin Scheduling

● Performance:-

If ‘q’ is large the RR behaves as FCFS

If ‘q’ is too small then it increases context switch overheads

155
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 0

2 0 3

P3 P2 P1
3 0 3

Ready Queue
CPU

As arrival time is 0 for all processes, all of them enters into ready queue

156
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 0

2 0 3

P3 P2 P1
3 0 3

Ready Queue
CPU
Let P1 gets scheduled first

157
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 4


0

2 0 3

P3 P2 P1
3 0 3

Ready Queue
CPU
P1 executes for 4 time units as time quanta value is 4

P1

0 4
158
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 4


0

2 0 3

P1 P3 P2
3 0 3

Ready Queue
CPU
As time quantum for P1 is elapsed, P1 is moved to ready queue and P2 is
scheduled next

P1

0 4
159
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 4


0

2 0 3

P1 P3 P2
3 0 3

Ready Queue
CPU
P2 gets access to CPU and starts execution

P1

0 4
160
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 4


7

2 0 3

P1 P3 P2
3 0 3

Ready Queue
CPU
Burst time for P2 is 3 units which is less than Time quantum 4. P2 gets
executed for 3 time units and terminates

P1 P2

0 4 7
161
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 4


7

2 0 3

P1 P3
3 0 3

Ready Queue
CPU
P3 is next in the ready queue and gets chance for execution

P1 P2

0 4 7
162
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 10


7

2 0 3

P1 P3
3 0 3

Ready Queue
CPU
Burst time for P3 is 3 units which is less than Time quantum 4. P3 gets
executed for 3 time units and terminates

P1 P2 P3

0 4 7 10
163
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 10


7

2 0 3

3 0 3 P1

Ready Queue
CPU
Now only P1 is in ready queue so it starts execution

P1 P2 P3

0 4 7 10
164
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 10


7

2 0 3

3 0 3 P1

Ready Queue
CPU
P1 has already completed 4 time units, so remaining CPU burst is 20 i.e.
greater than time quantum.

P1 P2 P3

0 4 7 10
165
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 14


10

2 0 3

3 0 3 P1

Ready Queue
CPU
P1 executes for 4 time units

P1 P2 P3 P1

0 4 7 10 14
166
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 30


22
26
14
18

2 0 3

3 0 3 P1

Ready Queue
CPU
There is no other process in ready queue so P1 completes execution by
utilizing all time quanta.

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
167
Round Robin Scheduling Example
PID Arrival Time Burst Time

1 0 24 Time Quantum 4 Current Time 30


22
26
14
18

2 0 3

3 0 3 P1

Ready Queue
CPU
Finally P1 terminates

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
168
Round Robin Scheduling Example
PID Arrival Time Burst Time Completion Time Turn Around Time (TAT) Waiting Time (WT)
(AT) (BT) (CT) = CT - AT = TAT -BT

1 0 24 30 30 6

2 0 3 7 7 4

3 0 3 10 10 7

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

169
Time Quantum and Context Switch Time

● The performance of RR depends on the size of the time quantum.


● If the time quantum is very small RR results in a large number of
context switches
For a single process with a burst time of 10 time units:

Quantum    Context switches
12         0     (runs 0..10 in one stretch)
6          1     (runs 0..6, then 6..10)
1          9     (runs 0..1, 1..2, ..., 9..10)

170
Advantages of Round Robin Scheduling

1. Every process gets an equal share of the CPU.


2. RR is cyclic in nature, so there is no starvation.
3. No issues of starvation or convoy effect.

4. Doesn’t depend on burst time and is easily implementable.

5. Best performance in terms of average response time.

171
Disadvantages of Round Robin Scheduling

1. Setting the quantum too short, increases the overhead and lowers the CPU
efficiency.
2. Setting the quantum too long may cause poor response to short
processes.
3. Average waiting time under the RR policy is often long.
4. Spends more time on context switching.
5. Performance depends on time quantum.
6. Higher context switching overhead due to lower time quantum.
7. Difficult to find a correct time quantum.

172
Assignment No. 4
PID Arrival Time Burst Time (BT) Completion Time Turn Around Time (TAT) Waiting Time (WT)
(AT) (CT) = CT - AT = TAT - BT

1 0 5

2 1 6

3 2 3

4 3 1

5 4 5

6 6 4

Apply round robin scheduling algorithm with time quantum = 2, for above processes
and calculate, TAT, CT, WT and average WT.
Note:- In this example arrival time also mentioned
173
Priority scheduling
174
Priority Scheduling
● A priority number (integer) is associated with every process
● The CPU is allocated to the process with the highest priority
● If two processes have the same priority, FCFS is used to break the tie,
or the round robin algorithm is used to allocate the CPU among them
● Smallest integer number indicates the highest priority
● Priority scheduling can be preemptive or non preemptive
● SJF is priority scheduling where, priority is inverse of predicted
next CPU burst
Problem:- Starvation - Low priority process may never get executed
Solution:- Aging - As time progresses increase the priority of
processes waiting in ready queue

175
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 0
1 0 4 1

2 1 5 0
P1
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

At time 0 only process P1 arrives in ready queue.

176
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 01
1 0 4 1

2 1 5 0
P2 P1
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P1 gets scheduled on CPU P1 executes for 1 time unit P2 arrives in ready queue
at time 1

P1

0 1 177
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 21
1 0 4 1

2 1 5 0
P3 P2 P1
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

As this is non preemptive algorithm, P1 continues to execute P3 arrives at time 2

P1 P1

0 1 2 178
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 3
2
1 0 4 1

2 1 5 0
P4 P3 P2 P1
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P1 continues to execute P4 arrives at time 3

P1 P1 P1

0 1 2 3 179
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
3
1 0 4 1

2 1 5 0
P4 P3 P2 P1
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P1 continues to execute

P1 P1 P1 P1

0 1 2 3 4 180
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
3
1 0 4 1

2 1 5 0
P4 P3 P2 P1
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P1 Completes execution

P1 P1 P1 P1

0 1 2 3 4 181
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
3
1 0 4 1

2 1 5 0
P4 P3 P2
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

There are 3 processes in ready queue out of which P2 has highest priority so P2 gets
executed

P1 P1 P1 P1

0 1 2 3 4 182
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
3
1 0 4 1

2 1 5 0
P4 P3 P2
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

There are 3 processes in ready queue out of which P2 has highest priority so P2 gets
executed

P1 P1 P1 P1

0 1 2 3 4 183
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
5
1 0 4 1

2 1 5 0
P5 P4 P3 P2
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P2 executes for 1 time unit P5 arrives in ready queue

P1 P1 P1 P1 P2

0 1 2 3 4 5 184
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 5
9
1 0 4 1

2 1 5 0
P5 P4 P3 P2
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

As this is non preemptive algorithm P2 gets executed till it terminates

P1 P1 P1 P1 P2

0 1 2 3 4 5 9 185
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
9
1 0 4 1

2 1 5 0
P5 P4 P3
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P3, P4, P5 in ready queue, where P5 has highest priority P5 scheduled

P1 P1 P1 P1 P2

0 1 2 3 4 9 186
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
9
1 0 4 1

2 1 5 0
P4 P3 P5
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P3, P4, P5 in ready queue, where P5 has highest priority P5 scheduled

P1 P1 P1 P1 P2

0 1 2 3 4 9 187
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 11
9
1 0 4 1

2 1 5 0
P4 P3 P5
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P5 completes execution P5 Terminated

P1 P1 P1 P1 P2 P5

0 1 2 3 4 9 11 188
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 11
9
1 0 4 1

2 1 5 0
P4 P3
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P3 and P4 in ready queue, where p3 has highest priority

P1 P1 P1 P1 P2 P5

0 1 2 3 4 9 11 189
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 14
11
1 0 4 1

2 1 5 0
P4 P3
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P3 scheduled for execution P3 completes execution

P1 P1 P1 P1 P2 P5 P3

0 1 2 3 4 9 11 14 190
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 14
1 0 4 1

2 1 5 0
P4
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

Only P4 left in ready queue.

P1 P1 P1 P1 P2 P5 P3

0 1 2 3 4 9 11 14 191
Priority Scheduling - Non preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 14
16
1 0 4 1

2 1 5 0

P4
3 2 3 2

4 3 2 4
Ready Queue
CPU
5 5 2 1

P4 gets scheduled for P4 completes execution. P4 terminates


execution.

P1 P1 P1 P1 P2 P5 P3 P4

0 1 2 3 4 9 11 14 16 192
Priority Scheduling - Non preemptive Example
PID Arrival Time Burst Time Priority Completion Time Turn Around Time Waiting Time
(AT) (BT) (CT) (TAT) = TAT - BT
= CT-AT

1 0 4 1 4 4 0

2 1 5 0 9 8 3

3 2 3 2 14 12 9

4 3 2 4 16 13 11

5 5 2 1 11 6 4

Average waiting time = (0 + 3 + 9 + 11 + 4)/5 = 27/5 = 5.4

P1 P1 P1 P1 P2 P5 P3 P4

0 1 2 3 4 9 11 14 16 193
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 0
1 0 1 2(H)

2 1 7 6

3 2 3 3
P1

4 3 6 5
Ready Queue
5 4 5 4
CPU
At time 0 P1 arrives in ready queue
6 5 15 10(L)
As no other process in ready queue, P1 gets scheduled.
7 15 8 9

194
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 01
1 0 1 2(H)

2 1 7 6

3 2 3 3 P1

4 3 6 5
Ready Queue
5 4 5 4
CPU

6 5 15 10(L)
7 15 8 9
P1 executes for 1 time unit, which completes its burst time, so P1 terminates

P1

0 1 195
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 01
1 0 1 2(H)

2 1 7 6

P2
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
At time 1 P2 arrives in ready queue.
6 5 15 10(L)

7 15 8 9
There is no process executing on the CPU, so P2 starts execution

P1

0 1 196
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 21
1 0 1 2(H)

2 1 7 6

P3
3 2 3 3 P2

4 3 6 5
Ready Queue
5 4 5 4
CPU
P2 executes for 1 time unit
6 5 15 10(L)

7 15 8 9
At time 2 P3 arrives in ready queue

P1 P2

0 1 2 197
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 21
1 0 1 2(H)

2 1 7 6

P3
3 2 3 3 P2

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
P2 has priority 6 and P3 has priority 3; since a lower number means higher priority, P2 gets preempted and P3 is scheduled for execution

P1 P2

0 1 2 198
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 3
2
1 0 1 2(H)

2 1 7 6

P4 P2
3 2 3 3 P3

4 3 6 5
Ready Queue
5 4 5 4
CPU
P3 executes for 1 time unit
6 5 15 10(L)

7 15 8 9
P4 arrives in ready queue at time 3

P1 P2 P3

0 1 2 3 199
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
3
1 0 1 2(H)

2 1 7 6

P5 P4 P2
3 2 3 3 P3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
P3 has higher priority than P4 and P2, so P3 continues execution
P3 executes for 1 more time unit
At time 4, P5 arrives in the ready queue

P1 P2 P3 P3

0 1 2 3 4 200
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 4
5
1 0 1 2(H)

2 1 7 6

P6 P5 P4 P2
3 2 3 3 P3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
Again P3 has higher priority than all the processes in the ready queue, so it continues execution
P3 executes for 1 more time unit
At time 5, P6 arrives in the ready queue

P1 P2 P3 P3 P3

0 1 2 3 4 5 201
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 5
1 0 1 2(H)

2 1 7 6

P6 P5 P4 P2
3 2 3 3 P3

4 3 6 5
Ready Queue
5 4 5 4
CPU

6 5 15 10(L) Burst time of P3 is over, so P3 terminates


7 15 8 9

P1 P2 P3 P3 P3

0 1 2 3 4 5 202
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 5
1 0 1 2(H)

2 1 7 6

P6 P5 P4 P2
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU

6 5 15 10(L)
7 15 8 9
There are 4 processes in the ready queue (P2, P4, P5, P6), out of which P5 has the highest priority, so P5 gets scheduled for execution

P1 P2 P3 P3 P3

0 1 2 3 4 5 203
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 10
5
1 0 1 2(H)

2 1 7 6

P6 P4 P2 P5
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
As no new process arrives in the ready queue while P5 is executing, P5 continues to execute till its burst time is complete and then terminates

P1 P2 P3 P3 P3 P5

0 1 2 3 4 5 10 204
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 10
5
1 0 1 2(H)

2 1 7 6

P6 P4 P2
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
Now P2, P4 and P6 are present in the ready queue, out of which P4 has the highest priority, so P4 gets scheduled to execute

P1 P2 P3 P3 P3 P5

0 1 2 3 4 5 10 205
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 15
10
1 0 1 2(H)

2 1 7 6

P7 P6 P2 P4
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
P4 executes till time 15
6 5 15 10(L) At time = 15, P7 arrives in ready queue
7 15 8 9

P1 P2 P3 P3 P3 P5 P4

0 1 2 3 4 5 10 15 206
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 15
16
1 0 1 2(H)

2 1 7 6

P7 P6 P2 P4
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
As P4 has higher priority than P2, P6 and P7, it continues to execute
The burst time of P4 is over, so it terminates

P1 P2 P3 P3 P3 P5 P4 P4

0 1 2 3 4 5 10 15 16 207
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 15
16
1 0 1 2(H)

2 1 7 6

P7 P6 P2
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
Three processes are in the ready queue (P7, P6, and P2), out of which P2 has the highest priority
P2 gets scheduled for execution

P1 P2 P3 P3 P3 P5 P4 P4

0 1 2 3 4 5 10 15 16 208
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 22
16
1 0 1 2(H)

2 1 7 6

P7 P6 P2
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
As no new process arrives during the execution of P2, it executes till completion

P1 P2 P3 P3 P3 P5 P4 P4 P2

0 1 2 3 4 5 10 15 16 22 209
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 22
16
1 0 1 2(H)

2 1 7 6

P7 P6
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
Two processes are in the ready queue, P6 and P7, out of which P7 has the higher priority
P7 gets scheduled for execution

P1 P2 P3 P3 P3 P5 P4 P4 P2

0 1 2 3 4 5 10 15 16 22 210
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 30
22
1 0 1 2(H)

2 1 7 6

P6 P7
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
6 5 15 10(L)
7 15 8 9
As no process arrives during execution of P7, P7 completes execution without preemption

P1 P2 P3 P3 P3 P5 P4 P4 P2 P7

0 1 2 3 4 5 10 15 16 22 30 211
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 30
22
1 0 1 2(H)

2 1 7 6

P6
3 2 3 3

4 3 6 5
Ready Queue
5 4 5 4
CPU
Only P6 is in ready queue.
6 5 15 10(L)
P6 gets scheduled on CPU
7 15 8 9

P1 P2 P3 P3 P3 P5 P4 P4 P2 P7

0 1 2 3 4 5 10 15 16 22 30 212
Priority Scheduling - preemptive Example
PID Arrival Burst Priority
Time Time
Current Time 45
30
1 0 1 2(H)

2 1 7 6

3 2 3 3 P6

4 3 6 5
Ready Queue
5 4 5 4
CPU
P6 completes its execution then terminates
6 5 15 10(L)

7 15 8 9

P1 P2 P3 P3 P3 P5 P4 P4 P2 P7 P6

0 1 2 3 4 5 10 15 16 22 30 45 213
Priority Scheduling - preemptive Example
PID Arrival Burst Priority Completion Time Turn Around Time (TAT) Waiting Time
Time Time (CT) = CT-AT = TAT - BT

1 0 1 2(H) 1 1 0

2 1 7 6 22 21 14

3 2 3 3 5 3 0

4 3 6 5 16 13 7

5 4 5 4 10 6 1

6 5 15 10(L) 45 40 25

7 15 8 9 30 15 7

P1 P2 P3 P3 P3 P5 P4 P4 P2 P7 P6

0 1 2 3 4 5 10 15 16 22 30 45 214
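For the preemptive variant the decision is re-evaluated at every time unit: the arrived, unfinished process with the smallest priority number always holds the CPU, so a newly arrived higher-priority process preempts the running one. The following C sketch (again hard-coded with the data of this example, not taken from the slides) reproduces the completion times shown in the table above.

#include <stdio.h>

/* Preemptive priority scheduling sketch, driven one time unit at a time.
   Lower priority number = higher priority (as in this example). */
typedef struct { int pid, at, bt, prio, rem, ct; } Proc;

int main(void) {
    Proc p[] = { {1,0,1,2,1},  {2,1,7,6,7},  {3,2,3,3,3},  {4,3,6,5,6},
                 {5,4,5,4,5},  {6,5,15,10,15},  {7,15,8,9,8} };
    int n = 7, finished = 0, time = 0;
    double total_wt = 0, total_tat = 0;

    while (finished < n) {
        int best = -1;
        /* highest-priority process that has arrived and still has work left */
        for (int i = 0; i < n; i++)
            if (p[i].rem > 0 && p[i].at <= time &&
                (best == -1 || p[i].prio < p[best].prio))
                best = i;
        if (best == -1) { time++; continue; }   /* CPU idle */

        p[best].rem--;                          /* run for exactly one time unit */
        time++;
        if (p[best].rem == 0) {                 /* burst time over: terminate */
            p[best].ct = time;
            finished++;
        }
    }

    printf("PID\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++) {
        int tat = p[i].ct - p[i].at, wt = tat - p[i].bt;
        total_tat += tat;
        total_wt  += wt;
        printf("%d\t%d\t%d\t%d\n", p[i].pid, p[i].ct, tat, wt);
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}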
Advantages of Priority Scheduling
● This provides a good mechanism where the relative importance of
each process may be precisely defined
● It is easy to use.
● It is a user friendly algorithm.
● Simple to understand.

215
Disadvantages of Priority Scheduling
● If high priority processes use up a lot of CPU time, lower priority
processes may starve and be postponed indefinitely (starvation).
● The problem of starvation can be solved by aging (a sketch follows this slide).
● Aging is a technique in which the system gradually increases the
priority of those processes that have been waiting in the system for a
long time for their execution.
● Another problem is deciding which process gets which priority
level assigned to it.

216
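As a concrete illustration of aging, the sketch below keeps an effective priority for each waiting process and raises it by one step after every AGING_INTERVAL time units spent waiting; the scheduler would then compare this effective priority instead of the static one. The interval of 5 and the helper name age_waiting_process are assumptions for illustration, not something defined on the slides.

#include <stdio.h>

#define AGING_INTERVAL 5   /* assumed: boost priority after every 5 waited time units */

typedef struct { int pid, prio, eff_prio, waited; } Proc;

/* Called once per time unit for every process sitting in the ready queue.
   Lower number = higher priority, so "raising" priority means decrementing it. */
void age_waiting_process(Proc *p) {
    p->waited++;
    if (p->waited % AGING_INTERVAL == 0 && p->eff_prio > 0)
        p->eff_prio--;              /* gradually promote a long-waiting process */
}

int main(void) {
    /* example: a low-priority process that keeps waiting in the ready queue */
    Proc p = { 6, 10, 10, 0 };
    for (int t = 1; t <= 25; t++) {
        age_waiting_process(&p);
        printf("t=%2d  effective priority of P%d = %d\n", t, p.pid, p.eff_prio);
    }
    return 0;
}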
Assignment No. 5
PID Arrival Time Burst Time Priority
(AT) (BT)

1 0 4 2

2 1 3 3

3 3 1 4

4 4 5 5

5 5 2 5

Consider the set of 5 processes whose arrival time and burst time are given
above. If the CPU scheduling policy is priority preemptive, calculate the
average waiting time and average turnaround time. (Higher number
represents higher priority)
217
Assignment No. 6
PID Arrival Burst Time Priority
Time (BT)
(AT)

1 0 4 2

2 1 3 3

3 3 1 4

4 4 5 5

5 5 2 5

Consider the set of 5 processes whose arrival time and burst time are given
above. If the CPU scheduling policy is priority Non-preemptive, calculate the
average waiting time and average turnaround time. (Higher number
represents higher priority)
218
Combination of Priority and Round Robin

PID Burst Time Priority


(BT)

1 4 3

2 5 2

3 8 2

4 7 1

5 3 3

P4 P2 P3 P2 P3 P2 P3 P1 P5 P1 P5

0 7 9 11 13 15 16 20 22 24 26 27
219
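The chart above follows the usual combination of the two policies: the priority level decides which group of processes runs first (the lower number is the higher priority here, so P4 with priority 1 runs before the priority-2 group P2/P3 and the priority-3 group P1/P5), and processes of equal priority share the CPU in Round Robin order. The slide does not state the time quantum, but the alternation of P2 and P3 is consistent with a quantum of 2, which is what the C sketch below assumes (all processes arrive at time 0).

#include <stdio.h>

#define QUANTUM 2   /* assumed Round Robin time quantum within a priority level */

typedef struct { int pid, bt, prio, rem; } Proc;

int main(void) {
    /* all processes arrive at time 0; lower priority number runs first */
    Proc p[] = { {1,4,3,4}, {2,5,2,5}, {3,8,2,8}, {4,7,1,7}, {5,3,3,3} };
    int n = 5, time = 0, left = n;

    while (left > 0) {
        /* find the highest priority level that still has unfinished work */
        int level = -1;
        for (int i = 0; i < n; i++)
            if (p[i].rem > 0 && (level == -1 || p[i].prio < level))
                level = p[i].prio;

        /* one Round Robin pass over the processes of that level */
        for (int i = 0; i < n; i++) {
            if (p[i].rem == 0 || p[i].prio != level) continue;
            int slice = p[i].rem < QUANTUM ? p[i].rem : QUANTUM;
            printf("P%d runs from %d to %d\n", p[i].pid, time, time + slice);
            time += slice;
            p[i].rem -= slice;
            if (p[i].rem == 0) left--;
        }
    }
    return 0;
}

Consecutive slices of the same process (for example P4 from 0 to 7, or P3 from 16 to 20) appear merged into single blocks in the Gantt chart above.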
Multilevel Queue Scheduling
● Ready queue is partitioned into separate queues, such as foreground (user's jobs) and background (system's jobs)
● A process stays permanently in its assigned queue
● Each queue has its own scheduling algorithm

RR

Foreground Ready Queue

FCFS
CPU
Background Ready Queue
220
Multilevel Queue Scheduling
● Scheduling must be done between the queues
- Fixed priority scheduling, i.e. serve all processes from the foreground queue first, then the background queue. Possibility of starvation
- Time slicing: each queue gets a certain amount of CPU time in which it can schedule among its own processes, e.g. 80% for the foreground queue and 20% for the background queue (a sketch follows this slide)
RR

Foreground Ready Queue

FCFS
CPU
Background Ready Queue
221
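The time-sliced variant can be sketched as follows. This is only an illustration under assumed numbers: the 80%/20% split is implemented as alternating CPU windows of 8 and 2 time units, the foreground queue uses Round Robin with a quantum of 2, the background queue uses FCFS, and the process data are made up.

#include <stdio.h>

#define FG_WINDOW 8   /* assumed: 80% of every 10-unit cycle goes to foreground */
#define BG_WINDOW 2   /* assumed: 20% goes to background                        */
#define QUANTUM   2   /* Round Robin quantum inside the foreground queue        */

typedef struct { int pid, rem; } Proc;

static int total(const Proc *q, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += q[i].rem;
    return s;
}

int main(void) {
    Proc fg[] = { {1, 5}, {2, 6} };   /* foreground (user) jobs   */
    Proc bg[] = { {3, 7} };           /* background (system) jobs */
    int nfg = 2, nbg = 1, time = 0, fg_next = 0;

    while (total(fg, nfg) > 0 || total(bg, nbg) > 0) {
        /* foreground window: Round Robin among foreground jobs */
        int window = FG_WINDOW;
        while (window > 0 && total(fg, nfg) > 0) {
            Proc *p = &fg[fg_next];
            fg_next = (fg_next + 1) % nfg;            /* next Round Robin candidate */
            if (p->rem == 0) continue;
            int run = p->rem < QUANTUM ? p->rem : QUANTUM;
            if (run > window) run = window;
            printf("t=%2d..%2d  foreground P%d\n", time, time + run, p->pid);
            time += run;  p->rem -= run;  window -= run;
        }
        /* background window: FCFS among background jobs */
        window = BG_WINDOW;
        for (int i = 0; i < nbg && window > 0; i++) {
            if (bg[i].rem == 0) continue;
            int run = bg[i].rem < window ? bg[i].rem : window;
            printf("t=%2d..%2d  background P%d\n", time, time + run, bg[i].pid);
            time += run;  bg[i].rem -= run;  window -= run;
        }
    }
    return 0;
}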
Multilevel Queue Scheduling
● Another option is to have one queue for each priority level
● Once the jobs in a higher-priority queue are finished, the scheduler moves to the next lower-priority queue
● Movement of a process from one queue to another is not possible

Queue for Priority 1 processes

Queue for Priority 2 processes

CPU
Queue for Priority 3 processes
222
Multilevel Feedback Queue
● A process can move between the various queues
● Scheduler is defined by following parameters
- Number of queues
- Scheduling algorithm for each queue
- Method used to determine when to upgrade a process
- Method used to determine when to demote a process
- In which queue a process will enter at start

223
Multilevel Feedback Queue Example
Q0: Quantum = 8 ms      Q1: Quantum = 16 ms      Q2: FCFS

● A new job enters Q0
● When it gains the CPU, the job receives 8 ms
● If it does not finish in 8 ms, the job is moved to Q1
● At Q1 the job receives an additional 16 ms
● If it still does not complete, it is preempted and moved to Q2 (FCFS)
● Jobs in a lower queue get executed only if the upper queues are empty (a sketch follows this slide)

224
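A compact C sketch of the three-level example above: new jobs enter Q0 (quantum 8 ms), a job that does not finish within its quantum is demoted to Q1 (quantum 16 ms) and then to Q2 (FCFS), and a lower queue runs only when every higher queue is empty. The burst times and the assumption that all jobs arrive at time 0 are illustrative, not from the slide.

#include <stdio.h>

/* Three-level multilevel feedback queue:
   Q0: RR quantum 8, Q1: RR quantum 16, Q2: FCFS (run to completion). */

#define MAXQ 16

typedef struct { int items[MAXQ]; int head, tail; } Queue;

static void push(Queue *q, int v)  { q->items[q->tail++ % MAXQ] = v; }
static int  pop (Queue *q)         { return q->items[q->head++ % MAXQ]; }
static int  empty(const Queue *q)  { return q->head == q->tail; }

int main(void) {
    int burst[]   = { 5, 20, 40 };   /* remaining time of jobs P1..P3 (assumed) */
    int quantum[] = { 8, 16, -1 };   /* -1 means FCFS: run to completion        */
    int n = 3, time = 0;
    Queue q[3] = { {{0}, 0, 0}, {{0}, 0, 0}, {{0}, 0, 0} };

    for (int i = 0; i < n; i++) push(&q[0], i);   /* every new job enters Q0 */

    while (!empty(&q[0]) || !empty(&q[1]) || !empty(&q[2])) {
        /* a lower queue is served only if all higher queues are empty */
        int level = !empty(&q[0]) ? 0 : (!empty(&q[1]) ? 1 : 2);
        int job   = pop(&q[level]);
        int run   = (quantum[level] < 0 || burst[job] <= quantum[level])
                      ? burst[job] : quantum[level];

        printf("t=%3d..%3d  P%d runs in Q%d\n", time, time + run, job + 1, level);
        time += run;
        burst[job] -= run;

        if (burst[job] > 0)          /* used its full quantum: demote one level */
            push(&q[level + 1], job);
    }
    return 0;
}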
We are done with the second module

225
