
Unit II - Process and Thread Management

Introduction to Process Management

Process management in a single-tasking or batch-processing system is simple, as only one process is active at a time. With multiple active processes (multiprogramming or multitasking), process management becomes complex, because the CPU must be shared efficiently among multiple processes. Multiple active processes may share resources such as memory and may communicate with each other.

Process Management Tasks


Process management is a key part of operating systems that support multiprogramming or multitasking.

● Process Creation and Termination: Process creation involves creating a process ID, setting up a Process Control Block (PCB), etc. A process can be terminated either by the operating system or by its parent process. Process termination involves releasing all resources allocated to it.

● CPU Scheduling: In a multiprogramming system, multiple processes need to get the CPU. It is the job of the operating system to ensure smooth and efficient execution of multiple processes.

● Deadlock Handling: Making sure that the system does not reach a state where two or more processes cannot proceed due to a cyclic dependency on each other.

● Inter-Process Communication: The operating system provides facilities such as shared memory and message passing for cooperating processes to communicate.

● Process Synchronization: Process synchronization is the coordination of the execution of multiple processes in a multiprogramming system to ensure that they access shared resources (like memory) in a controlled and predictable manner.

Process Operations
A process goes through different states before termination, and these state changes require different operations on processes by an operating system, including process creation, process scheduling, execution, and termination.
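On a POSIX system, process creation and termination can be observed directly from user code. Below is a minimal sketch (assuming a Unix-like OS and a C compiler): the parent creates a child with fork(), the child terminates itself, and the parent reaps it with waitpid(), at which point the OS releases the child's resources.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process (child gets its own PID and PCB) */
    if (pid < 0) {
        perror("fork");              /* process creation failed */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        printf("child: PID=%d\n", (int)getpid());
        exit(0);                     /* child terminates itself */
    } else {
        int status;
        waitpid(pid, &status, 0);    /* parent waits; OS now frees the child's resources */
        printf("parent: child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```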

PROCESS:

A process is basically a program in execution. The execution of a process must progress in a sequential fashion. A process is defined as an entity which represents the basic unit of work to be implemented in the system.

When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text, and data, described below.

S.N Component & Description

1 Stack
The process Stack contains the temporary data such as
method/function parameters, return address and local variables.

2 Heap
This is dynamically allocated memory to a process during its run time.

3 Text
This includes the current activity represented by the value of Program
Counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.

States of a Process in Operating Systems

Each process goes through several stages throughout its life cycle.

Process Lifecycle
When you run a program (which becomes a process), it goes through different phases before its completion. These phases, or states, can vary depending on the operating system, but the most common process lifecycle models include two, five, or seven states. Here is a simple explanation of these states:

The Two-State Model

The simplest way to think about a process's lifecycle is with just two states:

1. Running: The process is actively using the CPU to do its work.

2. Not Running: The process is not currently using the CPU. It could be waiting for something, like user input or data, or it might just be paused.

When a new process is created, it starts in the not-running state. Initially, it is held in a queue of not-running processes managed by a program called the dispatcher.

Here's what happens step by step:

1. Not Running State: When the process is first created, it is not using the CPU.

2. Dispatcher Role: The dispatcher checks whether the CPU is free (available for use).

3. Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU, and it moves into the running state.

4. CPU Scheduler Role: When the CPU is available, the CPU scheduler decides which process gets to run next. It picks the process based on a set of rules called the scheduling scheme, which varies from one operating system to another.

The Five-State Model

The five-state process lifecycle is an expanded version of the two-state model. The two-state model works well when all processes in the not-running state are ready to run. However, in some operating systems, a process may not be able to run because it is waiting for something, like input or data from an external device. To handle this situation better, the not-running state is divided into two separate states. Here is a simple explanation of the five-state process model:

● New: This state represents a newly created process that hasn't started running yet. It has not been loaded into main memory, but its process control block (PCB) has been created, which holds important information about the process.

● Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting for the operating system to give it a chance to execute.

● Running: This state means the process is currently being executed by the CPU. Assuming there is only one CPU, only one process can be in this state at any time.

● Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting for some event to happen, like the completion of an input/output operation (for example, reading data from a disk).

● Exit/Terminate: A process in this state has finished its execution or has been stopped by the user for some reason. At this point, it is released by the operating system and removed from memory.

How Does a Process Move From One State to Another?

A process can move between different states in an operating system based on its execution status and resource availability. Here are some examples of how a process can move between different states:

● New to Ready: When a process is created, it is in the new state. It moves to the ready state when the operating system has allocated resources to it and it is ready to be executed.

● Ready to Running: When the CPU becomes available, the operating system selects a process from the ready queue, depending on various scheduling algorithms, and moves it to the running state.

● Running to Blocked: When a process needs to wait for an event to occur (an I/O operation or a system call), it moves to the blocked state. For example, if a process needs to wait for user input, it moves to the blocked state until the user provides the input.

● Running to Ready: When a running process is preempted by the operating system, it moves to the ready state. For example, if a higher-priority process becomes ready, the operating system may preempt the running process and move it to the ready state.

● Blocked to Ready: When the event a blocked process was waiting for occurs, the process moves to the ready state. For example, if a process was waiting for user input and the input is provided, it moves to the ready state.

● Running to Terminated: When a process completes its execution or is terminated by the operating system, it moves to the terminated state.
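The five states and the legal transitions listed above can be captured in a small piece of code. The following C sketch is purely illustrative (the transition table models the list above, not any particular kernel):

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state;

/* Returns true if the move is one of the legal transitions listed above. */
static bool can_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;                 /* admitted by the OS      */
    case READY:   return to == RUNNING;               /* dispatched onto the CPU */
    case RUNNING: return to == READY ||               /* preempted               */
                         to == BLOCKED ||             /* waits for an event/I/O  */
                         to == TERMINATED;            /* finished or killed      */
    case BLOCKED: return to == READY;                 /* awaited event occurred  */
    default:      return false;                       /* TERMINATED is final     */
    }
}

int main(void) {
    printf("RUNNING -> BLOCKED legal? %d\n", can_transition(RUNNING, BLOCKED)); /* 1 */
    printf("BLOCKED -> RUNNING legal? %d\n", can_transition(BLOCKED, RUNNING)); /* 0 */
    return 0;
}
```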

Types of Schedulers
● Long-Term Scheduler: Decides how many processes should be admitted to the ready state. This determines the degree of multiprogramming. Once a decision is taken, it lasts for a long time, which also means the scheduler runs infrequently; hence it is called a long-term scheduler.

● Short-Term Scheduler: Decides which process is to be executed next and then calls the dispatcher. The dispatcher is the software that moves a process from the ready state to the running state and vice versa; in other words, it performs context switching. The short-term scheduler runs frequently and is also called the CPU scheduler.

● Medium-Term Scheduler: Takes the suspension decision. The medium-term scheduler is used for swapping, which is moving a process from main memory to secondary storage and vice versa. Swapping is done to reduce the degree of multiprogramming.

Multiprogramming
We have many processes ready to run. There are two types of multiprogramming:

● Preemptive: A process can be forcefully removed from the CPU. Preemption is the basis of time sharing and multitasking.

● Non-Preemptive: A process is not removed until it completes its execution. Once the CPU is given to a process, the process holds it until it releases the CPU by itself; control cannot be taken back forcibly.

Context Switching of Process

The process of saving the context of one process and loading the context of another process is known as context switching. In simple terms, it is like unloading one process from the running state and loading another process in its place.

When Does Context Switching Happen?

Context switching happens when:

● A high-priority process arrives in the ready state (i.e., one with higher priority than the running process).

● An interrupt occurs.

● A user-to-kernel mode switch takes place (though a mode switch does not always require a context switch).

● Preemptive CPU scheduling is used.

Context Switch vs Mode Switch

A mode switch occurs when the CPU privilege level is changed, for example when a system call is made or a fault occurs. The kernel works in a more privileged mode than a standard user task. If a user process wants to access things that are only accessible to the kernel, a mode switch must occur. The currently executing process need not be changed during a mode switch. A mode switch typically must occur for a process context switch to take place, since only the kernel can cause a context switch.
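The distinction is visible from ordinary user code. In the sketch below (POSIX assumed), getpid() is a system call and therefore triggers a mode switch into the kernel and back, while the plain arithmetic never leaves user mode; neither case requires a context switch to another process.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int x = 2 + 2;            /* pure user-mode computation: no mode switch      */
    pid_t p = getpid();       /* system call: trap to kernel mode and back       */
    printf("x=%d, pid=%d\n", x, (int)p);
    return 0;                 /* process exit is again a system call (mode switch) */
}
```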

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed below:

1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, and so on.

2. Process Privileges: Required to allow or disallow access to system resources.

3. Process ID: Unique identification for each process in the operating system.

4. Pointer: A pointer to the parent process.

5. Program Counter: A pointer to the address of the next instruction to be executed for this process.

6. CPU Registers: The various CPU registers whose contents must be saved when the process leaves the running state, so that execution can resume later.

7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.

8. Memory Management Information: Information such as the page table, memory limits, and segment table, depending on the memory system used by the operating system.

9. Accounting Information: The amount of CPU time used for process execution, time limits, execution ID, etc.

10. I/O Status Information: A list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems.


The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.
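Conceptually, a PCB is a record holding the fields listed above. A simplified C sketch follows (the field names and sizes are illustrative; a real kernel structure, such as Linux's task_struct, is far larger):

```c
#include <stdint.h>

typedef enum { P_READY, P_RUNNING, P_WAITING, P_TERMINATED } state_t;

/* A toy Process Control Block mirroring the fields listed above. */
struct pcb {
    int        pid;              /* 3. unique process ID                  */
    state_t    state;            /* 1. current process state              */
    int        privileges;       /* 2. access rights to system resources  */
    struct pcb *parent;          /* 4. pointer to the parent process      */
    uintptr_t  program_counter;  /* 5. next instruction to execute        */
    uintptr_t  registers[16];    /* 6. saved CPU register contents        */
    int        priority;         /* 7. CPU scheduling information         */
    uintptr_t  page_table_base;  /* 8. memory management information      */
    uint64_t   cpu_time_used;    /* 9. accounting information             */
    int        open_devices[8];  /* 10. I/O status information            */
};
```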

Process Schedulers in Operating System

A process is an instance of a computer program in execution.

● Scheduling is important in operating systems with multiprogramming, as multiple processes might be eligible for running at a time.

● One of the key responsibilities of an operating system (OS) is to decide which programs will execute on the CPU.

● Process schedulers are fundamental components of operating systems, responsible for deciding the order in which processes are executed by the CPU. In simpler terms, they manage how the CPU allocates its time among the multiple tasks or processes competing for its attention.

What is Process Scheduling?

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process based on a particular strategy. Throughout its lifetime, a process moves between various scheduling queues, such as the ready queue, waiting queue, or device queues.

Categories of Scheduling
Scheduling falls into one of two categories:

● Non-Preemptive: In this case, the CPU cannot be taken away from a process before the process has finished running; the CPU is reassigned only when the running process finishes and transitions to a waiting state.

● Preemptive: In this case, the OS can switch a process from the running state to the ready state. This switching happens because the OS may give another process priority and substitute the higher-priority process for the currently active one.

See the section on Preemptive and Non-Preemptive Scheduling below for details.

Types of Process Schedulers

There are three types of process schedulers:

1. Long-Term or Job Scheduler

The long-term scheduler loads a process from disk into main memory for execution; it admits the new process to the ready state.

● It mainly moves processes from the job queue to the ready queue.

● It controls the degree of multiprogramming, i.e., the number of processes present in the ready state or in main memory at any point in time.

● It is important that the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks spend most of their time in input and output operations, while CPU-bound processes spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two.

● In some systems, the long-term scheduler might not even exist. For example, in time-sharing systems like Microsoft Windows, there is usually no long-term scheduler; instead, every new process is directly added to memory for the short-term scheduler to handle.

● It is the slowest among the three (which is why it is called long-term).

2. Short-Term or CPU Scheduler

The CPU scheduler is responsible for selecting one process from the ready state and assigning the CPU to it.

● The short-term scheduler (STS) must select a new process for the CPU frequently to avoid starvation.

● The CPU scheduler uses different scheduling algorithms to balance the allocation of CPU time.

● It picks a process from the ready queue.

● Its main objective is to make the best use of the CPU.

● It mainly calls the dispatcher.

● It is the fastest among the three (which is why it is called short-term).

The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (ready to running state). Context switching is done by the dispatcher only. A dispatcher does the following work:

● Saves the context (process control block) of the previously running process if it has not finished.

● Switches the system from kernel mode to user mode.

● Jumps to the proper location in the newly loaded program.

The time taken by the dispatcher is called dispatch latency or process context switch time.
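The dispatcher's job of saving one context and jumping into the newly selected task can be imitated in user space with the POSIX ucontext API (still provided on Linux, though marked obsolescent). In this illustrative sketch, a "dispatcher" and one task alternate on the CPU; each swapcontext() call saves one context and restores the other, which is exactly the save/restore step described above:

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];            /* the task's private stack */

static void task(void) {
    for (int i = 0; i < 3; i++) {
        printf("task: running, step %d\n", i);
        swapcontext(&task_ctx, &main_ctx);    /* save task, resume dispatcher */
    }
}

int main(void) {
    getcontext(&task_ctx);                    /* initialize the task context */
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;             /* return here if the task ends */
    makecontext(&task_ctx, task, 0);

    for (int i = 0; i < 3; i++) {
        printf("dispatcher: giving CPU to task\n");
        swapcontext(&main_ctx, &task_ctx);    /* save dispatcher, run task */
    }
    return 0;
}
```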


3. Medium-Term Scheduler

The medium-term scheduler (MTS) is responsible for moving a process from memory to disk (swapping).

● It reduces the degree of multiprogramming (the number of processes present in main memory).

● A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may also be necessary to improve the process mix (of CPU-bound and I/O-bound processes).

● When needed, it brings the process back into memory so it can pick up right where it left off.

● It is faster than the long-term scheduler and slower than the short-term scheduler.

Comparison Among Schedulers

● Long-Term Scheduler: It is a job scheduler. It is the slowest of the three. It controls the degree of multiprogramming. It is barely present or nonexistent in time-sharing systems. It loads processes from the job pool into main memory for execution.

● Short-Term Scheduler: It is a CPU scheduler. Its speed is the fastest among the three. It gives less control over how much multiprogramming is done. It is minimal in a time-sharing system. It selects those processes which are ready to execute.

● Medium-Term Scheduler: It is a process-swapping scheduler. Its speed lies in between the short-term and long-term schedulers. It reduces the degree of multiprogramming. It is a component of time-sharing systems. It can re-introduce a process into memory, allowing its execution to be continued.

Thread in Operating System

A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes. Each thread belongs to exactly one process.

● In an operating system that supports multithreading, a process can consist of many threads. But threads can run truly in parallel only if there is more than one CPU; otherwise, the threads must take turns (context switch) on the single CPU.

● All threads belonging to the same process share the code section, data section, and OS resources (e.g., open files and signals).

● But each thread has its own (kept in a thread control block): thread ID, program counter, register set, and stack.

● Any operating system process can execute a thread; that is, a single process can have multiple threads.
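Splitting a process into threads can be done with the POSIX threads API. A minimal sketch (link with -pthread; the worker function and counter are illustrative): both threads share the process's global data section, but each runs on its own stack with its own registers, so access to the shared counter must be synchronized.

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                        /* data section: shared by all threads        */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int id = *(int *)arg;                      /* local variable: lives on this thread's stack */
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);             /* shared data needs synchronization */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %d done\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);   /* each thread gets its own TCB, PC, stack */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* 2000 */
    return 0;
}
```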

Why Do We Need Threads?

● Threads run in a concurrent manner, which improves application performance. Each such thread has its own CPU state and stack, but threads share the address space of the process and its environment. For example, when we work in Microsoft Word or Google Docs, we notice that while we are typing, multiple things happen together (formatting is applied, the page is changed, and auto-save happens).

● Threads can share common data, so they do not need to use inter-process communication. Like processes, threads also have states like ready, executing, blocked, etc.

● Priority can be assigned to threads just like processes, and the highest-priority thread is scheduled first.

● Each thread has its own Thread Control Block (TCB). Like a process, a thread undergoes context switches, and register contents are saved in the TCB. As threads share the same address space and resources, synchronization is also required for the various activities of the threads.

Components of Threads

These are the basic components of a thread:

● Stack Space: Stores local variables, function calls, and return addresses specific to the thread.

● Register Set: Holds temporary data and intermediate results of the thread's execution.

● Program Counter: Tracks the current instruction being executed by the thread.

Types of Thread in Operating System

Threads are of two types, described below:

● User-Level Thread

● Kernel-Level Thread
1. User-Level Thread

A user-level thread is a thread that is not created using system calls. The kernel plays no part in the management of user-level threads, so they can be easily implemented by the user. To the kernel, a process using user-level threads appears as, and is managed like, a single-threaded process. Let's look at the advantages and disadvantages of user-level threads.

Advantages of User-Level Threads

● Implementation of user-level threads is easier than that of kernel-level threads.

● Context switch time is lower for user-level threads.

● User-level threads are more efficient than kernel-level threads.

● Because only a program counter, register set, and stack space are needed, a user-level thread has a simple representation.

Disadvantages of User-Level Threads

● The operating system is unaware of user-level threads, so kernel-level optimizations, like load balancing across CPUs, are not utilized.

● If a user-level thread makes a blocking system call, the entire process (and all its threads) is blocked, reducing efficiency.

● User-level thread scheduling is managed by the application, which can become complex and may not be as optimized as kernel-level scheduling.

2. Kernel-Level Threads

A kernel-level thread is a thread that the operating system itself recognizes and manages. The kernel maintains a thread table to keep track of all threads in the system. Kernel-level threads have somewhat longer context-switching times, because the kernel is involved in their management.

Advantages of Kernel-Level Threads

● Kernel-level threads can run on multiple processors or cores simultaneously, enabling better utilization of multicore systems.

● The kernel is aware of all threads, allowing it to manage and schedule them effectively across available resources.

● Applications that block frequently are handled well by kernel-level threads: if one thread blocks, the kernel can schedule another.

● The kernel can distribute threads across CPUs, ensuring optimal load balancing and system performance.


Disadvantages of Kernel-Level Threads

● Context switching between kernel-level threads is slower compared to user-level threads, because it requires mode switching between user and kernel space.

● Managing kernel-level threads involves frequent system calls and kernel interactions, leading to increased CPU overhead.

● A large number of threads may overload the kernel scheduler, leading to potential performance degradation in systems with many threads.

● Implementation of this type of thread is a little more complex than that of a user-level thread.

For more, refer to the difference between user-level threads and kernel-level threads.

Difference Between Process and Thread

The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces. Threads are not independent of one another the way processes are; as a result, threads share their code section, data section, and OS resources (like open files and signals) with other threads. But, like a process, a thread has its own program counter (PC), register set, and stack space.

For more, refer to the difference between process and thread.

What is Multi-Threading?

A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, and so on. More advantages of multithreading are discussed below.

Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the resources of a single process, such as the CPU, memory, and I/O devices.

Single-Threaded vs Multi-Threaded Process

Multithreading can be done without OS support, as seen in Java's multithreading model. In Java, threads are implemented using the Java Virtual Machine (JVM), which provides its own thread management. These threads, also called user-level threads, are managed independently of the underlying operating system.

The application itself manages the creation, scheduling, and execution of threads without relying on the operating system's kernel. The application contains a threading library that handles thread creation, scheduling, and context switching. The operating system is unaware of user-level threads and treats the entire process as a single-threaded entity.

Benefits of Threads in Operating System

● Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.

● Faster context switch: Context switch time between threads is lower than for a process context switch, which requires more overhead from the CPU.

● Effective utilization of multiprocessor systems: If we have multiple threads in a single process, we can schedule them on multiple processors. This makes process execution faster.

● Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: stacks and registers cannot be shared among threads; each thread has its own stack and registers.

● Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must use specific inter-process communication techniques to communicate.

● Enhanced throughput of the system: If a process is divided into multiple threads, and each thread's function is considered one job, then the number of jobs completed per unit of time increases, thus increasing the throughput of the system.


SCHEDULING ALGORITHMS:

Preemptive and Non-Preemptive Scheduling

In operating systems, scheduling is the method by which processes are given access to the CPU. Efficient scheduling is essential for optimal system performance and user experience. There are two primary types of CPU scheduling: preemptive and non-preemptive.

Preemptive Scheduling
The operating system can interrupt or preempt a running process to allocate CPU time to another process, typically based on priority or time-sharing policies. Mainly, a process is switched from the running state to the ready state. Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version), etc.

For example, a running process P2 may be preempted at time 1 due to the arrival of a higher-priority process.

Advantages of Preemptive Scheduling

● Because a process may not monopolize the processor, it is a more reliable method and does not allow a denial-of-service situation caused by a single process.

● Each preemption occurrence prevents ongoing tasks from blocking all other processes until completion.

● The average response time is improved. Utilizing this method in a multiprogramming environment is more advantageous.

● Most modern operating systems (Windows, Linux, and macOS) implement preemptive scheduling.

Disadvantages of Preemptive Scheduling

● It is more complex to implement in operating systems.

● Suspending the running process, changing the context, and dispatching the new incoming process all take more time.

● It might cause starvation: a low-priority process might be preempted again and again if multiple high-priority processes arrive.

● It causes concurrency problems, as processes can be stopped while they are accessing shared memory (or variables) or resources.

Non-Preemptive Scheduling
In non-preemptive scheduling, a running process cannot be interrupted by the operating system; it voluntarily relinquishes control of the CPU. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.

Algorithms based on non-preemptive scheduling are First Come First Serve (FCFS), Shortest Job First (SJF, basically non-preemptive), Priority (non-preemptive version), etc. Under FCFS, for example, every process finishes execution once it gets the CPU.

Advantages of Non-Preemptive Scheduling

● It is easy to implement in an operating system. It was used in Windows 3.11 and early macOS.

● It has a minimal scheduling burden.

● It uses less computational resources.

Disadvantages of Non-Preemptive Scheduling

● It is open to denial of service: a malicious process can take the CPU forever.

● Since round robin cannot be implemented, the average response time becomes higher.

Differences Between Preemptive and Non-Preemptive Scheduling

● In preemptive scheduling, there is the overhead of switching processes between the ready and running states and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.

● Preemptive scheduling attains flexibility by allowing critical processes to access the CPU as they arrive in the ready queue, no matter what process is executing currently. Non-preemptive scheduling is called rigid because, even if a critical process enters the ready queue, the process running on the CPU is not disturbed.

● Preemptive scheduling has to maintain the integrity of shared data, which is why it has an associated cost; this is not the case with non-preemptive scheduling.

Parameter-wise comparison of preemptive and non-preemptive scheduling:

● Basic: Preemptive - resources (CPU cycles) are allocated to a process for a limited time. Non-preemptive - once resources (CPU cycles) are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.

● Interrupt: Preemptive - a process can be interrupted in between. Non-preemptive - a process cannot be interrupted until it terminates itself or its time is up.

● Starvation: Preemptive - if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. Non-preemptive - if a process with a long burst time is running on the CPU, a later process with a smaller CPU burst time may starve.

● Overhead: Preemptive - it has the overhead of scheduling the processes. Non-preemptive - it does not have such overhead.

● Flexibility: Preemptive - flexible. Non-preemptive - rigid.

● Cost: Preemptive - cost associated. Non-preemptive - no cost associated.

● Response Time: Preemptive - response time is lower. Non-preemptive - response time is higher.

● Decision Making: Preemptive - decisions are made by the scheduler, based on priority and time-slice allocation. Non-preemptive - decisions are made by the process itself, and the OS just follows the process's instructions.

● Process Control: Preemptive - the OS has greater control over the scheduling of processes. Non-preemptive - the OS has less control over the scheduling of processes.

● Context-Switching Overhead: Preemptive - higher overhead due to frequent context switching. Non-preemptive - lower overhead since context switching is less frequent.

● Concurrency Overhead: Preemptive - more, as a process might be preempted while it was accessing a shared resource. Non-preemptive - less, as a process is never preempted.

● Examples: Preemptive - Round Robin and Shortest Remaining Time First. Non-preemptive - First Come First Serve and Shortest Job First.

CPU Scheduling in Operating Systems

CPU scheduling is the process the operating system uses to decide which task or process gets to use the CPU at a particular time. This is important because a CPU can only handle one task at a time, but there are usually many tasks that need to be processed. CPU scheduling has the following purposes:
● Maximize CPU utilization.

● Minimize the response and waiting time of processes.

What is the Need for a CPU Scheduling Algorithm?

CPU scheduling decides which process will own the CPU while another process is suspended. Its main function is to ensure that whenever the CPU would otherwise remain idle, the OS selects one of the processes available in the ready queue.

In multiprogramming, if the long-term scheduler selects mostly I/O-bound processes, then much of the time the CPU remains idle. An effective scheduling algorithm improves resource utilization.

Terminologies Used in CPU Scheduling

● Arrival Time: The time at which the process arrives in the ready queue.

● Completion Time: The time at which the process completes its execution.

● Burst Time: The time required by a process for CPU execution.

● Turn Around Time: The difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

● Waiting Time (W.T): The difference between turnaround time and burst time.

Waiting Time = Turn Around Time – Burst Time
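Since both formulas are simple arithmetic, they are easy to check in code. A small sketch follows (the completion times below are illustrative, as if produced by some schedule):

```c
#include <stdio.h>

struct proc { int arrival, burst, completion; };

int main(void) {
    /* illustrative data: completion times as produced by some schedule */
    struct proc p[] = { {0, 8, 8}, {1, 4, 12}, {2, 9, 21} };
    int n = sizeof p / sizeof p[0];

    for (int i = 0; i < n; i++) {
        int tat = p[i].completion - p[i].arrival;  /* TAT = CT - AT  */
        int wt  = tat - p[i].burst;                /* WT  = TAT - BT */
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, tat, wt);
    }
    return 0;
}
```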

CPU Scheduling Algorithms

Let us now learn about these CPU scheduling algorithms in operating systems one by one:

● FCFS - First Come, First Serve

● SJF - Shortest Job First

● SRTF - Shortest Remaining Time First

● Round Robin

● Priority Scheduling
Operating System Handout

Unit IV – CPU Scheduling and Algorithm


Section 4.1 Scheduling types
Scheduling Objectives
 Be Fair while allocating resources to the processes
 Maximize throughput of the system
 Maximize number of users receiving acceptable response times.
 Be predictable
 Balance resource use
 Avoid indefinite postponement
 Enforce Priorities
 Give preference to processes holding key resources
 Give better service to processes that have desirable behaviour patterns

CPU and I/O Burst Cycle:


 Process execution consists of a cycle of CPU execution and I/O wait.
 Processes alternate between these two states.
 Process execution begins with a CPU burst, followed by an I/O burst, then another CPU
burst ... etc
 The last CPU burst will end with a system request to terminate execution rather than
with another I/O burst.
 The durations of these CPU bursts have been measured.
 An I/O-bound program would typically have many short CPU bursts, while a CPU-bound program might have a few very long CPU bursts.
 This can help in selecting an appropriate CPU-scheduling algorithm.


Preemptive Scheduling:
 Preemptive scheduling is used when a process switches from running state to ready
state or from waiting state to ready state.
 The resources (mainly CPU cycles) are allocated to the process for a limited amount of time, then taken away, and the process is placed back in the ready queue if it still has CPU burst time remaining.
 That process stays in the ready queue till it gets its next chance to execute.

Non-Preemptive Scheduling:
 Non-preemptive scheduling is used when a process terminates, or a process switches from the running to the waiting state.
 In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets terminated or reaches a waiting state.
 Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
 Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to another process.

Basis for comparison:

 Basic: Preemptive - the resources are allocated to a process for a limited time. Non-preemptive - once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
 Interrupt: Preemptive - a process can be interrupted in between. Non-preemptive - a process cannot be interrupted till it terminates or switches to the waiting state.
 Starvation: Preemptive - if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. Non-preemptive - if a process with a long burst time is running on the CPU, another process with a smaller CPU burst time may starve.
 Overhead: Preemptive scheduling has the overhead of scheduling the processes. Non-preemptive scheduling does not have such overhead.
 Flexibility: Preemptive scheduling is flexible. Non-preemptive scheduling is rigid.
 Cost: Preemptive scheduling has an associated cost. Non-preemptive scheduling does not.

Scheduling Criteria

 There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
o Throughput - The number of processes completed per unit time. This may range from 10 per second to 1 per hour, depending on the specific processes.
o Turnaround time - The time required for a particular process to complete, from submission time to completion.
o Waiting time - How much time processes spend in the ready queue waiting for their turn to get on the CPU.
o Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.

In brief:
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time

4.2 Types of Scheduling Algorithm

(a) First Come First Serve (FCFS)

In FCFS scheduling:
 The process which arrives first in the ready queue is assigned the CPU first.
 In case of a tie, the process with the smaller process ID is executed first.
 It is always non-preemptive in nature.
 Jobs are executed on a first come, first served basis.
 It is easy to understand and implement.
 Its implementation is based on a FIFO queue.
 It is poor in performance, as the average wait time is high.

Advantages-
 It is simple and easy to understand.
 It can be easily implemented using a queue data structure.
 It does not lead to starvation.
Disadvantages-
 It does not consider the priority or burst time of the processes.
 It suffers from the convoy effect, i.e., processes with smaller burst times get stuck waiting behind processes with higher burst times that arrived earlier.

Example 1:
Consider the processes P1, P2, P3 given in the below table, arrives for execution in
the same order, with Arrival Time 0, and given Burst Time,
PROCESS ARRIVAL TIME BURST TIME
P1 0 24
P2 0 3
P3 0 3
Gantt chart

P1 P2 P3
0 24 27 30


PROCESS WAIT TIME TURN AROUND TIME


P1 0 24
P2 24 27
P3 27 30

Total Wait Time = 0 + 24 + 27 = 51 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms

Total Turn Around Time: 24 + 27 + 30 = 81 ms

Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= 81 / 3 = 27 ms
Throughput = 3 jobs/30 sec = 0.1 jobs/sec
Example 2:
Consider the processes P1, P2, P3, P4 given in the below table, arrives for execution
in the same order, with given Arrival Time and Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Gantt chart
P1 P2 P3 P4
0 8 12 21 26

PROCESS WAIT TIME TURN AROUND TIME


P1 0 8–0=8
P2 8–1=7 12 – 1 = 11
P3 12 – 2 = 10 21 – 2 = 19
P4 21 – 3 = 18 26 – 3 = 23

Total Wait Time:= 0 + 7 + 10 + 18 = 35 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes)= 35/4 = 8.75 ms

Total Turn Around Time: 8 + 11 + 19 + 23 = 61 ms

Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
61/4 = 15.25 ms

Throughput: 4 jobs/26 sec = 0.15385 jobs/sec
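The FCFS computations above can be reproduced with a short program. A minimal sketch using the data of Example 2 (it assumes the processes are already sorted by arrival time):

```c
#include <stdio.h>

int main(void) {
    /* data from FCFS Example 2: arrival and burst times, sorted by arrival */
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {8, 4, 9, 5};
    int n = 4, time = 0;
    float total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];  /* CPU idles until the process arrives */
        time += burst[i];                          /* non-preemptive: run to completion   */
        int tat = time - arrival[i];               /* TAT = CT - AT */
        int wt  = tat - burst[i];                  /* WT  = TAT - BT */
        printf("P%d: wait=%d turnaround=%d\n", i + 1, wt, tat);
        total_wt += wt; total_tat += tat;
    }
    printf("avg wait=%.2f avg turnaround=%.2f\n", total_wt / n, total_tat / n);
    return 0;
}
```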


(b) Shortest Job First (SJF)

 The process which has the shortest burst time is scheduled first.
 If two processes have the same burst time, then FCFS is used to break the tie.
 It can operate in non-preemptive or preemptive mode.
 It is the best approach to minimize waiting time.
 It is easy to implement in batch systems, where the required CPU time is known in advance.
 It is impossible to implement in interactive systems, where the required CPU time is not known.
 The processor should know in advance how much time the process will take.
 The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).

Advantages-
 SJF (and its preemptive form, SRTF) is optimal and guarantees the minimum average waiting time.
 It provides a standard for other algorithms, since no other algorithm performs better than it.

Disadvantages-
 It cannot be implemented practically, since the burst times of processes cannot be known in advance.
 It leads to starvation for processes with larger burst times.
 Priorities cannot be set for the processes.
 Processes with larger burst times have poor response times.

Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turnaround time.
Solution-
Gantt Chart-

P4 P1 P3 P5 P2
0  6  7  9  12 16

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time


Process Id Exit time Turn Around time Waiting time


P1 7 7–3=4 4–1=3
P2 16 16 – 1 = 15 15 – 4 = 11
P3 9 9–4=5 5–2=3
P4 6 6–0=6 6–6=0
P5 12 12 – 2 = 10 10 – 3 = 7
Now,
 Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
 Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
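The same non-preemptive SJF schedule can be generated programmatically: at each decision point, pick the arrived, unfinished process with the smallest burst time. A minimal sketch over the data of Example-01 (ties broken by earlier arrival, i.e., FCFS):

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {3, 1, 4, 0, 2};   /* P1..P5 from Example-01 */
    int burst[]   = {1, 4, 2, 6, 3};
    int done[5]   = {0};
    int n = 5, time = 0, completed = 0;

    while (completed < n) {
        int pick = -1;
        /* among arrived, unfinished processes choose the shortest burst */
        for (int i = 0; i < n; i++) {
            if (done[i] || arrival[i] > time) continue;
            if (pick < 0 || burst[i] < burst[pick] ||
                (burst[i] == burst[pick] && arrival[i] < arrival[pick]))
                pick = i;
        }
        if (pick < 0) { time++; continue; }   /* nothing has arrived: CPU idle */
        time += burst[pick];                  /* non-preemptive: run to completion */
        done[pick] = 1;
        completed++;
        int tat = time - arrival[pick];
        printf("P%d: exit=%d turnaround=%d waiting=%d\n",
               pick + 1, time, tat, tat - burst[pick]);
    }
    return 0;
}
```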

Example-02:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and
average turnaround time.
Solution-
Gantt Chart-

P4 P2 P1 P2 P3 P5 P4
0  1  3  4  6  8  11 16

Process Id Exit time Turn Around time Waiting time


P1 4 4–3=1 1–1=0
P2 6 6–1=5 5–4=1
P3 8 8–4=4 4–2=2
P4 16 16 – 0 = 16 16 – 6 = 10
P5 11 11 – 2 = 9 9–3=6

Now,

 Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit


 Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit


Example-03:

Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 7
P2 1 5
P3 2 3
P4 3 1
P5 4 2
P6 5 1

If the CPU scheduling policy is shortest remaining time first, calculate the average
waiting time and average turnaround time.
Solution-
Gantt Chart-

P1 P2 P3 P4 P3 P6 P5 P2 P1
0  1  2  3  4  6  7  9  13 19

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 19 19 – 0 = 19 19 – 7 = 12
P2 13 13 – 1 = 12 12 – 5 = 7
P3 6 6–2=4 4–3=1
P4 4 4–3=1 1–1=0
P5 9 9–4=5 5–2=3
P6 7 7–5=2 2–1=1

Now,
 Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit
 Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit


Example -04:

Consider the set of 3 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 9
P2 1 4
P3 2 9

If the CPU scheduling policy is SRTF, calculate the average waiting time and average
turn around time.

Solution-
Gantt Chart-

P1 P2 P1 P3
0  1  5  13 22

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 13 13 – 0 = 13 13 – 9 = 4
P2 5 5–1=4 4–4=0
P3 22 22 – 2 = 20 20 – 9 = 11

Now,
 Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
 Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit

Example-05:

Consider the set of 4 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 20
P2 15 25
P3 30 10
P4 45 15


If the CPU scheduling policy is SRTF, calculate the waiting time of process P2.

Solution-

Gantt Chart-

P1 P2 P3 P2 P4
0  20 30 40 55 70

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Thus,
 Turn Around Time of process P2 = 55 – 15 = 40 unit
 Waiting time of process P2 = 40 – 25 = 15 unit
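SRTF can be simulated by re-evaluating the shortest remaining time at every time unit. A minimal sketch over the data of Example-05 (it reproduces P2's waiting time of 15 units):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    int arrival[] = {0, 15, 30, 45};           /* P1..P4 from Example-05 */
    int burst[]   = {20, 25, 10, 15};
    int rem[]     = {20, 25, 10, 15};
    int n = 4, finished = 0;

    for (int time = 0; finished < n; time++) {
        int pick = -1, best = INT_MAX;
        for (int i = 0; i < n; i++)            /* shortest remaining time among arrived */
            if (arrival[i] <= time && rem[i] > 0 && rem[i] < best) {
                best = rem[i]; pick = i;
            }
        if (pick < 0) continue;                /* CPU idle this tick */
        if (--rem[pick] == 0) {                /* run the chosen process for one time unit */
            finished++;
            int tat = (time + 1) - arrival[pick];
            printf("P%d: exit=%d turnaround=%d waiting=%d\n",
                   pick + 1, time + 1, tat, tat - burst[pick]);
        }
    }
    return 0;
}
```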

(c) Round Robin Scheduling


 CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
 This fixed amount of time is called as time quantum or time slice.
 After the time quantum expires, the running process is preempted and sent to the
ready queue.
 Then, the processor is assigned to the next arrived process.
 It is always preemptive in nature.


Advantages-

 It gives the best performance in terms of average response time.


 It is best suited for time sharing system, client server architecture and
interactive system.

Disadvantages-

 It leads to starvation for processes with larger burst time as they have to repeat
the cycle many times.
 Its performance heavily depends on time quantum.
 Priorities can not be set for the processes.

With decreasing value of time quantum,


 Number of context switch increases
 Response time decreases
 Chances of starvation decreases

Thus, smaller value of time quantum is better in terms of response time.

With increasing value of time quantum,


 Number of context switch decreases
 Response time increases
 Chances of starvation increases

Thus, higher value of time quantum is better in terms of number of context switch.

 With increasing value of time quantum, Round Robin Scheduling tends to


become FCFS Scheduling.
 When time quantum tends to infinity, Round Robin Scheduling becomes FCFS
Scheduling.
 The performance of Round Robin scheduling heavily depends on the value of
time quantum.
 The value of time quantum should be such that it is neither too big nor too
small.

Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3


If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate
the average waiting time and average turnaround time.
Solution-
Ready Queue- P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart-

P1 P2 P3 P1 P4 P5 P2 P1 P5
0  2  4  5  7  9  11 12 13 14

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 13 13 – 0 = 13 13 – 5 = 8
P2 12 12 – 1 = 11 11 – 3 = 8
P3 5 5–2=3 3–1=2
P4 9 9–3=6 6–2=4
P5 14 14 – 4 = 10 10 – 3 = 7
Now,
 Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
 Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 4

P2 1 5

P3 2 2

P4 3 1

P5 4 6

P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average
waiting time and average turnaround time.
Solution-
Ready Queue- P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart-

P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
0  2  4  6  8  9  11 13 15 17 18 19 21

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time


Process Id Exit time Turn Around time Waiting time


P1 8 8–0=8 8–4=4
P2 18 18 – 1 = 17 17 – 5 = 12
P3 6 6–2=4 4–2=2
P4 9 9–3=6 6–1=5
P5 21 21 – 4 = 17 17 – 6 = 11
P6 19 19 – 6 = 13 13 – 3 = 10
Now,
 Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.84 unit
 Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit
Problem-03: Consider the set of 6 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time
P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the
average waiting time and average turnaround time.
Solution-
Ready Queue- P3, P1, P4, P2, P3, P6, P1, P4, P2, P3, P5, P4
Gantt chart- (the CPU is idle from 0 to 1, since the first process arrives at time 1)

idle P4 P5 P3 P2 P4 P1 P6 P3 P2 P4 P1 P3
0    1  4  6  9  12 15 18 21 24 27 30 32 33

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 32 32 – 5 = 27 27 – 5 = 22
P2 27 27 – 4 = 23 23 – 6 = 17
P3 33 33 – 3 = 30 30 – 7 = 23
P4 30 30 – 1 = 29 29 – 9 = 20
P5 6 6–2=4 4–2=2
P6 21 21 – 6 = 15 15 – 3 = 12


Now,

 Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit


 Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit
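Round Robin bookkeeping, a FIFO ready queue plus a time quantum, can be simulated directly. A minimal sketch over the data of Example-01 with time quantum = 2; following the convention of the ready queues above, processes that arrive during a slice are queued ahead of the preempted process:

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3, 4};   /* P1..P5 from Example-01 */
    int burst[]   = {5, 3, 1, 2, 3};
    int rem[]     = {5, 3, 1, 2, 3};
    int n = 5, quantum = 2, time = 0, finished = 0;
    int queue[64], head = 0, tail = 0, next_arrival = 0;

    while (finished < n) {
        if (head == tail) {                       /* ready queue empty: jump to next arrival */
            time = arrival[next_arrival];
            queue[tail++] = next_arrival++;
        }
        int p = queue[head++];                    /* pop the front of the ready queue */
        int run = rem[p] < quantum ? rem[p] : quantum;
        time += run;
        rem[p] -= run;
        /* processes that arrived during this slice enter before the preempted one */
        while (next_arrival < n && arrival[next_arrival] <= time)
            queue[tail++] = next_arrival++;
        if (rem[p] > 0) {
            queue[tail++] = p;                    /* preempted: back of the queue */
        } else {
            finished++;
            int tat = time - arrival[p];
            printf("P%d: exit=%d turnaround=%d waiting=%d\n",
                   p + 1, time, tat, tat - burst[p]);
        }
    }
    return 0;
}
```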

(d) Priority Scheduling


 Out of all the available processes, CPU is assigned to the process having the
highest priority.
 In case of a tie, it is broken by FCFS Scheduling.
 Priority Scheduling can be used in both preemptive and non-preemptive mode.

 The waiting time for the process having the highest priority will always be zero in
preemptive mode.
 The waiting time for the process having the highest priority may not be zero in non-
preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions-
 The arrival time of all the processes is the same
 All the processes become available at the same time
Advantages-
 It considers the priority of the processes and allows the important processes to run first.
 Priority scheduling in pre-emptive mode is best suited for real-time operating systems.
Disadvantages-
 Processes with lesser priority may starve for the CPU.
 The waiting time and response time of lower-priority processes cannot be predicted.

Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority

P1 0 4 2

P2 1 3 3

P3 2 1 4

P4 3 5 5

P5 4 2 5

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)


Solution-
Gantt Chart-

P1 P4 P5 P3 P2
0  4  9  11 12 15

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 4 4–0=4 4–4=0
P2 15 15 – 1 = 14 14 – 3 = 11
P3 12 12 – 2 = 10 10 – 1 = 9
P4 9 9–3=6 6–5=1
P5 11 11 – 4 = 7 7–2=5
Now,
 Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
 Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit

Problem-02: Consider the set of 5 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority preemptive, calculate the average waiting
time and average turn around time. (Higher number represents higher priority).
Solution-
Gantt Chart-

P1 P2 P3 P4 P5 P2 P1
0  1  2  3  8  10 12 15

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3–2=1 1–1=0
P4 8 8–3=5 5–5=0
P5 10 10 – 4 = 6 6–2=4


Now,
 Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
 Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
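Preemptive priority scheduling can be simulated by re-checking, at each time unit, the highest-priority process that has arrived. A minimal sketch over the data of Problem-02 (higher number means higher priority; ties fall back to FCFS):

```c
#include <stdio.h>

int main(void) {
    int arrival[]  = {0, 1, 2, 3, 4};   /* P1..P5 from Problem-02 */
    int burst[]    = {4, 3, 1, 5, 2};
    int priority[] = {2, 3, 4, 5, 5};   /* higher number = higher priority */
    int rem[]      = {4, 3, 1, 5, 2};
    int n = 5, finished = 0;

    for (int time = 0; finished < n; time++) {
        int pick = -1;
        /* highest priority among arrived; on a tie the earlier index (earlier arrival) wins */
        for (int i = 0; i < n; i++) {
            if (arrival[i] > time || rem[i] == 0) continue;
            if (pick < 0 || priority[i] > priority[pick]) pick = i;
        }
        if (pick < 0) continue;          /* CPU idle */
        if (--rem[pick] == 0) {          /* run the chosen process for one time unit */
            finished++;
            int tat = (time + 1) - arrival[pick];
            printf("P%d: exit=%d turnaround=%d waiting=%d\n",
                   pick + 1, time + 1, tat, tat - burst[pick]);
        }
    }
    return 0;
}
```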

(e) Multilevel Queue Scheduling


A multi-level queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type. Each queue has
its own scheduling algorithm.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue,
for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the ready
queue while a batch process was running, the batch process will be pre-empted.

4.3 Deadlock
 Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
 For example, Process 1 may be holding Resource 1 and waiting for Resource 2, which is held by Process 2, while Process 2 is waiting for Resource 1.


Deadlock can arise if the following four necessary conditions hold simultaneously.

1. Mutual Exclusion: One or more resources are non-sharable, meaning only one process can use a resource at a time.
2. Hold and Wait: A process is holding at least one resource and waiting for additional resources.
3. No Preemption: A resource cannot be taken from a process unless the process releases the resource voluntarily.
4. Circular Wait: A set of processes are waiting for each other in circular form, i.e., all the processes are waiting for resources in a cyclic manner, such that the last process is waiting for a resource held by the first process.
Difference between Starvation and Deadlock

1. Deadlock is a situation where every process in a set is blocked and no process proceeds. Starvation is a situation where a low-priority process is blocked while high-priority processes proceed.

2. Deadlock is an infinite waiting. Starvation is a long waiting, but not infinite.

3. Every deadlock is always a starvation. Every starvation need not be a deadlock.

4. In deadlock, the requested resource is blocked by another process. In starvation, the requested resource is continuously used by higher-priority processes.

5. Deadlock happens when mutual exclusion, hold and wait, no preemption, and circular wait occur simultaneously. Starvation occurs due to uncontrolled priority and resource management.

Deadlock Handling
The various strategies for handling deadlock are-
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance
1. Deadlock Prevention
 Deadlocks can be prevented by preventing at least one of the four required
conditions:
Mutual Exclusion
 Shared resources such as read-only files do not lead to deadlocks.
 Unfortunately, some resources, such as printers and tape drives, require exclusive
access by a single process.
Hold and Wait
 To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others.


No Preemption
 Preemption of process resource allocations can prevent this condition of deadlocks,
when it is possible.
Circular Wait
 One way to avoid circular wait is to number all resources and to require that processes request resources only in strictly increasing (or decreasing) order, as illustrated in the sketch below.
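Resource ordering is easy to demonstrate with two locks. A minimal sketch with POSIX threads (the two mutexes stand for two resources; the names are illustrative): both threads acquire r1 strictly before r2, so a circular wait, and therefore this deadlock, cannot form. If one thread instead locked r2 first and r1 second, the two threads could deadlock.

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

static void *worker(void *arg) {
    /* Both threads respect the global order: r1 before r2.
       With a consistent order, a cycle in the wait-for graph is impossible. */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    printf("%s: holding both resources\n", (char *)arg);
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread A");
    pthread_create(&b, NULL, worker, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```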
2. Deadlock Avoidance
 In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs.
 Processing continues as long as the system remains in a safe state.
 If the system would move to an unsafe state, the OS backtracks one step.
 In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system.

3. Deadlock detection and recovery


 This strategy involves waiting until a deadlock occurs.
 After deadlock occurs, the system state is recovered.
 The main challenge with this approach is detecting the deadlock.

4. Deadlock Ignorance
 This strategy involves ignoring the concept of deadlock and assuming it does not exist.
 This strategy helps to avoid the extra overhead of handling deadlock.
 Windows and Linux use this strategy, and it is the most widely used method.
