OS Unit 2
● Resource Allocation: Making sure each process gets the CPU time, memory and I/O devices allocated to it.
● Deadlock Handling: Making sure that the system does not reach a state where processes block each other indefinitely while waiting for resources.
● Inter-Process Communication: Providing mechanisms that allow processes to communicate and synchronize with each other.
Process Operations
A process goes through different states before termination, and these state changes require different operations on the process by the operating system. These operations include process creation, process scheduling, execution, and killing (terminating) the process.
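As a rough illustration of process creation and termination (this sketch is an addition, not part of the original handout), the POSIX fork(), execlp(), and waitpid() calls can be combined as follows; the choice of ls as the child program is arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* process creation: duplicates the caller        */

    if (pid < 0) {                      /* fork failed                                    */
        perror("fork");
        return 1;
    }
    if (pid == 0) {                     /* child process                                  */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image with ls    */
        perror("execlp");               /* reached only if exec fails                     */
        _exit(127);
    }
    /* parent: wait for the child to terminate (observes process termination) */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```

Here fork() creates the new process, the operating system schedules parent and child independently, and waitpid() lets the parent observe the child's termination.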
PROCESS:
When a program is loaded into memory, it becomes a process, and it can be divided into four sections: stack, heap, text and data. A simplified layout of a process inside main memory consists of the following sections:
1 Stack
The process Stack contains the temporary data such as
method/function parameters, return address and local variables.
2 Heap
This is dynamically allocated memory to a process during its run time.
3 Text
This section contains the compiled program code (the program's instructions). The current activity of the process is represented by the value of the Program Counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.
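To make the four sections concrete, here is a small C program (added for illustration; the variable names are arbitrary) that prints an address from each region. The exact addresses and their relative order depend on the platform.

```c
#include <stdio.h>
#include <stdlib.h>

int  global_counter = 42;        /* data section (initialized globals/statics)            */
static int zeroed;               /* data/BSS section (uninitialized statics)              */

void show(void) {                /* the function's code itself lives in the text section  */
    int local = 7;               /* stack: local variables, parameters, return addresses  */
    int *dynamic = malloc(sizeof *dynamic);   /* heap: memory allocated at run time       */
    *dynamic = 99;

    printf("text  (code)  : %p\n", (void *)show);
    printf("data  (global): %p\n", (void *)&global_counter);
    printf("bss   (static): %p\n", (void *)&zeroed);
    printf("heap  (malloc): %p\n", (void *)dynamic);
    printf("stack (local) : %p\n", (void *)&local);

    free(dynamic);
}

int main(void) {
    show();
    return 0;
}
```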
In the simplest model, a process is in one of two states:
1. Running: This means the process is actively using the CPU to do its work.
2. Not Running: This means the process is not currently using the CPU; it is waiting for its turn.
When a new process is created, it starts in the not running state. Initially, the process is kept in a queue, and a program called the dispatcher decides which process gets the CPU next.
1. Not Running State: When the process is first created, it is not using the CPU.
2. Waiting in a Queue: The process waits in a queue until the CPU becomes free (available for use).
3. Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU, and it moves into the running state.
In the more detailed model, a process can be in one of the following states:
● New: The process is being created and has not started running yet. It has not been loaded into the main memory, but its process control block (PCB) has been created, which holds important information about the process.
● Ready: The process is loaded into main memory and is waiting for its chance to execute.
● Running: This state means the process is currently being executed by the CPU. Since we’re assuming there is only one CPU, at any given time only one process can be in this state.
● Blocked/Waiting: The process is not executing right now. It is waiting for some event to happen, like the completion of an input/output operation (for example, reading data from a disk). When the event it is waiting for occurs, the process moves to the ready state. For example, if a process was waiting for user input and the input is provided, it moves to the ready state.
● Suspended: The process has been stopped by the user for some reason. At this point, it is swapped out of main memory; it moves back to the ready state when the operating system has allocated main memory to it again.
Types of Schedulers
● Long-Term Scheduler: Decides how many processes should be admitted into the ready queue, i.e., it controls the degree of multiprogramming. The job scheduler is another name for the long-term scheduler.
● Short-Term Scheduler: The short-term scheduler decides which ready process will run on the CPU next. Unlike the long-term scheduler, it does not control the degree of multiprogramming.
Multiprogramming
We have many processes ready to run. There are two types of multiprogramming:
● Non-preemptive: Once a process starts execution, it keeps the CPU until it releases control by itself; control cannot be taken away from it forcibly.
● Preemptive: The CPU can be taken away from the running process before it finishes, for example when its time slice expires or an interrupt occurs.
A mode switch occurs when the CPU privilege level is changed, for example when a system call is made or a fault occurs. The kernel works in a more privileged mode than a standard user task. If a user process wants to access things that are only accessible to the kernel, a mode switch must occur. The currently executing process need not be changed during a mode switch. A mode switch must typically occur for a process context switch to take place. Only the kernel can cause a context switch.
1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, and so on.
2 Process privileges
This is required to allow/disallow access to system resources.
3 Process ID
Unique identification for each process in the operating system.
4 Pointer
A pointer to the parent process.
5 Program Counter
Program Counter is a pointer to the address of the next instruction to
be executed for this process.
6 CPU registers
Various CPU registers whose contents need to be saved and restored so the process can resume execution in the running state.
7 CPU scheduling information
Process priority and other scheduling information required to schedule the process.
8 Memory management information
Information such as page tables, memory limits, and segment tables, depending on the memory system used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time
limits, execution ID etc.
10 IO status information
This includes a list of I/O devices allocated to the process.
The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.
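As a rough sketch of how these ten fields might be grouped, consider the following simplified C structure. It is illustrative only: the field names, types, and sizes are assumptions, and a real kernel's PCB keeps far more state.

```c
#include <stdint.h>

/* Simplified, illustrative Process Control Block.                          */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;           /* 1. process state                    */
    unsigned int    privileges;      /* 2. privilege/permission flags       */
    int             pid;             /* 3. unique process ID                */
    struct pcb     *parent;          /* 4. pointer to the parent process    */
    uintptr_t       program_counter; /* 5. next instruction to execute      */
    uintptr_t       registers[16];   /* 6. saved CPU registers              */
    int             priority;        /* 7. CPU-scheduling information       */
    uintptr_t       page_table_base; /* 8. memory-management information    */
    unsigned long   cpu_time_used;   /* 9. accounting information           */
    int             open_devices[8]; /* 10. I/O status information          */
};
```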
On a single CPU, only one process can be running at a time.
● One of the key responsibilities of an Operating System (OS) is to decide which process gets executed by the CPU. In simpler terms, schedulers manage how the CPU's time is shared among processes.
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. During its lifetime, a process moves between various scheduling queues, such as the ready queue, waiting queue, and device queues.
Categories of Scheduling
Scheduling falls into one of two categories:
● Non-Preemptive: Once the CPU has been allocated to a process, the process keeps it until it terminates or switches to the waiting state.
● Preemptive: The OS can move a process from the running state to the ready state. This switching happens because the CPU may give other processes priority and substitute them for the currently active process.
Long-Term Scheduler (Job Scheduler)
● The long-term scheduler loads a process from disk into main memory for execution, and controls how many processes are in memory at a time.
● I/O-bound processes use much of their time in input and output operations, while CPU-bound processes spend their time on the CPU. The job of the long-term scheduler is to maintain a good mix of the two.
● In some systems, the long-term scheduler might not even exist. For example, in time-sharing systems such as Microsoft Windows, new processes are simply placed in memory for the short-term scheduler to handle.
Short-Term Scheduler (CPU Scheduler)
● The CPU scheduler is responsible for selecting one process from the ready state and scheduling it onto the CPU.
● The STS (Short-Term Scheduler) must select a new process for the CPU frequently, whenever the CPU becomes idle or a running process is preempted.
● The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU and handing control to it; a preempted process goes back to the ready queue if it is not finished. The time the dispatcher needs to stop one process and start another is known as the context switch time.
Medium-Term Scheduler
● The medium-term scheduler swaps processes out of main memory when this may be necessary to improve the process mix (of CPU-bound and I/O-bound processes).
● When needed, it brings a process back into memory, and the process picks up right where it left off.
In summary: the long-term scheduler is a job scheduler, the short-term scheduler is a CPU scheduler, and the medium-term scheduler is a process-swapping scheduler.
Threads
A process can consist of many threads. But threads can be effective only if the system has more than one CPU; otherwise, the threads have to take turns (context switch) on that single CPU.
● Each thread has its own thread control block, holding the thread ID, program counter, register set, and stack pointer.
● A single operating system process can execute multiple threads, and these threads can improve application performance. Each such thread has its own CPU state and stack, but they share the address space of the process and the environment.
● Threads can share common data, so they do not need to use inter-process communication.
● Priority can be assigned to the threads just like the process, and the highest-priority thread is scheduled first.
● Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs for the thread, and its register contents are saved in the TCB.
Components of Threads
● Program Counter: keeps track of the next instruction in the thread's execution.
● Register Set: holds the current working values of the thread.
● Stack Space: stores the local variables and function calls of the thread.
Types of Threads
1. User Level Thread
A User Level Thread is a type of thread that is not created using system calls. It is managed in user space by a thread library, and the operating system kernel does not recognize the user-level thread. Because the kernel is unaware of these threads, kernel-level optimizations, like load balancing across CPUs, are not utilized, and if one user-level thread blocks, the entire process may block, since the kernel only handles process-level scheduling.
2. Kernel Level Thread
A Kernel Level Thread is a type of thread that the Operating System recognizes directly. The kernel has its own thread table where it keeps track of every kernel-level thread, and the Operating System kernel helps in managing the threads. Kernel Level Threads have a somewhat longer context switching time. The kernel helps in the creation and scheduling of Kernel Level Threads; if one kernel-level thread blocks, the kernel can schedule another thread of the same process, which is not possible with a user-level thread.
For more, refer to the Difference Between User-Level Thread and Kernel-Level
Thread.
The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces. Threads are not independent of one another like processes are, and as a result, threads share with other threads their code section, data section, and OS resources (like open files and signals). But, like a process, a thread has its own program counter (PC), register set, and stack space.
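The sharing described above can be demonstrated with POSIX threads. In the sketch below (an added illustration; compile with -pthread), all threads update the same global counter in the data section while each keeps its own local variable on its own stack.

```c
#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                       /* data section: visible to all threads      */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    int id = *(int *)arg;                     /* 'id' lives on this thread's own stack      */
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);            /* protect the shared data                    */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %d done\n", id);
    return NULL;
}

int main(void) {
    pthread_t tids[4];
    int ids[4];
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&tids[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);
    printf("final counter = %d\n", shared_counter);  /* 4000: every thread saw the same variable */
    return 0;
}
```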
What is Multi-Threading?
Multithreading divides the work of a single application among multiple threads that can run concurrently. A word processor, for example, may use several threads: one thread to format the text, another thread to process inputs, etc. In Java, threads are implemented using the Java Virtual Machine (JVM), which provides its own thread management; these threads are then mapped onto the threads of the underlying operating system.
Benefits of multithreading include:
● Responsiveness: If the work is divided into multiple threads and one thread completes its part, its result can be immediately returned while the others continue.
● Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: stacks and registers can't be shared among the threads. Each thread has its own stack and registers.
SCHEDULING ALGORITHMS:
The choice of CPU scheduling algorithm strongly affects system performance and user experience. There are two primary types of CPU scheduling: preemptive and non-preemptive.
Preemptive Scheduling
In preemptive scheduling, the operating system can take the CPU away from a running process and allocate it to other, typically more urgent, tasks.
● The average response time is improved. Utilizing this method in a multiprogramming environment keeps the CPU busy and lets higher-priority processes run as soon as they arrive.
Non-Preemptive Scheduling
In non-preemptive scheduling, a running process cannot be interrupted by
the operating system; it voluntarily relinquishes control of the CPU. In this
scheduling, once the resources (CPU cycles) are allocated to a process, the
process holds the CPU till it gets terminated or reaches a waiting state.
● A process with a very long burst time may hold the CPU forever, while other ready processes starve.
● Since we cannot implement round robin, the average response time of non-preemptive scheduling is poor.
In preemptive scheduling, switching a process from the ready state to the running state, and vice-versa, requires storing and restoring the process's data; that is why it is cost-associative, which is not the case with non-preemptive scheduling.
Comparison of PREEMPTIVE and NON-PREEMPTIVE scheduling:
● Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources (CPU cycles) are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
● Overhead: Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not have this overhead.
● Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
● Response Time: Preemptive scheduling response time is less; non-preemptive scheduling response time is high.
● Concurrency Overhead: Preemptive scheduling has more overhead, as a process might be preempted while it was accessing a shared resource; non-preemptive scheduling has less, as a process is never preempted.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Turn Around Time: Difference between completion time and arrival time.
Waiting Time: Difference between turnaround time and burst time, i.e.
Waiting Time = Turn Around Time – Burst Time
Commonly used scheduling algorithms include:
● First Come First Serve (FCFS)
● Shortest Job First (SJF)
● Round Robin
● Priority Scheduling
Preemptive Scheduling:
Preemptive scheduling is used when a process switches from running state to ready
state or from waiting state to ready state.
The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away, and the process is placed back in the ready queue if it still has CPU burst time remaining.
That process stays in the ready queue till it gets its next chance to execute.
Non-Preemptive Scheduling:
Non-preemptive Scheduling is used when a process terminates, or a process switches
from running to waiting state.
In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets terminated or it reaches a waiting state.
Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to another process.
Basis for comparison of Preemptive and Non-Preemptive Scheduling:
● Basic: In preemptive scheduling, the resources are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
● Interrupt: In preemptive scheduling, a process can be interrupted in between. In non-preemptive scheduling, a process cannot be interrupted till it terminates or switches to the waiting state.
● Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, another process with less CPU burst time may starve.
● Overhead: Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not have overhead.
● Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
● Cost: Preemptive scheduling is cost-associated; non-preemptive scheduling is not cost-associated.
Scheduling Criteria
There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
o Throughput - Number of processes completed per unit time. May range from 10 / second to 1 / hour depending on the specific processes.
o Turnaround time - Time required for a particular process to complete, from submission to completion.
o Waiting time - Time a process spends in the ready queue waiting for its turn on the CPU.
o Response time - Time from submission of a request until the first response is produced.
In brief:
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
First Come First Serve (FCFS) Scheduling-
In FCFS scheduling, the process that arrives first in the ready queue is executed first.
Advantages-
It is simple and easy to understand.
It can be easily implemented using queue data structure.
It does not lead to starvation.
Disadvantages-
It does not consider the priority or burst time of the processes.
It suffers from the convoy effect, i.e., processes with higher burst time that arrive before processes with smaller burst time force the smaller processes to wait for a long time.
Example 1:
Example 2:
Consider the processes P1, P2, P3 given in the below table, which arrive for execution in the same order, all with Arrival Time 0 and with the given Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 24
P2 0 3
P3 0 3
Gantt chart
P1 P2 P3
0 24 27 30
Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= 81 / 3 = 27 ms
Throughput = 3 jobs/30 sec = 0.1 jobs/sec
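For illustration, the FCFS arithmetic of Example 2 can be reproduced with a few lines of C (an added sketch; the process data is copied from the table above):

```c
#include <stdio.h>

/* First Come First Serve: processes run in arrival order (all arrive at t = 0 here). */
int main(void) {
    const char *name[] = { "P1", "P2", "P3" };
    int burst[]        = { 24, 3, 3 };
    int n = 3, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = time;            /* arrival time is 0 for every process        */
        int completion = time + burst[i];
        int turnaround = completion;      /* TAT = completion - arrival = completion    */
        printf("%s: waiting=%d turnaround=%d\n", name[i], waiting, turnaround);
        total_wt  += waiting;
        total_tat += turnaround;
        time = completion;
    }
    printf("average waiting time    = %.2f ms\n", total_wt / n);   /* 17.00 */
    printf("average turnaround time = %.2f ms\n", total_tat / n);  /* 27.00 */
    return 0;
}
```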
Example 3:
Consider the processes P1, P2, P3, P4 given in the below table, which arrive for execution in the same order, with the given Arrival Time and Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt chart
P1 P2 P3 P4
0 8 12 21 26
Average Waiting Time = (Total Wait Time) / (Total number of processes)= 35/4 = 8.75 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes) = 61/4 = 15.25 ms
Shortest Job First (SJF) / Shortest Remaining Time First (SRTF) Scheduling-
In SJF, the process with the smallest burst time is executed first; SRTF is the preemptive version, where the process with the smallest remaining time runs next.
Advantages-
SRTF is optimal and guarantees the minimum average waiting time.
It provides a standard for other algorithms since no other algorithm performs
better than it.
Disadvantages-
It cannot be implemented practically, since the burst time of the processes cannot be known in advance.
It leads to starvation for processes with larger burst time.
Priorities cannot be set for the processes.
Processes with larger burst time have poor response time.
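A minimal non-preemptive SJF sketch is shown below (an added illustration; it assumes burst times are known in advance, which, as noted above, is unrealistic in practice). The data matches Example-01 that follows, so it also serves as a cross-check.

```c
#include <stdio.h>

/* Non-preemptive SJF: at every decision point, run the arrived, unfinished
   process with the smallest burst time to completion.                        */
int main(void) {
    int arrival[] = { 3, 1, 4, 0, 2 };   /* P1..P5, same data as Example-01 below */
    int burst[]   = { 1, 4, 2, 6, 3 };
    int n = 5, done[5] = { 0 }, finished = 0, time = 0;
    double total_wt = 0, total_tat = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { time++; continue; }        /* CPU idle: nothing has arrived yet   */
        time += burst[pick];                       /* run the chosen process to completion */
        done[pick] = 1;
        finished++;
        int tat = time - arrival[pick];            /* turnaround = exit - arrival          */
        int wt  = tat - burst[pick];               /* waiting = turnaround - burst         */
        total_tat += tat;
        total_wt  += wt;
        printf("P%d finishes at %d (TAT=%d, WT=%d)\n", pick + 1, time, tat, wt);
    }
    /* For this data the run order is P4, P1, P3, P5, P2,
       giving avg TAT = 8.00 and avg WT = 4.80.             */
    printf("avg TAT = %.2f, avg WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```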
Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turnaround time.
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Example-02:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and
average turnaround time.
Solution-
Gantt Chart-
Now,
Example-03:
Consider the set of 6 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is shortest remaining time first, calculate the average
waiting time and average turnaround time.
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Now,
Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit
Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit
Example -04:
Consider the set of 3 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SRTF, calculate the average waiting time and average
turn around time.
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Now,
Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit
Example-05:
Consider the set of 4 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SRTF, calculate the waiting time of process P2.
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Thus,
Turn Around Time of process P2 = 55 – 15 = 40 unit
Waiting time of process P2 = 40 – 25 = 15 unit
Round Robin Scheduling-
In Round Robin scheduling, each process gets the CPU for a fixed time quantum, after which it is preempted and placed at the end of the ready queue.
Advantages-
It gives every process a fair share of the CPU and performs well in terms of average response time.
It is well suited to time-sharing and interactive systems.
Disadvantages-
It leads to starvation for processes with larger burst time as they have to repeat the cycle many times.
Its performance heavily depends on time quantum.
Priorities cannot be set for the processes.
A smaller time quantum causes more context switches; thus, a higher value of time quantum is better in terms of the number of context switches.
Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3
If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate
the average waiting time and average turnaround time.
Solution-
Ready Queue- P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 13 13 – 0 = 13 13 – 5 = 8
P2 12 12 – 1 = 11 11 – 3 = 8
P3 5 5–2=3 3–1=2
P4 9 9–3=6 6–2=4
P5 14 14 – 4 = 10 10 – 3 = 7
Now,
Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
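The Round Robin trace above can also be reproduced programmatically. The sketch below (added for illustration) uses the same data and time quantum; the convention that a newly arrived process is queued before a preempted one matches the ready queue shown above.

```c
#include <stdio.h>

/* Round Robin with time quantum 2, using the data of Example-01 above. */
int main(void) {
    int arrival[] = { 0, 1, 2, 3, 4 };
    int burst[]   = { 5, 3, 1, 2, 3 };
    int n = 5, tq = 2;
    int remaining[5], queue[64], head = 0, tail = 0, in_queue[5] = { 0 };
    int time = 0, finished = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    queue[tail++] = 0; in_queue[0] = 1;           /* P1 arrives at t = 0 */

    while (finished < n) {
        if (head == tail) {                       /* CPU idle: wait for the next arrival  */
            time++;
            for (int i = 0; i < n; i++)
                if (!in_queue[i] && arrival[i] <= time) { queue[tail++] = i; in_queue[i] = 1; }
            continue;
        }
        int p = queue[head++];
        int slice = remaining[p] < tq ? remaining[p] : tq;
        time += slice;
        remaining[p] -= slice;
        for (int i = 0; i < n; i++)               /* admit everything that has arrived    */
            if (!in_queue[i] && arrival[i] <= time) { queue[tail++] = i; in_queue[i] = 1; }
        if (remaining[p] > 0) {
            queue[tail++] = p;                    /* re-queue the preempted process       */
        } else {
            finished++;
            int tat = time - arrival[p];
            total_tat += tat;
            total_wt  += tat - burst[p];
            printf("P%d exits at %d (TAT=%d, WT=%d)\n", p + 1, time, tat, tat - burst[p]);
        }
    }
    printf("avg TAT = %.2f, avg WT = %.2f\n", total_tat / n, total_wt / n);  /* 8.60 and 5.80 */
    return 0;
}
```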
Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average
waiting time and average turnaround time.
Solution-
Ready Queue- P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 32 32 – 5 = 27 27 – 5 = 22
P2 27 27 – 4 = 23 23 – 6 = 17
P3 33 33 – 3 = 30 30 – 7 = 23
P4 30 30 – 1 = 29 29 – 9 = 20
P5 6 6–2=4 4–2=2
P6 21 21 – 6 = 15 15 – 3 = 12
Now,
Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit
Priority Scheduling-
In priority scheduling, the process with the highest priority is executed first.
The waiting time for the process having the highest priority will always be zero in preemptive mode.
The waiting time for the process having the highest priority may not be zero in non-preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions-
The arrival time of all the processes is the same
All the processes become available at the same time
Advantages-
It considers the priority of the processes and allows the important processes to
run first.
Priority scheduling in pre-emptive mode is best suited for real-time operating systems.
Disadvantages-
Processes with lesser priority may starve for the CPU.
There is no guarantee on the response time and waiting time of a process, since they depend on the priorities of the other processes.
Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 4 4–0=4 4–4=0
P2 15 15 – 1 = 14 14 – 3 = 11
P3 12 12 – 2 = 10 10 – 1 = 9
P4 9 9–3=6 6–5=1
P5 11 11 – 4 = 7 7–2=5
Now,
Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit
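For illustration, the non-preemptive priority schedule just computed can be reproduced with a short C sketch (an addition; it uses the same data, with ties broken by arrival time):

```c
#include <stdio.h>

/* Non-preemptive priority scheduling (larger number = higher priority). */
int main(void) {
    int arrival[]  = { 0, 1, 2, 3, 4 };
    int burst[]    = { 4, 3, 1, 5, 2 };
    int priority[] = { 2, 3, 4, 5, 5 };
    int n = 5, done[5] = { 0 }, finished = 0, time = 0;
    double total_wt = 0, total_tat = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (done[i] || arrival[i] > time) continue;
            if (pick < 0 || priority[i] > priority[pick] ||
                (priority[i] == priority[pick] && arrival[i] < arrival[pick]))
                pick = i;                          /* highest priority, FCFS on ties      */
        }
        if (pick < 0) { time++; continue; }        /* CPU idle until the next arrival     */
        time += burst[pick];                       /* run the chosen process to completion */
        done[pick] = 1;
        finished++;
        int tat = time - arrival[pick];
        total_tat += tat;
        total_wt  += tat - burst[pick];
        printf("P%d exits at %d (TAT=%d, WT=%d)\n", pick + 1, time, tat, tat - burst[pick]);
    }
    printf("avg TAT = %.2f, avg WT = %.2f\n", total_tat / n, total_wt / n);  /* 8.20 and 5.20 */
    return 0;
}
```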
Problem-02: Consider the set of 5 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority preemptive, calculate the average waiting
time and average turn around time. (Higher number represents higher priority).
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3–2=1 1–1=0
P4 8 8–3=5 5–5=0
P5 10 10 – 4 = 6 6–2=4
Now,
Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
4.3 Deadlock
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
For example, suppose Process 1 is holding Resource 1 and waiting for Resource 2, which is held by Process 2, while Process 2 is waiting for Resource 1; neither process can proceed.
(Deadlock differs from starvation: in deadlock, the requested resource is blocked by the other process, while in starvation the requested resource is continuously being used by higher-priority processes.)
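The two-process, two-resource deadlock described above can be reproduced deliberately with POSIX mutexes (an added illustration; compile with -pthread). On a typical run both threads block forever on their second lock.

```c
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

void *process1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&resource1);   /* holds Resource 1 ...                              */
    sleep(1);                         /* give the other thread time to grab Resource 2     */
    printf("process 1 waiting for resource 2\n");
    pthread_mutex_lock(&resource2);   /* ... and waits for Resource 2: blocks forever      */
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

void *process2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&resource2);   /* holds Resource 2 ...                              */
    sleep(1);
    printf("process 2 waiting for resource 1\n");
    pthread_mutex_lock(&resource1);   /* ... and waits for Resource 1: circular wait       */
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);           /* never returns: the two threads are deadlocked     */
    pthread_join(t2, NULL);
    return 0;
}
```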
Deadlock Handling
The various strategies for handling deadlock are-
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance
1. Deadlock Prevention
Deadlocks can be prevented by preventing at least one of the four required
conditions:
Mutual Exclusion
Shared resources such as read-only files do not lead to deadlocks.
Unfortunately, some resources, such as printers and tape drives, require exclusive
access by a single process.
Hold and Wait
To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others.
No Preemption
Preemption of process resource allocations can prevent this condition of deadlock, when preemption is possible.
Circular Wait
One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing (or decreasing) order.
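As an added sketch of this resource-ordering rule, the following variant of the earlier two-mutex example numbers the resources and has every thread lock them in increasing order, so the circular wait can never form:

```c
#include <stdio.h>
#include <pthread.h>

pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;   /* resource #1 */
pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;   /* resource #2 */

/* Every thread requests resources in strictly increasing order
   (res1 before res2), so no cycle of waiting processes can appear. */
void *safe_worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&res1);
    pthread_mutex_lock(&res2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&res2);
    pthread_mutex_unlock(&res1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, safe_worker, NULL);
    pthread_create(&b, NULL, safe_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("both workers finished without deadlock\n");
    return 0;
}
```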
2. Deadlock Avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs.
The process continues until the system is in a safe state.
Once the system moves to an unsafe state, the OS has to backtrack one step.
In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system.
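Deadlock avoidance is commonly illustrated with the safety check of the Banker's algorithm. The sketch below is an added illustration with made-up allocation numbers; it reports whether the state is safe and, if so, prints one safe sequence.

```c
#include <stdio.h>

#define P 3   /* processes      */
#define R 2   /* resource types */

/* Safety check of the Banker's algorithm on a small, made-up state. */
int main(void) {
    int alloc[P][R] = { {1, 0}, {0, 1}, {1, 1} };   /* currently allocated resources */
    int max[P][R]   = { {3, 2}, {1, 2}, {2, 2} };   /* maximum demand per process    */
    int avail[R]    = { 2, 1 };                      /* currently available           */
    int finish[P]   = { 0 }, safe_seq[P], count = 0;

    while (count < P) {
        int progressed = 0;
        for (int p = 0; p < P; p++) {
            if (finish[p]) continue;
            int ok = 1;
            for (int r = 0; r < R; r++)              /* can p's remaining need be met? */
                if (max[p][r] - alloc[p][r] > avail[r]) { ok = 0; break; }
            if (ok) {                                /* p can run to completion        */
                for (int r = 0; r < R; r++) avail[r] += alloc[p][r];
                finish[p] = 1;
                safe_seq[count++] = p;
                progressed = 1;
            }
        }
        if (!progressed) { printf("state is UNSAFE\n"); return 0; }
    }
    printf("state is SAFE, sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i] + 1);
    printf("\n");
    return 0;
}
```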
3. Deadlock Detection and Recovery
In this approach, the operating system allows deadlocks to occur, detects them (for example, by finding a cycle in the resource-allocation graph), and then recovers by terminating one of the deadlocked processes or preempting its resources.
4. Deadlock Ignorance
This strategy involves ignoring the concept of deadlock and assuming as if it does not
exist.
This strategy helps to avoid the extra overhead of handling deadlock.
Windows and Linux use this strategy and it is the most widely used method.