Chapter 4 OSY Notes
• In a single-processor system, only one process can run at a time; any others must wait
until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The idea is relatively simple. A process is executed until it
must wait, typically for the completion of some I/O request.
• In a simple computer system, the CPU then just sits idle. All this waiting time is wasted;
no useful work is accomplished. With multiprogramming, we try to use this time
productively. Several processes are kept in memory at one time.
• When one process has to wait, the operating system takes the CPU away from that
process and gives the CPU to another process. This pattern continues. Every time one
process has to wait, another process can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function. Almost all
computer resources are scheduled before use. The CPU is, of course, one of the primary
computer resources. Thus, its scheduling is central to operating-system design.
CPU Burst: It is the amount of CPU time a process needs, i.e., the time the process spends executing on the CPU before it finishes or issues an I/O request. We cannot estimate this time before actually running the process, so most scheduling problems revolve around the burst time.
Processes alternate between these two states. Process execution begins with a CPU burst. That
is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst,
and so on. Eventually, the final CPU burst ends with a system request to terminate execution.
_--------------------------------------------------------------------------------------------------------------
Preemptive Scheduling
Preemptive Scheduling means that once a process has started its execution, the currently running
process can be paused to handle some other process of higher priority; that is, the CPU can be
preempted from one process and given to another when required.
A computer system implementing this supports multitasking, as it gives the user the impression
of working on multiple processes at once.
It is practical because if a process of higher priority arrives, the current process can be paused
to handle it.
_--------------------------------------------------------------------------------------------------------------
Non-Preemptive Scheduling
Non-Preemptive Scheduling means that once a process starts its execution, the CPU cannot be
taken away from it; in other words, we cannot preempt the CPU and give it to another process
until the running process finishes.
A computer system implementing this cannot execute processes in a multitasking fashion; it
executes all the processes in a sequential manner.
It is not practical, as all processes are not of the same priority and are not always known to the
system in advance.
Comparison of Preemptive and Non-Preemptive Scheduling:
• CPU utilization: higher under preemptive scheduling; lower under non-preemptive scheduling.
• Waiting time and response time: less under preemptive scheduling; more under non-preemptive scheduling.
• Preemptive scheduling is priority-driven: the highest-priority ready process should always be the one currently running. In non-preemptive scheduling, once a process enters the running state, it is not removed from the scheduler until it finishes its service time.
• Starvation: under preemptive scheduling, if high-priority processes frequently arrive in the ready queue, a low-priority process may starve. Under non-preemptive scheduling, if a process with a long burst time is holding the CPU, processes with shorter burst times may starve.
• Examples: preemptive – SRTF, Priority (preemptive), Round Robin, etc.; non-preemptive – FCFS, SJF, Priority (non-preemptive), etc.
_---------------------------------------------------------------------------------------------------------------
Scheduling criteria:
Different CPU-scheduling algorithms have different properties, and the choice of a particular
algorithm may favor one class of processes over another. In choosing which algorithm to use in
a particular situation, we must consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU-scheduling algorithms.
The criteria include the following:
Throughput:
• If the CPU is busy executing processes, then work is being done. One measure of work
is the number of processes that are completed per time unit, called throughput.
• For long processes, this rate may be one process per hour; for short transactions, it may
be ten processes per second.
Turnaround time:
• From the point of view of a particular process, the important criterion is how long it
takes to execute that process.
• The interval from the time of submission of a process to the time of completion is the
turnaround time.
• Turnaround time is the sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and doing I/O.
Waiting time:
• The CPU-scheduling algorithm does not affect the amount of time during which a
process executes or does I/O; it affects only the amount of time that a process spends
waiting in the ready queue.
• Waiting time is the sum of the periods spent waiting in the ready queue.
Response time:
• In an interactive system, turnaround time may not be the best criterion.
• Often, a process can produce some output fairly early and can continue computing new
results while previous results are being output to the user.
• Thus, another measure is the time from the submission of a request until the first
response is produced.
• This measure, called response time, is the time it takes to start responding, not the time
it takes to output the response.
• The turnaround time is generally limited by the speed of the output device.
FCFS Scheduling-
In FCFS Scheduling,
• The process which arrives first in the ready queue is firstly assigned the CPU.
• In case of a tie, process with smaller process id is executed first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue.
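The queue mechanics above can be sketched in a few lines of Python. This is an illustrative sketch, not a production scheduler; the function name, process names, and burst times are hypothetical (chosen to match the worked example below):

```python
# FCFS scheduling sketch: processes run to completion in queue order.
# Each entry is (name, burst_time_ms); all processes arrive at time 0.
def fcfs(processes):
    time = 0
    metrics = {}
    for name, burst in processes:
        waiting = time           # time spent waiting in the ready queue
        time += burst            # the process runs to completion
        turnaround = time        # completion time - arrival time (arrival = 0)
        metrics[name] = (waiting, turnaround)
    return metrics

procs = [("P1", 5), ("P2", 24), ("P3", 16), ("P4", 10), ("P5", 3)]
m = fcfs(procs)
print(m["P2"])                                    # (5, 29)
print(sum(w for w, _ in m.values()) / len(m))     # 26.8
```

Because the loop simply walks the queue in order, a long process at the head delays every process behind it, which is exactly the convoy effect described below.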
Advantages-
• It is simple and easy to implement.
• It is fair in the sense that processes are served strictly in their order of arrival.
Disadvantages-
• The average waiting time is often high.
• It is non-preemptive, so short processes may be stuck behind long ones (the convoy effect, below).
Convoy Effect
In convoy effect,
• Consider a case where processes with higher burst times arrive before processes with
smaller burst times.
• Then, the smaller processes have to wait a long time for the longer processes to release
the CPU.
Example: Five processes P1, P2, P3, P4, and P5 arrive at time 0 with burst times 5, 24, 16, 10, and 3 ms respectively, and run in FCFS order:
P1 P2 P3 P4 P5
0 5 29 45 55 58
Turnaround time of
P1 = 5 - 0 = 5ms
P2 = 29 - 0 = 29ms
P3 = 45 - 0 = 45ms
P4 = 55 - 0 = 55ms
P5 = 58 - 0 = 58ms
Waiting time of
P1 = 5 – 5 = 0 ms
P2 = 29 - 24 = 5 ms
P3 = 45 – 16 = 29 ms
P4 = 55 - 10 = 45 ms
P5 = 58 - 3 = 55 ms
Average Waiting Time = Waiting Time of all Processes / Total Number of Processes
Therefore, Average Waiting Time = (0 + 5 + 29 + 45 + 55) / 5 = 26.8 ms
Throughput
Here, we have a total of five processes. Processes P1, P2, P3, P4, and P5 take 5 ms, 24 ms,
16 ms, 10 ms, and 3 ms to execute respectively.
Therefore, the average time to complete one process = (5 + 24 + 16 + 10 + 3) / 5 = 11.6 ms.
_---------------------------------------------------------------------------------------------------------------
SJF Scheduling-
In SJF Scheduling,
• Out of all the available processes, CPU is assigned to the process having smallest burst
time.
• In case of a tie, it is broken by FCFS Scheduling.
Advantages-
• It gives the minimum average waiting time among non-preemptive scheduling algorithms (it is provably optimal in that respect).
• It improves throughput, since short processes are cleared quickly.
Disadvantages-
• It cannot be implemented practically since burst time of the processes cannot be known
in advance.
• It leads to starvation for processes with larger burst time.
• Priorities cannot be set for the processes.
• Processes with larger burst time have poor response time.
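The selection rule above can be sketched as a small simulation. This is a hedged illustration, not a real scheduler: since burst times cannot be known in advance (as the disadvantages note), the sketch assumes they are given; the function name and process data are hypothetical.

```python
# Non-preemptive SJF sketch: at each scheduling point, pick the ready
# process with the smallest burst time; ties fall back to arrival (FCFS).
def sjf(processes):                 # processes: list of (name, arrival, burst)
    remaining = sorted(processes, key=lambda p: p[1])   # sort by arrival
    time = 0
    turnaround = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:               # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        p = min(ready, key=lambda q: (q[2], q[1]))      # smallest burst wins
        remaining.remove(p)
        time += p[2]                # runs to completion (non-preemptive)
        turnaround[p[0]] = time - p[1]
    return turnaround

procs = [("P1", 0, 5), ("P2", 0, 24), ("P3", 0, 16), ("P4", 0, 10), ("P5", 0, 3)]
print(sjf(procs))   # {'P5': 3, 'P1': 8, 'P4': 18, 'P3': 34, 'P2': 58}
```

With all processes arriving at time 0, the run order is P5, P1, P4, P3, P2, which is the order used in the worked example below.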
Example:
SJF (Shortest Job First) Scheduling, with the same five processes (burst times 5, 24, 16, 10, and 3 ms, all arriving at time 0). The execution order is P5, P1, P4, P3, P2.
Turnaround Time of
P1 = 8 - 0 = 8ms
P2 = 58 - 0 = 58ms
P3 = 34 - 0 = 34ms
P4 = 18 - 0 = 18ms
P5 = 3 - 0 = 3ms
Average Waiting Time = Waiting Time of all Processes / Total Number of Process
Therefore, Average Waiting Time = (3 + 34 + 18 + 8 + 0) / 5 = 12.6ms
10 | P a g e Sushama Pawar
Throughput
Here, we have a total of five processes. Processes P1, P2, P3, P4, and P5 take 5 ms, 24 ms,
16 ms, 10 ms, and 3 ms to execute respectively. Therefore, the throughput is the same as in
the problem above, i.e., 11.6 ms per process.
_---------------------------------------------------------------------------------------------------------------
Preemptive SJF (SRTF) Scheduling Example: processes P1 to P5 arrive at times 0, 2, 4, 6, and 8 ms with burst times 3, 6, 4, 5, and 2 ms respectively.
Turnaround Time of
P1 = 3 - 0 = 3ms
P2 = 15 - 2 = 13ms
P3 = 8 - 4 = 4ms
P4 = 20 - 6 = 14ms
P5 = 10 - 8 = 2ms
Average Waiting Time
Average Waiting Time = Waiting Time of all Processes / Total Number of Process
Therefore, Average Waiting Time = (0 + 7 + 0 + 9 + 0) / 5 = 3.2ms
Throughput
Throughput = (3 + 6 + 4 + 5 + 2) / 5 = 4 ms
_--------------------------------------------------------------------------------------------------------------
Priority Scheduling-
In Priority Scheduling,
• Out of all the available processes, CPU is assigned to the process having the highest
priority.
• In case of a tie, it is broken by FCFS Scheduling.
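A non-preemptive version of this rule can be sketched as follows. The priority numbers and burst times here are assumptions chosen for illustration (a smaller number is taken to mean higher priority; conventions vary), picked so the result matches the worked example further below:

```python
# Non-preemptive priority scheduling sketch, all processes arriving at 0.
# Assumed convention: a SMALLER priority number means HIGHER priority.
def priority_schedule(processes):   # processes: list of (name, priority, burst)
    order = sorted(processes, key=lambda p: p[1])   # highest priority first
    time = 0
    turnaround = {}
    for name, _, burst in order:
        time += burst               # each process runs to completion
        turnaround[name] = time     # completion time - arrival time (0)
    return turnaround

procs = [("P1", 2, 6), ("P2", 4, 12), ("P3", 5, 1), ("P4", 1, 3), ("P5", 3, 4)]
print(priority_schedule(procs))
# {'P4': 3, 'P1': 9, 'P5': 13, 'P2': 25, 'P3': 26}
```

Note that sorted() is stable, so processes with equal priority keep their input (arrival) order, which implements the FCFS tie-break mentioned above.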
Advantages-
• It considers the priority of the processes and allows the important processes to run first.
• Priority scheduling in preemptive mode is best suited for real time operating system.
Disadvantages-
• Processes with lower priority may starve if higher-priority processes keep arriving; the classic remedy is aging, i.e., gradually increasing the priority of processes that wait for a long time.
Important Notes-
Note-01:
• The waiting time for the process having the highest priority will always be zero in
preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-
preemptive mode.
Note-02:
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the
following conditions-
• The arrival time of all the processes is the same
• All the processes become available at the same time
Average Turnaround Time
Turnaround Time of
P1 = 9 - 0 = 9ms
P2 = 25 - 0 = 25ms
P3 = 26 - 0 = 26ms
P4 = 3 - 0 = 3ms
P5 = 13 - 0 = 13ms
Waiting Time = Turnaround Time – Burst Time (the burst times are 6, 12, 1, 3, and 4 ms), so the waiting times are 3, 13, 25, 0, and 9 ms.
Average Waiting Time = Waiting Time of all Processes / Total Number of Processes
Therefore, Average Waiting Time = (3 + 13 + 25 + 0 + 9) / 5 = 10 ms
Throughput
Throughput = (6 + 12 + 1 + 3 + 4) / 5 = 5.2ms
_--------------------------------------------------------------------------------------------------------------
Gantt Chart (preemptive Priority scheduling: P1, P2, P3, and P4 arrive at times 0, 1, 2, and 4 ms with burst times 5, 4, 2, and 1 ms)
P1 P2 P3 P3 P4 P2 P1
0 1 2 3 4 5 8 12
Turnaround Time of
P1 = 12 - 0 = 12ms
P2 = 8 - 1 = 7ms
P3 = 4 - 2 = 2ms
P4 = 5 - 4 = 1ms
Waiting Time = Turnaround Time – Burst Time
P1 = 12 – 5 = 7 ms, P2 = 7 – 4 = 3 ms, P3 = 2 – 2 = 0 ms, P4 = 1 – 1 = 0 ms
Average Waiting Time = Waiting Time of all Processes / Total Number of Processes = (7 + 3 + 0 + 0) / 4 = 2.5 ms
Throughput
Throughput = (5 + 4 + 2 + 1) / 4 = 3 ms
_--------------------------------------------------------------------------------------------------------------
Round Robin Scheduling-
In Round Robin Scheduling,
• CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
• This fixed amount of time is called the time quantum or time slice.
• After the time quantum expires, the running process is preempted and sent to the ready
queue.
• Then, the processor is assigned to the next arrived process.
• It is always preemptive in nature.
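The quantum-and-requeue cycle above can be sketched with a FIFO queue. This is an illustrative sketch, not a real scheduler; the function name, process names, and burst times are hypothetical (they match the worked example further below):

```python
from collections import deque

# Round Robin sketch: a FIFO ready queue and a fixed time quantum.
# A process that exhausts its quantum is preempted and requeued at the tail.
def round_robin(processes, quantum):    # processes: list of (name, burst)
    queue = deque(processes)            # all processes arrive at time 0
    time = 0
    completion = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for one quantum at most
        time += run
        if remaining - run > 0:
            queue.append((name, remaining - run))   # back of the queue
        else:
            completion[name] = time     # turnaround time (arrival = 0)
    return completion

print(round_robin([("P1", 30), ("P2", 6), ("P3", 8)], quantum=5))
# {'P2': 21, 'P3': 24, 'P1': 44}
```

The deque makes the preempt-and-requeue step explicit: popleft() dispatches the head of the ready queue, and append() sends a preempted process to the tail.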
Advantages-
• It gives the best performance in terms of average response time.
• It is best suited for time sharing system, client server architecture and interactive
system.
Disadvantages-
• It leads to starvation for processes with larger burst time as they have to repeat the cycle
many times.
• Its performance heavily depends on time quantum.
• Priorities cannot be set for the processes.
Important Notes-
Note-01:
• With a decreasing value of time quantum, the number of context switches increases, and the context-switching overhead grows.
Note-02:
• With an increasing value of time quantum, the number of context switches decreases.
Thus, a higher value of time quantum is better in terms of the number of context switches.
Note-03:
• With increasing value of time quantum, Round Robin Scheduling tends to become
FCFS Scheduling.
• When time quantum tends to infinity, Round Robin Scheduling becomes FCFS
Scheduling.
Note-04:
• The performance of Round Robin scheduling heavily depends on the value of time
quantum.
• The value of time quantum should be such that it is neither too big nor too small.
Here is a Round Robin scheduling example. The time quantum is 5 ms. Processes P1, P2, and P3 arrive at time 0 with burst times 30, 6, and 8 ms respectively.
Gantt chart:
P1 P2 P3 P1 P2 P3 P1 P1 P1 P1
0 5 10 15 20 21 24 29 34 39 44
Turnaround Time of
P1 = 44 - 0 = 44ms
P2 = 21 - 0 = 21ms
P3 = 24 - 0 = 24ms
Waiting Time for
P1 = 44 - 30 = 14ms
P2 = 21 - 6 = 15ms
P3 = 24 – 8 = 16ms
Average Waiting Time = Waiting Time of all Processes / Total Number of Processes = (14 + 15 + 16) / 3 = 15 ms
Throughput
Throughput = (30 + 6 + 8) / 3 ≈ 14.67 ms
_---------------------------------------------------------------------------------------------------------------
Another class of scheduling algorithms has been created for situations in which processes are
easily classified into different groups. For example, a common division is made between
foreground (interactive) processes and background (batch) processes. These two types of
processes have different response-time requirements and so may have different scheduling
needs. In addition, foreground processes may have priority (externally defined) over
background processes.
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues (refer Figure). The processes are permanently assigned to one queue, generally based
on some property of the process, such as memory size, process priority, or process type. Each
queue has its own scheduling algorithm. For example, separate queues might be used for
foreground and background processes. The foreground queue might be scheduled by an RR
algorithm, while the background queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling. For example, the foreground queue may have absolute
priority over the background queue.
Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed
below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue,
for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the ready
queue while a batch process was running, the batch process would be preempted.
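The fixed-priority selection among queues can be sketched as follows. The queue names follow the five-level example above; the ready-queue contents and the helper name are hypothetical:

```python
# Multilevel queue sketch: fixed-priority preemptive selection among queues.
# A queue may run a process only if every higher-priority queue is empty.
QUEUES = ["system", "interactive", "interactive editing", "batch", "student"]

def pick_next(ready):       # ready: dict mapping queue name -> list of processes
    for q in QUEUES:        # scan from highest to lowest priority
        if ready.get(q):
            return ready[q][0]   # each queue's own policy picks within the queue
    return None                  # nothing is ready to run

ready = {"system": [], "interactive": [], "interactive editing": ["edit1"],
         "batch": ["job1"], "student": ["hw1"]}
print(pick_next(ready))     # edit1 -- the batch and student queues must wait
```

Within each queue a different algorithm (RR for foreground, FCFS for background, and so on) would choose among that queue's processes; the sketch only shows the between-queue priority rule.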
4.3 Deadlock:
A process requests resources; if the resources are not available at that time, the process enters
a waiting state. Sometimes, a waiting process is never again able to change state, because the
resources it has requested are held by other waiting processes. This situation is called a
deadlock.
System Model:
A process must request a resource before using it and must release the resource after using it.
A process may request as many resources as it requires to carry out its designated task.
Obviously, the number of resources requested may not exceed the total number of resources
available in the system. In other words, a process cannot request three printers if the system
has only two.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
Request: The process requests the resource. If the request cannot be granted immediately (for
example, if the resource is being used by another process), then the requesting process must
wait until it can acquire the resource.
Use: The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).
Release: The process releases the resource.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
Hold and wait: A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.
No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a
deadlock-avoidance scheme. Deadlock prevention provides a set of methods for ensuring that
at least one of the necessary conditions cannot hold. These methods prevent deadlocks by
constraining how requests for resources can be made.
Deadlock avoidance requires that the operating system be given in advance additional
information concerning which resources a process will request and use during its lifetime. With
this additional knowledge, it can decide for each request whether or not the process should
wait. To decide whether the current request can be satisfied or must be delayed, the system
must consider the resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process.
Deadlock Prevention:
1. Eliminate Mutual Exclusion:
In general, mutual exclusion cannot be eliminated, because some resources (such as printers) are intrinsically non-sharable. Sharable resources (such as read-only files) do not require mutually exclusive access and so cannot be involved in a deadlock.
2. Eliminate Hold and Wait:
a) One protocol requires each process to request and be allocated all of its resources before it begins execution.
b) Another protocol allows a process to request resources only when it holds none: the process will make a new request for resources only after releasing the current set of
resources. This solution may lead to starvation.
3. Eliminate No Preemption:
a) One protocol is: "If a process that is holding some resources requests another
resource and that resource cannot be allocated to it, then it must release all resources
that are currently allocated to it."
b) Another protocol is: "When a process requests some resources, if they are available,
allocate them. If a requested resource is not available, then we check whether it is
allocated to some other process that is itself waiting for further resources. If so, the
OS preempts the resource from the waiting process and allocates it to the requesting
process. Otherwise, the requesting process must wait." This protocol can be applied to
resources whose state can easily be saved and restored (registers, memory space). It
cannot be applied to resources like printers.
4. Circular Wait:
To avoid circular wait, resources may be ordered and we can ensure that each process
can request resources only in an increasing order of these numbers. The algorithm may
itself increase complexity and may also lead to poor resource utilization.
For example, assign ordering numbers r1 = 1, r2 = 2, r3 = 3, and r4 = 4. With this ordering, if
process P wants to use r1 and r3, it should first request r1, then r3. Another protocol is:
"Whenever a process requests a resource rj, it must have released all resources rk with
priority(rk) >= priority(rj)."
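The increasing-order protocol can be sketched as a simple legality check on each request. The ordering numbers follow the r1..r4 example above; the helper name is hypothetical:

```python
# Circular-wait prevention sketch: resources carry a global ordering number,
# and a process may request a resource only if every resource it currently
# holds has a strictly smaller number.
ORDER = {"r1": 1, "r2": 2, "r3": 3, "r4": 4}   # example ordering from the text

def request_allowed(held, requested):
    """A request is legal only if all held resources precede the new one."""
    return all(ORDER[h] < ORDER[requested] for h in held)

print(request_allowed({"r1"}, "r3"))   # True: requesting r1 then r3 is in order
print(request_allowed({"r3"}, "r1"))   # False: this would violate the ordering
```

Because every process acquires resources in strictly increasing order, no cycle of waits can form, which removes the circular-wait condition at the cost of the utilization issues noted above.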
Deadlock avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state of the
system doesn't cause deadlock in the system. The state of the system will continuously be
checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources
it may request to complete its execution.
The simplest and most useful approach states that the process should declare the maximum
number of resources of each type it may ever need. The Deadlock avoidance algorithm
examines the resource allocations so that there can never be a circular wait condition.
Banker’s Algorithm
The banker’s algorithm is a resource-allocation and deadlock-avoidance algorithm that tests for
safety by simulating the allocation of the predetermined maximum possible amounts of all
resources, and then performs a safe-state check before deciding whether the allocation should
be allowed to continue.
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources types.
Available :
• It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
• Available[ j ] = k means there are ‘k’ instances of resource type Rj available.
Max :
• It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
• Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation :
• It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
• Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource
type Rj
Need :
• It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each
process.
• Need [ i, j ] = k means process Pi may need ‘k’ more instances of resource type Rj
to complete its execution.
• Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi, and Needi specifies the
additional resources that process Pi may still request to complete its task.
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
If no such i exists, go to step (4).
3) Work = Work + Allocationi
Finish[i] = true
Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.
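The safety algorithm can be sketched directly in code. This is an illustrative sketch; the matrices below are a common textbook-style instance, not data taken from this chapter:

```python
# Banker's safety-algorithm sketch: decide whether the current state is safe.
def is_safe(available, allocation, need):
    work = list(available)            # step 1: Work = Available
    n = len(allocation)
    finish = [False] * n              # step 1: Finish[i] = false for all i
    while True:
        # step 2: find i with Finish[i] = false and Need_i <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                break
        else:                         # no such i exists: go to step 4
            return all(finish)        # step 4: safe iff every process finished
        # step 3: pretend P_i runs to completion and releases its resources
        work = [work[j] + allocation[i][j] for j in range(len(work))]
        finish[i] = True              # then go back to step 2

# Illustrative instance: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))   # True: a safe sequence exists
```

For this instance a safe sequence such as P1, P3, P4, P0, P2 exists, so the check reports a safe state.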
Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:
1) If Requesti <= Needi
Go to step (2); otherwise, raise an error condition, since the process has exceeded its
maximum claim.
2) If Requesti <= Available
Go to step (3); otherwise, Pi must wait, since the resources are not available.
3) Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
If the resulting resource-allocation state is safe, the transaction is completed and Pi is
allocated its resources. If the new state is unsafe, then Pi must wait for Requesti and the
old resource-allocation state is restored.
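Steps (1) to (3) can be sketched as follows; a full implementation would additionally run the safety algorithm on the pretended state and roll back if it is unsafe. The function name and all values here are hypothetical illustration data:

```python
# Resource-request algorithm sketch for one process P_i (steps 1-3).
def try_request(i, request, available, allocation, need):
    m = len(available)
    # step 1: the request may not exceed the process's remaining claim
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # step 2: if the resources are not free, P_i must wait
    if any(request[j] > available[j] for j in range(m)):
        return False                   # caller should block P_i
    # step 3: pretend to allocate the requested resources
    for j in range(m):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    return True    # a real banker would now run the safety check, and
                   # undo these updates if the new state turned out unsafe

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0]]
need       = [[7, 4, 3], [1, 2, 2]]
print(try_request(1, [1, 0, 2], available, allocation, need))   # True
print(available)   # [2, 3, 0] after the pretend allocation
```

The mutation of available, allocation, and need in place mirrors the three state-update equations in step (3) above.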
Examples:
Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a Resource-
Allocation Graph. This graph consists of a set of vertices V and a set of edges E. The set of
vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set
consisting of all resource types in the system.
The resource-allocation graph shown in Figure 7.2 depicts the following situation.
The sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
Resource instances:
• One instance of resource type R1
• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4
Given the definition of a resource-allocation graph, it can be shown that, if the graph contains
no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then
a deadlock may exist.
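The cycle test can be sketched with a depth-first search over the edge set. The edges mirror the example sets above; the function name is hypothetical, and recall that for resource types with multiple instances a cycle is necessary but not sufficient for deadlock:

```python
# Cycle detection on a resource-allocation graph via depth-first search.
# Edges are (from, to) pairs: request edges P -> R, assignment edges R -> P.
def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on the DFS stack / done
    color = {}
    def dfs(u):
        color[u] = GREY
        for v in graph.get(u, []):
            c = color.get(v, WHITE)
            if c == GREY:               # back edge to the stack: cycle found
                return True
            if c == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in nodes)

E = [("P1","R1"), ("P2","R3"), ("R1","P2"), ("R2","P2"), ("R2","P1"), ("R3","P3")]
print(has_cycle(E))                    # False: no cycle, so no deadlock
print(has_cycle(E + [("P3","R2")]))    # True: adding P3 -> R2 creates the cycles
```

The second call corresponds to P3 requesting an instance of R2, which produces the two cycles listed below.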
If P3 requests an instance of R2, the request edge P3 → R2 is added to the graph, and two cycles now exist:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Processes P1, P2, and P3 are then deadlocked.