
Process Management

Process

• A process is defined as an entity which represents the basic unit of work to be implemented in the system.

Process State
• When a process executes, it changes state. The process state is defined as the current activity of the process.
• There are five process states, and each process is in exactly one of them at any instant.
Process Transition diagram

1. New: A process that has just been created.

2. Ready: Ready processes are waiting to have the processor allocated to them by the
operating system so that they can run.

3. Running: The process that is currently being executed. A running process possesses all
the resources needed for its execution, including the processor.

4. Waiting: A process that cannot execute until some event occurs, such as the completion of
an I/O operation. A running process may become suspended, for example, when it issues an
I/O request.

5. Terminated: A process that has been released from the pool of executable processes by
the operating system.

Whenever a process changes state, the operating system updates the process state in its
PCB (Process Control Block). Only one process can be running on any processor at any
instant, while many processes may be in the ready and waiting states.
Process Control Block (PCB)
• Each process is represented by a process control block (PCB).
• The PCB is the data structure in which the operating system groups all the information it
needs about a particular process.
• Pointer: Points to another process control block; it is used for maintaining the
scheduling list.
• Process State: The state may be new, ready, running, waiting, and so on.

• Program Counter: Indicates the address of the next instruction to be executed for this process.

• Event Information: For a process in the blocked (waiting) state, this field contains
information concerning the event for which the process is waiting.

• CPU Registers: A processor register (CPU register) is one of a small set of data-holding
places that are part of the processor. A register may hold an instruction, a storage
address, or any other kind of data.
• Memory Management Information: This may include the values of the base and limit
registers. It is useful for deallocating memory when the process terminates.

• Accounting Information: This includes the amount of CPU time used, time limits, job or
process numbers, account numbers, etc.

• Process Number: Every process is assigned a unique ID, known as the process ID (PID).

• The process control block also includes information about CPU scheduling, I/O and
resource management, file management, priority, and so on. The PCB simply serves as the
repository for any information that may vary from process to process.
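To make the fields above concrete, here is a minimal sketch of a PCB as a C structure. The field names and types are illustrative assumptions for teaching, not taken from any real operating system.

```c
/* Illustrative PCB layout; field names and types are assumptions,
 * not from any real kernel. */

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* process number: unique process ID     */
    proc_state_t   state;           /* new, ready, running, waiting, ...     */
    unsigned long  program_counter; /* address of the next instruction       */
    unsigned long  registers[16];   /* saved CPU registers (count assumed)   */
    int            priority;        /* scheduling priority                   */
    unsigned long  base_register;   /* memory-management information         */
    unsigned long  limit_register;
    unsigned long  cpu_time_used;   /* accounting information                */
    int            waiting_event;   /* event info while in the waiting state */
    struct pcb    *next;            /* pointer used to link PCBs into queues */
} pcb_t;
```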
Process Scheduling
• The objective of multiprogramming is to keep some processes in main memory for execution
and to switch the CPU among them, so as to maximize CPU utilization.

• This requires process scheduling.

• Scheduling queues:

• Job queue: As processes enter the system, they are put into the job queue, which consists
of all processes in the system.

• Ready queue: The processes that reside in main memory and are ready and waiting to
execute are kept on a list called the ready queue.

• Device queue: The list of processes waiting for a particular device is called a device queue.
Each device has its own queue.
Queuing diagram
• A newly arrived process is put in the ready queue. Processes wait in the ready queue until
the CPU is allocated to them.

• Once the CPU is assigned to a process, the process executes. While it is executing, one of
several events could occur:

1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.
Schedulers

 Schedulers are special system software that handle process scheduling in various ways.

 Their main task is to select the jobs to be submitted into the system and to decide which
process to run.

 Schedulers are of three types:

 Long Term Scheduler
 Short Term Scheduler
 Medium Term Scheduler
Long Term Scheduler
 It is also called the job scheduler.

 The job scheduler selects processes from the job queue and loads them into memory for
execution.

 The long-term scheduler admits more jobs when resource utilization is low and blocks
incoming jobs from entering the ready queue when utilization is too high.

 In this way, the long-term scheduler controls the degree of multiprogramming.


Short Term Scheduler
 It is also called the CPU scheduler.

 The CPU scheduler selects a process from among the processes that are ready to execute
and allocates the CPU to it.

 The short-term scheduler, also known as the dispatcher, executes most frequently and
decides which process to execute next.

 The short-term scheduler is faster than the long-term scheduler.
Difference between Long-Term and Short-Term Scheduler
• The long-term scheduler takes jobs from the job pool and manages them; the short-term
scheduler takes a process from the ready queue and switches the CPU to it.
• The long-term scheduler is known as the job scheduler; the short-term scheduler is known
as the CPU scheduler.
• The long-term scheduler maintains a queue of jobs (the job pool) and decides which job to
pick; the short-term scheduler maintains no such job pool.
• The long-term scheduler controls multiprogramming; the short-term scheduler controls
multitasking.
• The long-term scheduler prioritizes which program is selected for processing based on a
given mechanism; the short-term scheduler sets importance based on the type of operation.
Medium Term Scheduler
• Medium-term scheduling is a part of swapping.

• A suspended process (for example, one waiting on an I/O request) cannot make any
progress towards completion.

• In this situation, the process is removed from memory to make space for other processes.

• The suspended process is moved to secondary storage.

• This procedure is called swapping, and the process is said to be swapped out or rolled out.

• Swapping reduces the degree of multiprogramming.

• The medium-term scheduler is in charge of handling the swapped-out processes.
CPU Scheduler
 Whenever the CPU becomes idle, it is the job of the CPU scheduler (short-term scheduler) to select
another process from the ready queue to run next.

 Schedulers fall into one of two general categories:

 Non-preemptive scheduling:

 Once the resource (CPU cycles) is allocated to a process, the process holds the CPU until it
terminates or reaches a waiting state.

 In non-preemptive scheduling, the scheduler makes a scheduling decision only in the following two situations:
 When a process switches from the running state to the waiting state.

 When a process terminates.



 Preemptive scheduling:

 In preemptive scheduling, once a process has been given the CPU, the CPU can be taken
away from it.

 When the operating system decides to favor another process, it preempts the currently
executing process.

 The preempted process is placed back in the ready queue if it still has CPU burst time
remaining.

 It stays in the ready queue until it gets its next chance to execute.
Difference Between Preemptive and Non-Preemptive Scheduling in OS
• In preemptive scheduling, the processor can be preempted in the middle of executing the
current process in order to run a different process. In non-preemptive scheduling, once the
processor starts executing a process it must finish it before executing another; it cannot be
paused in the middle.

• CPU utilization is more efficient with preemptive scheduling than with non-preemptive
scheduling.

• Waiting and response times are lower with preemptive scheduling and higher with
non-preemptive scheduling.

• Preemptive scheduling is priority-driven: the highest-priority ready process is the one that
runs. In non-preemptive scheduling, once a process enters the running state it is not removed
from the scheduler until it finishes its job.

• In preemptive scheduling, a running process can be preempted and rescheduled; in
non-preemptive scheduling, it cannot.

• In preemptive scheduling, the CPU is allocated to a process for a limited time period. In
non-preemptive scheduling, the CPU is allocated to a process until it terminates or switches
to the waiting state.

• Preemptive scheduling has the overhead of switching processes between the ready and
running states; non-preemptive scheduling has no such overhead.
Scheduling Criteria
 There are several different criteria to consider when trying to select the "best" scheduling algorithm for a
particular situation and environment, including:

 CPU utilization - Ideally the CPU would be busy 100% of the time, so that no CPU cycles
are wasted. On a real system, CPU usage should range from about 40% (lightly loaded) to
90% (heavily loaded).

 Throughput - The number of processes completed per unit time. This may range from 10 per
second to 1 per hour, depending on the specific processes.

 Turnaround time - The time required for a particular process to complete, from submission
time to completion.
Scheduling Criteria….

 Waiting time - How much time a process spends in the ready queue waiting for its turn to
get the CPU.

 Response time - The amount of time from when a request is submitted until the first
response is produced.
 Note: this is the time until the first response, not the completion of process execution
(the final response).

In general, CPU utilization and throughput are maximized while the other measures are
minimized for proper optimization.
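As a small worked example with made-up numbers: suppose a process is submitted at time 0, first gets the CPU at time 2, needs a total CPU burst of 5, and completes at time 7. Its turnaround time is 7 − 0 = 7, its waiting time is 7 − 0 − 5 = 2, and its response time is 2 − 0 = 2.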
Scheduling Algorithms

We'll discuss four major scheduling algorithms here:

1. First Come First Serve (FCFS) Scheduling

2. Shortest Job First (SJF) Scheduling

3. Priority Scheduling

4. Round Robin (RR) Scheduling


First Come First Serve (FCFS) Scheduling Algorithm
 The first come first serve (FCFS) scheduling algorithm simply schedules jobs according to
their arrival time.

 The job that enters the ready queue first gets the CPU first: the earlier the arrival time of the
job, the sooner the job gets the CPU.

 FCFS scheduling can lead to very long waits for short jobs (the convoy effect) if the burst
time of the first process is the longest among all the jobs.


Advantages of FCFS
 It is simple to understand.

 It is easy to implement.

 It is straightforward: first come, first served.


Disadvantages of FCFS
 The scheduling method is non-preemptive; each process runs to completion.

 Due to the non-preemptive nature of the algorithm, short processes may be stuck waiting
behind long ones for a long time.

 Although it is easy to implement, it is poor in performance, since the average waiting time
is higher than with other scheduling algorithms.
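To illustrate this effect, here is a minimal FCFS sketch in C with made-up arrival and burst times; it is an assumption-laden teaching example, not a production scheduler. It sorts processes by arrival time, runs each to completion, and computes completion, turnaround, and waiting times.

```c
#include <stdio.h>

/* Illustrative FCFS simulation: arrival and burst times are made-up values. */
typedef struct { int pid, arrival, burst, completion, turnaround, waiting; } proc_t;

int main(void) {
    proc_t p[] = { {1, 0, 24, 0, 0, 0}, {2, 1, 3, 0, 0, 0}, {3, 2, 3, 0, 0, 0} };
    int n = sizeof p / sizeof p[0];

    /* Sort by arrival time (a simple insertion sort keeps the sketch short). */
    for (int i = 1; i < n; i++) {
        proc_t key = p[i];
        int j = i - 1;
        while (j >= 0 && p[j].arrival > key.arrival) { p[j + 1] = p[j]; j--; }
        p[j + 1] = key;
    }

    int time = 0;
    double total_wait = 0, total_turn = 0;
    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival) time = p[i].arrival; /* CPU idles until arrival */
        time += p[i].burst;                           /* runs to completion      */
        p[i].completion = time;
        p[i].turnaround = p[i].completion - p[i].arrival;
        p[i].waiting    = p[i].turnaround - p[i].burst;
        total_wait += p[i].waiting;
        total_turn += p[i].turnaround;
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               p[i].pid, p[i].completion, p[i].turnaround, p[i].waiting);
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           total_wait / n, total_turn / n);
    return 0;
}
```

With these made-up numbers the average waiting time works out to (0 + 23 + 25) / 3 = 16, showing how two short jobs get stuck behind one long job.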


Shortest Job First (SJF)

 Shortest Job First (SJF) is an algorithm in which the process with the smallest execution
time is chosen for the next execution.

 This scheduling method can be preemptive or non-preemptive.

 It significantly reduces the average waiting time of processes awaiting execution.

 It can improve process throughput by making sure that shorter jobs are executed first, and
hence they tend to have a short turnaround time.
Advantages of SJF
 Maximum throughput.

 Minimum average waiting and turnaround times.

Disadvantages of SJF
 Long processes may suffer from starvation.

 It is not directly implementable, because the exact burst time of a process cannot be known
in advance.
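Below is a minimal non-preemptive SJF sketch in C, with made-up arrival and burst times and the (unrealistic, as noted above) assumption that burst times are known in advance: whenever the CPU becomes free, it picks the shortest job among those that have already arrived.

```c
#include <stdio.h>

/* Illustrative non-preemptive SJF simulation; arrival/burst values are made up
 * and exact burst times are assumed to be known in advance. */
typedef struct { int pid, arrival, burst, done; } proc_t;

int main(void) {
    proc_t p[] = { {1, 0, 7, 0}, {2, 2, 4, 0}, {3, 4, 1, 0}, {4, 5, 4, 0} };
    int n = sizeof p / sizeof p[0], finished = 0, time = 0;
    double total_wait = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)            /* shortest arrived, unfinished job */
            if (!p[i].done && p[i].arrival <= time &&
                (pick == -1 || p[i].burst < p[pick].burst))
                pick = i;

        if (pick == -1) { time++; continue; }  /* nothing has arrived yet: idle */

        time += p[pick].burst;                 /* runs to completion (non-preemptive) */
        int waiting = time - p[pick].arrival - p[pick].burst;
        total_wait += waiting;
        printf("P%d: completion=%d waiting=%d\n", p[pick].pid, time, waiting);
        p[pick].done = 1;
        finished++;
    }
    printf("average waiting=%.2f\n", total_wait / n);
    return 0;
}
```

With these numbers the program reports an average waiting time of 4.0.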
Priority Scheduling Algorithm
What is Priority Scheduling?

 Priority scheduling is a method of scheduling processes based on priority.

 The process with the highest priority among all available processes is given the CPU next.

 Jobs with equal priorities are carried out on a round-robin or FCFS basis.

 In priority scheduling, a number is assigned to each process to indicate its priority level.

 The lower the number, the higher the priority.

 In the preemptive form of this algorithm, if a newly arriving process has a higher priority
than the currently running process, the running process is preempted.
Types of Priority Scheduling

 Preemptive Scheduling
 Tasks are assigned priorities.

 In preemptive priority scheduling, the job being executed can be stopped at the arrival of
a higher-priority job (see the sketch after this list).

 Non-Preemptive Scheduling
 In this method, once the CPU has been allocated to a process, it stays with that process.

 The process releases the CPU only by switching context (blocking) or by terminating.
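Here is a minimal sketch of the preemptive variant in C, simulated one time unit at a time with made-up arrival, burst, and priority values (lower number = higher priority, as above). Because the highest-priority arrived process is re-selected every tick, a newly arrived higher-priority job immediately preempts the running one.

```c
#include <stdio.h>

/* Illustrative preemptive priority scheduling, simulated one time unit at a
 * time; arrival/burst/priority values are made up, lower number = higher
 * priority. */
typedef struct { int pid, arrival, burst, remaining, priority; } proc_t;

int main(void) {
    proc_t p[] = { {1, 0, 5, 5, 3}, {2, 1, 3, 3, 1}, {3, 2, 2, 2, 2} };
    int n = sizeof p / sizeof p[0], finished = 0, time = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)  /* highest-priority (lowest number) arrived job */
            if (p[i].remaining > 0 && p[i].arrival <= time &&
                (pick == -1 || p[i].priority < p[pick].priority))
                pick = i;

        if (pick == -1) { time++; continue; }  /* CPU idle */

        p[pick].remaining--;                   /* run the chosen process for one unit */
        time++;
        if (p[pick].remaining == 0) {
            finished++;
            printf("P%d (priority %d) finished at time %d\n",
                   p[pick].pid, p[pick].priority, time);
        }
    }
    return 0;
}
```

Here P1 starts first but is preempted at time 1 by the higher-priority P2; P2 finishes at time 4, P3 at time 6, and P1 only at time 10.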
Round Robin Scheduling Algorithm
What is Round-Robin Scheduling?
 The name of this algorithm comes from the round-robin principle, where each person gets an
equal share of something in turns.

 In round-robin scheduling, each ready task runs in turn, in a cyclic queue, for a limited
time slice.

 This algorithm also offers starvation-free execution of processes.


Characteristics of Round-Robin Scheduling
 Round robin is a preemptive algorithm.

 The CPU is switched to the next process after a fixed interval of time, called the time
quantum (or time slice).

 The process that is preempted is added to the end of the ready queue.

 The time slice assigned to each task should be kept small; its exact value differs from
OS to OS.

 Round robin is one of the oldest, fairest, and easiest scheduling algorithms.

 It is a widely used scheduling method in traditional operating systems.
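A minimal round-robin sketch in C with an assumed time quantum of 4 and made-up burst times (all processes assumed to arrive at time 0): each process runs for at most one quantum; if it is not finished, it is preempted and placed at the back of the ready queue.

```c
#include <stdio.h>

/* Illustrative round-robin simulation with a fixed time quantum; burst values
 * are made up and all processes are assumed to arrive at time 0. */
#define QUANTUM 4

typedef struct { int pid, burst, remaining, completion; } proc_t;

int main(void) {
    proc_t p[] = { {1, 24, 24, 0}, {2, 3, 3, 0}, {3, 3, 3, 0} };
    int n = sizeof p / sizeof p[0];

    int queue[64], head = 0, tail = 0;        /* simple circular ready queue */
    for (int i = 0; i < n; i++) queue[tail++ % 64] = i;

    int time = 0, finished = 0;
    while (finished < n) {
        int i = queue[head++ % 64];           /* take process from the queue front */
        int run = p[i].remaining < QUANTUM ? p[i].remaining : QUANTUM;
        time += run;                          /* run for one quantum (or less)     */
        p[i].remaining -= run;
        if (p[i].remaining == 0) {
            p[i].completion = time;
            finished++;
            printf("P%d finished at time %d\n", p[i].pid, p[i].completion);
        } else {
            queue[tail++ % 64] = i;           /* preempted: back of the queue */
        }
    }
    return 0;
}
```

With this quantum, the short jobs P2 and P3 finish early (at times 7 and 10) instead of waiting behind the long job, while P1 completes at time 30.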
