Processor management
Process
A process (task) is a single instance of an executable program. A process executes its instructions in a sequential fashion.
A process needs certain resources to accomplish its task, such as CPU time, memory, files and I/O devices.
A process is an active entity that requires a set of resources, including a processor and special registers, to perform its function.
For example, a computer program is written in a text file; when it is executed, it becomes a process that carries out all the tasks stated in the program.
A program becomes a process when it is loaded into memory.
All of these processes may execute concurrently (parallelism may occur).
The basic layout of a process inside main memory can be divided into four sections: stack, heap, text and data.
Process Contd..
Stack
The process stack contains temporary data such as method/function parameters, return addresses and local variables.
Heap
This is memory that is dynamically allocated to the process during its run time.
Text
This contains the current activity, represented by the value of the program counter and the contents of the processor's registers.
Data
This section contains the global and static variables.
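As a small illustration of these four sections, here is a hedged C sketch (the variable and function names are just for illustration) showing where typical program objects live:

    #include <stdlib.h>

    int counter = 0;                    /* data section: global/static variables        */

    int add(int a, int b)               /* the compiled code itself lives in the text section */
    {
        int sum = a + b;                /* stack: parameters, return address, locals    */
        return sum;
    }

    int main(void)
    {
        int *buffer = malloc(4 * sizeof(int));   /* heap: allocated at run time         */
        if (buffer != NULL) {
            buffer[0] = add(counter, 1);
            free(buffer);
        }
        return 0;
    }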
Program
A program (job) is a piece of code, which may consist of one or more lines.
A computer program is written by a programmer using a programming language.
A program (job) is a unit of work that has been submitted by a user to the OS.
In contrast to a program, a process is a dynamic instance of a computer program.
Thread
A portion of a process that can run independently.
A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism.
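A minimal sketch, assuming a POSIX system with pthreads, of two threads running independently inside one process (the worker function is illustrative):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread (lightweight process) runs this function independently. */
    static void *worker(void *arg)
    {
        printf("thread %d running\n", *(int *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;

        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);

        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }

On most systems this is compiled with the pthread library (for example, gcc with -lpthread).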
Processor ( CPU)
A component of the computer that performs calculations and executes programs.
In multiprogramming, the processor is “allocated” to each program or process for a period of time and “deallocated” at the right moment.
Interrupt
A call for help.
Activates a higher-priority program.
Process Life Cycle
A process passes through different states as it executes.
A process can be in one of the following five states at a time.
Start
This is the initial state when a process is first started/created.
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the OS to allocate the processor to them so that they can execute. A process may enter this state after the Start state, or while executing, when the scheduler interrupts it to assign the CPU to some other process.
Running
Once the process has been allocated to a processor by the OS scheduler, the process state is set to running
and the processor executes its instructions.
Process Life Cycle Contd…
Waiting
A process moves into the waiting state if it needs to wait for a resource, such as user input or a file to become available.
Terminated/Exit
Once the process finishes its execution, or is terminated by the OS, it is moved to the exit state, where it waits to be removed from main memory.
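A minimal C sketch of how these five states might be represented inside an OS (the names are illustrative, not taken from any particular kernel):

    /* Illustrative process states, mirroring the five states described above. */
    enum proc_state {
        STATE_START,        /* process is being created                     */
        STATE_READY,        /* waiting to be assigned a processor           */
        STATE_RUNNING,      /* instructions are currently being executed    */
        STATE_WAITING,      /* waiting for a resource, user input or a file */
        STATE_TERMINATED    /* finished; waiting to be removed from memory  */
    };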
Job and Process Status
Jobs move through the system in five states, called the job status or process status.
The five states are HOLD, READY, RUNNING, WAITING and FINISHED.
(State diagram: Admitted moves a job from HOLD to READY; Scheduler dispatch moves it from READY to RUNNING; Interrupt moves it from RUNNING back to READY; I/O or event wait moves it from RUNNING to WAITING; I/O or event completion moves it from WAITING to READY; Exit moves it from RUNNING to FINISHED.)
Transition Among Process States
The user submits a job; when the job is accepted, it is put on HOLD and placed in a queue.
The job state changes from HOLD to READY, indicating that the job is waiting for the CPU.
The job state changes from READY to RUNNING when the job is selected for the CPU and processing.
The job state changes from RUNNING to WAITING when the job requires resources that are unavailable.
The job state changes to FINISHED when the job is completed (successfully or unsuccessfully).
Process Control Block (PCB)
It is a data structure, maintained by the OS for every process, that contains basic information about the job.
Contains basic job information
What it is
Where it is going
How much processing completed
Where stored
How much time spent using resources
A PCB keeps all the information needed to keep track of a process.
Process Control Block Components
Process identification
Unique
Process status
Job state (HOLD, READY, RUNNING, WAITING)
Process state
Process status word, register contents, main memory
information, resources, process priority
Accounting
Performance measurements
CPU time, total time, memory occupancy, I/O operations,
number of input records read, etc.
I/O status information
This includes a list of I/O devices allocated to the process.
Etc.
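A hypothetical C sketch of a PCB holding the components listed above (field names and sizes are illustrative; real operating systems store considerably more):

    /* Illustrative Process Control Block. */
    struct pcb {
        int           pid;               /* process identification (unique)        */
        int           status;            /* HOLD, READY, RUNNING, WAITING, ...     */
        unsigned long program_counter;   /* process state: where to resume         */
        unsigned long registers[16];     /* saved register contents                */
        void         *memory_base;       /* main memory information                */
        int           priority;          /* process priority                       */
        unsigned long cpu_time_used;     /* accounting: CPU time, I/O counts, etc. */
        int           open_devices[8];   /* I/O status: devices allocated          */
    };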
Process Scheduling Policies
The aim of multiprogramming is to maximize CPU utilization by executing some process at all times.
The objective of time sharing is to switch the CPU among processes so frequently that users can interact with
each program while it is executing.
Scheduling must also work within certain system limitations:
Finite number of resources (disk drives, printers, tape drives)
Some resources cannot be shared once allocated (printers)
Some resources require user involvement (tape drives)
Characteristics of Good Scheduling Policy
Maximize throughput
Run as many jobs as possible in a given amount of time
Minimize response time
Quickly turn around interactive requests
Minimize turnaround time
Move entire job in and out of system quickly
Minimize waiting time
Move job out of READY queue quickly
Maximize CPU efficiency
Keep CPU busy 100 percent of time
Ensure fairness for all jobs
Give every job equal utilization of CPU and I/O time
Process Scheduling Policy Contd…
Problem
Job wants CPU for a long time before I/O request issued
Builds up READY queue and empties I/O queues
Creates unacceptable system imbalance
Solution
Interrupt
Used by Process Scheduler upon predetermined end of time share
Current job activity suspended
Reschedules job into READY queue
Context Switch
A context switch is the mechanism for storing and restoring the state (context) of the CPU in the PCB, so that a process's execution can be resumed from the same point at a later time.
Context switching allows multiple processes to share a single CPU.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its PCB.
After this, the state of the process to execute next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be saved and restored.
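The following is a simplified, hypothetical sketch of the save/restore step; a real context switch is performed by the kernel in assembly, but the idea is the same:

    #include <string.h>

    /* Hypothetical saved CPU context kept inside each PCB. */
    struct cpu_context {
        unsigned long pc;
        unsigned long regs[16];
    };

    static struct cpu_context hardware;   /* stands in for the real CPU state */

    void context_switch(struct cpu_context *current, struct cpu_context *next)
    {
        memcpy(current, &hardware, sizeof(hardware));  /* save the running process into its PCB */
        memcpy(&hardware, next, sizeof(hardware));     /* load the next process's saved state   */
        /* the CPU would now resume the next process at next->pc */
    }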
Types of process scheduling
Preemptive scheduling policy
It is based on priority: the scheduler may preempt (block) a low-priority running process at any time when a high-priority process enters the ready state.
Interrupts processing of a job and transfers the CPU to another job.
Non-preemptive scheduling policy
Functions without external interrupts.
Once a job is allocated the processor and starts execution, it remains in the RUNNING state uninterrupted until it issues an I/O request (natural wait) or until it is finished (with exceptions made for infinite loops).
Process Scheduling Algorithms
Based on a specific policy (preemptive or non-preemptive), the scheduler allocates the CPU and moves jobs through the system.
There are mainly five algorithm types:
First-come, first-served (FCFS)
Shortest job next (SJN)
Priority scheduling
Shortest remaining time (SRT)
Round robin
First Come First Served (FCFS)
Jobs are executed on a first come, first served basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on FIFO queue.
Poor in performance, as the average wait time is high.
FCFS Example
Process CPU Burst (Turnaround Time)
A 15 milliseconds
B 2 milliseconds
C 1 millisecond
If they arrive in the order A, B, then C, what does the time line look like?
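A short C sketch answering the question for the bursts above under FCFS: with all three arriving at time 0 in the order A, B, C, the timeline is A from 0 to 15, B from 15 to 17, and C from 17 to 18.

    #include <stdio.h>

    int main(void)
    {
        const char *name[]  = { "A", "B", "C" };   /* arrival order   */
        int         burst[] = { 15, 2, 1 };        /* CPU bursts (ms) */
        int         start   = 0;

        for (int i = 0; i < 3; i++) {
            printf("%s waits %2d ms, runs from %2d to %2d\n",
                   name[i], start, start, start + burst[i]);
            start += burst[i];
        }
        /* Average wait here is (0 + 15 + 17) / 3, roughly 10.7 ms. */
        return 0;
    }

The long job A makes B and C wait, which is why FCFS tends to have a high average wait time.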
First Come First Served (FCFS) Contd…
Wait time of each process is as follows −
Average Wait Time: (0+4+6+13) / 4 = 5.75
Shortest Job Next (SJN)
SJN is also known as shortest job first (SJF).
Non-preemptive type scheduling algorithm.
This is the best approach for minimizing average waiting time.
It is impossible to implement in interactive systems where the required CPU time is not known in advance.
The processor must know in advance how much time the process will take to execute.
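A minimal SJN sketch in C, assuming all jobs are ready at time 0 and their burst times are known (the burst values are hypothetical): sort by burst length and run the shortest first.

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare jobs by CPU burst, shortest first. */
    static int by_burst(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int burst[] = { 6, 2, 8, 3 };   /* hypothetical burst times */
        int n = 4, wait = 0, total = 0;

        qsort(burst, n, sizeof(int), by_burst);

        for (int i = 0; i < n; i++) {
            printf("burst %d waits %d\n", burst[i], wait);
            total += wait;
            wait  += burst[i];
        }
        printf("average wait = %.2f\n", (double)total / n);
        return 0;
    }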
Shortest Job Next Example
Wait time of each process is as follows −
Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
Priority Based Scheduling
This is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
A priority is assigned to each process. The process with the highest priority is executed first, and so on.
The processes with equal priority are executed on first come first served basis.
The priority can be decided based on memory requirements, time requirements or any other resource
requirements.
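A hedged sketch of non-preemptive priority scheduling (hypothetical jobs; a lower number means a higher priority), breaking ties between equal priorities on a first come, first served basis:

    #include <stdio.h>
    #include <stdlib.h>

    struct job { int id; int priority; int burst; };

    /* Highest priority first; equal priorities keep arrival (id) order, i.e. FCFS. */
    static int by_priority(const void *a, const void *b)
    {
        const struct job *x = a, *y = b;
        if (x->priority != y->priority)
            return x->priority - y->priority;
        return x->id - y->id;
    }

    int main(void)
    {
        struct job jobs[] = { {1, 2, 5}, {2, 1, 3}, {3, 2, 8}, {4, 3, 6} };
        int n = 4, clock = 0;

        qsort(jobs, n, sizeof(struct job), by_priority);

        for (int i = 0; i < n; i++) {
            printf("job %d (priority %d) runs from %d to %d\n",
                   jobs[i].id, jobs[i].priority, clock, clock + jobs[i].burst);
            clock += jobs[i].burst;
        }
        return 0;
    }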
Priority Based Scheduling Example
Wait time of each process is as follows −
Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
Context Switching and Preemptive Algorithms
When Job A is preempted:
All of its processing information must be saved in its PCB for later use (when Job A's execution is resumed).
The contents of Job B's PCB are loaded into the appropriate registers so that Job B can start (or resume) running; this is a context switch.
Later, when Job A is again assigned to the processor, another context switch is performed.
The information of every preempted job is stored in its own PCB.
The contents of Job A's PCB are then loaded back into the appropriate registers.
Round Robin Scheduling
Round robin is a preemptive process scheduling algorithm.
Each process is assigned a fixed time to execute, called a quantum.
Once a process has executed for the given time period, it is preempted (if required) and another process executes for its time period.
Context switching is used to save the information of preempted processes.
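A minimal round robin sketch in C (hypothetical remaining bursts, quantum of 4 ms) showing how the CPU cycles through the processes one quantum at a time:

    #include <stdio.h>

    int main(void)
    {
        int remaining[] = { 10, 4, 7 };   /* hypothetical remaining bursts (ms) */
        int n = 3, quantum = 4, clock = 0, unfinished = 3;

        while (unfinished > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;                          /* process already finished */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%2d: process %d runs for %d ms\n", clock, i, slice);
                clock        += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0)
                    unfinished--;                      /* leaves the READY queue   */
            }
        }
        return 0;
    }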
Round Robin Scheduling Example
Wait time of each process is as follows −
Average Wait Time: (9+2+12+11) / 4 = 8.5
Deadlocks
In a multiprogramming system, processes request resources. If a requested resource is currently being used by another process, the requesting process enters a waiting state. However, if the other processes are themselves in a waiting state, a deadlock occurs.
Definition –
A set of processes is in a deadlock state if every process in the set is waiting for an event (resource) that can
only be caused by some other process in the same set.
Example –
Process-1 requests the printer and gets it.
Process-2 requests the tape unit and gets it.
Process-1 requests the tape unit and waits.
Process-2 requests the printer and waits.
Process-1 and Process-2 are deadlocked!
Resources
Resource: something that a process uses
Normally limited (at least somewhat)
Examples of computer resources
Printers
Locks
Tapes
Tables (in a database)
Processes need access to resources in a reasonable order.
Two types of resources:
Preemptable resources: can be taken away from a process with no ill effects.
Nonpreemptable resources: will cause the process to fail if taken away.
When do deadlocks happen?
Suppose….
Process 1 holds resource A and requests resource B
Process 2 holds resource B and requests resource A
Both can be deadlocked, with neither able to proceed
Deadlocks happen when …
Processes are granted exclusive access to devices or resources
Each deadlocked process requests a resource held by another deadlocked process
In a deadlock, none of the processes can execute or release resources.
(Figure: Process 1 holds resource A and requests B; Process 2 holds resource B and requests A.)
Conditions for Deadlock
Four conditions must hold for a deadlock to occur.
Mutual exclusion
Each resource is assigned to at most one process.
Hold and wait
A process holding resources can request more resources.
No preemption
Previously granted resources cannot be forcibly taken away.
Circular wait
There must be a circular chain of two or more processes, each waiting for a resource held by the next member of the chain.
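As a hedged illustration of how these four conditions combine, the POSIX-threads sketch below (the mutexes simply stand in for a printer and a tape unit) will usually hang in a circular wait when run:

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t printer   = PTHREAD_MUTEX_INITIALIZER;  /* mutual exclusion */
    pthread_mutex_t tape_unit = PTHREAD_MUTEX_INITIALIZER;

    static void *process_1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&printer);      /* hold the printer ...           */
        sleep(1);
        pthread_mutex_lock(&tape_unit);    /* ... and wait for the tape unit */
        pthread_mutex_unlock(&tape_unit);
        pthread_mutex_unlock(&printer);
        return NULL;
    }

    static void *process_2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&tape_unit);    /* hold the tape unit ...                      */
        sleep(1);
        pthread_mutex_lock(&printer);      /* ... and wait for the printer: circular wait */
        pthread_mutex_unlock(&printer);
        pthread_mutex_unlock(&tape_unit);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, process_1, NULL);
        pthread_create(&t2, NULL, process_2, NULL);
        pthread_join(t1, NULL);            /* typically never returns: deadlock */
        pthread_join(t2, NULL);
        return 0;
    }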
Resource allocation graphs
Resource allocation is shown by directed graphs.
Example 1: Resource R is assigned to process A.
Example 2: Process B is requesting / waiting for resource S.
Example 3: Process C holds T and is waiting for U; process D holds U and is waiting for T. C and D are in deadlock!
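A small C sketch of how Example 3 could be checked for deadlock: represent the graph as an adjacency matrix (nodes C, T, D, U; edge directions follow the holds/waits arrows described above) and search it for a cycle. The representation is illustrative, not how any particular OS stores it.

    #include <stdio.h>

    #define N 4   /* nodes: 0 = process C, 1 = resource T, 2 = process D, 3 = resource U */

    /* edge[i][j] = 1 means an arrow from node i to node j. */
    int edge[N][N] = {
        /* C */ { 0, 0, 0, 1 },   /* C requests U    */
        /* T */ { 1, 0, 0, 0 },   /* T is held by C  */
        /* D */ { 0, 1, 0, 0 },   /* D requests T    */
        /* U */ { 0, 0, 1, 0 },   /* U is held by D  */
    };

    int visited[N], on_path[N];

    /* Depth-first search: an edge back to a node on the current path is a cycle. */
    int has_cycle(int v)
    {
        visited[v] = on_path[v] = 1;
        for (int w = 0; w < N; w++) {
            if (!edge[v][w])
                continue;
            if (on_path[w] || (!visited[w] && has_cycle(w)))
                return 1;
        }
        on_path[v] = 0;
        return 0;
    }

    int main(void)
    {
        for (int v = 0; v < N; v++)
            if (!visited[v] && has_cycle(v)) {
                printf("cycle found: the processes are deadlocked\n");
                return 0;
            }
        printf("no cycle: no deadlock\n");
        return 0;
    }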
Deadlock Prevention
Deadlock can be prevented entirely.
Ensure that at least one of the conditions for deadlock never holds:
Mutual exclusion
Hold & wait
No preemption
Circular wait
Eliminating mutual exclusion
Not all resources of the computer system are sharable. Some resources, such as printers and processing units, are non-sharable, so it is not possible to prevent deadlocks by denying mutual exclusion.
Principle –
Avoid assigning a resource when it is not absolutely necessary.
Ensure that as few processes as possible actually claim the resource.
Deadlock Prevention Contd…
Attacking “HOLD and WAIT”
Require processes to request all of their resources before starting.
A process then never has to wait for resources it needs.
This can present problems:
A process may not know its required resources at the start of execution.
It also ties up resources that other processes could be using.
Processes will tend to request resources they might need.
Variation: a process must give up all of its resources before making a new request.
The process is then granted all of its previous resources as well as the new ones.
Deadlock Prevention Contd…
Attacking “No Preemption”
This is not usually a feasible option.
If a process that is holding some resources requests another resource that cannot be allocated to it, then it must release all resources currently assigned to it.
When a process requests resources, if they are available, they are allocated to it.
If a requested resource is not available, the OS checks whether it is being used or whether it is allocated to some other process that is itself waiting for other resources. If the resource is not being used, the OS preempts it from the waiting process and allocates it to the requesting process. If the resource is in use, the requesting process must wait.