UNIT-II
Process and CPU Scheduling - Process concepts and scheduling, Operations on processes,
Cooperating Processes, Threads, and Inter process Communication, Scheduling Criteria,
Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management-fork, exit, wait, waitpid, exec
1. DIFFERENCE BETWEEN PROGRAM AND PROCESS
Program | Process
A program is a set of instructions. | When a program is executed, it is known as a process.
A program is a passive/static entity. | A process is an active/dynamic entity.
A program has a longer life span; it is stored on the hard disk in the computer. | A process has a limited life span; it is created when execution starts and terminates when execution is finished.
Process memory is divided into four sections for efficient working:
Figure: Process memory layout — Stack (at the max address), followed by the Heap, Data, and Text sections (down to address 0).
The Text section contains the compiled program code.
The Data section contains the global and static variables required to run the process.
The Heap is used for dynamic memory allocation, and is managed via calls to new,
delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables
when they are declared. It also deals with function calls.
2. PROCESS STATE DIAGRAM
Processes in the operating system can be in any of the following five states:
NEW: A program that is present in secondary memory and wants to execute, and that will later
be picked up by the OS and loaded into main memory, is called a new process.
READY: The process is loaded into main memory by the OS. A process present in main
memory, waiting to be allocated the CPU, is said to be in the ready state. There can be many
processes in the ready state.
RUNNING: One of the processes from the ready state will be chosen by the OS depending
upon the scheduling algorithm and given to the processor. The instructions within this
process are executed by the processor and that process is said to be in running state.
WAITING: Whenever the running process requests access to I/O or needs other events to
occur, it enters into the waiting state.
TERMINATED: Whenever the running process completes its execution or is aborted in the
middle, it enters the terminated state.
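These five states can also be captured in code. The following is a minimal illustrative C sketch; the identifiers are hypothetical and not taken from any real kernel:

enum proc_state {
    STATE_NEW,        /* picked up by the OS, being loaded into memory */
    STATE_READY,      /* in main memory, waiting for the CPU           */
    STATE_RUNNING,    /* instructions currently executed by the CPU    */
    STATE_WAITING,    /* blocked on I/O or some other event            */
    STATE_TERMINATED  /* execution completed or aborted                */
};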
3. PROCESS CONTROL BLOCK
A Process Control Block (PCB) is a data structure that contains information related to a process.
The PCB stores many data items that are needed for efficient process management. These data
items are explained with the help of the given diagram:
Process ID
Process State
Program Counter
CPU Registers
List Of Open Files
. . .
CPU Scheduling Information
Memory Management Information
I/O Status Information
Accounting Information
Figure: Process control block
Process ID: A unique identification number associated with each process.
Process State: This specifies the process state i.e. new, ready, running, waiting or
terminated.
Program Counter: This contains the address of the next instruction that needs to be
executed in the process.
CPU Registers: This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files: These are the different files that are associated with the process.
CPU Scheduling Information: The process priority, pointers to scheduling queues, etc. are
present in the CPU scheduling information. This may also include other scheduling
parameters.
Memory Management Information: The memory management information includes the
page tables or the segment tables depending on the memory system used. It also contains the
value of the base registers, limit registers etc.
I/O Status Information: This information includes the list of I/O devices used by the
process, open file tables etc.
Accounting information: The time limits, account numbers, amount of CPU used, process
numbers etc. are all a part of the PCB accounting information.
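To make the PCB concrete, here is a simplified C sketch; the field names and sizes are illustrative only (real kernels, e.g. Linux's task_struct, hold far more):

#include <stdint.h>

struct pcb {
    int      pid;               /* Process ID                               */
    int      state;             /* new / ready / running / waiting / ...    */
    uint64_t program_counter;   /* address of the next instruction          */
    uint64_t registers[16];     /* saved CPU registers                      */
    int      priority;          /* CPU scheduling information               */
    uint64_t base, limit;       /* memory management (base/limit registers) */
    int      open_files[16];    /* list of open file descriptors            */
    uint64_t cpu_time_used;     /* accounting information                   */
};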
4. PROCESS SCHEDULING
The act of determining which process in the ready state should be moved to the running
state is known as process scheduling.
The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs. For achieving this, the scheduler must apply
appropriate rules for swapping processes in and out of the CPU. Scheduling falls into one of two
general categories:
Non Pre-emptive Scheduling: In non-preemptive scheduling, once the CPU has been
allocated to a process, it is not taken back until the process completes its execution.
Pre-emptive Scheduling: In preemptive scheduling, the CPU can be taken back from the
process at any time during the execution of the process.
Scheduling Queues
All processes, upon entering into the system, are stored in the Job Queue.
Processes in the Ready state are placed in the Ready Queue.
Processes waiting for a device to become available are placed in Device Queues. There
are unique device queues available for each I/O device.
A new process is initially put in the Ready queue. It waits in the ready queue until it is selected
for execution (or dispatched) as per scheduling algorithm. Once the process is assigned to the
processor and is executing, one of the following events can occur:
The process could issue an I/O request, and then be placed in the I/O queue. (The
process is placed in waiting state).
The process could create a new sub-process and wait for its termination. (The process is
placed in waiting state).
The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue. (The process is placed in ready state).
Figure: Queueing-diagram representation of process scheduling (job queue, ready queue, and device queues).
The process in the waiting state moves to the ready state after the event it was waiting for
occurs. This cycle repeats until the process terminates.
Types of Schedulers
There are three types of schedulers available. They are:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler: It is also called a job scheduler. It selects processes from the job queue
and loads them into memory for execution. The primary objective of the job scheduler is to
provide a balanced mix of jobs, such as I/O bound and processor bound. It also controls the
degree of multiprogramming.
Short Term Scheduler: It is also called the CPU scheduler. It selects one of the processes in the
ready queue as per the scheduling algorithm and allocates the processor to execute it. It shifts the
process from ready state to running state. Its main objective is to increase system performance.
Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler: Medium-term scheduling is a part of swapping. When a process is
in waiting state, it cannot make any progress towards completion. In this condition, the waiting
process is moved from main memory to secondary memory to free up main memory space
for loading other processes. This is called swapping, and the process is said to be
swapped out or rolled out.
Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
Speed is lesser than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short-term and long-term schedulers.
It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can reintroduce a process into memory so that its execution can be continued.
5. CONTEXT SWITCH
A context switch is the mechanism to store and restore the state or context of a CPU in PCB
(Process Control block) so that a process execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential part of a multitasking operating system. When the scheduler
switches the CPU from one process to another, the state of the currently running process is
stored into its PCB. After this, the state of the next process to run is loaded from its own PCB
and used to set the PC, registers, etc. At that point, the second process can start executing.
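A context switch proper happens inside the kernel, but the idea of saving and restoring an execution context can be demonstrated in user space with the POSIX <ucontext.h> API (deprecated in newer POSIX, still available on Linux/glibc). This is an analogy, not kernel code:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, work_ctx;
static char work_stack[64 * 1024];

static void worker(void) {
    puts("worker: running");
    swapcontext(&work_ctx, &main_ctx);  /* save worker's context, resume main */
    puts("worker: resumed exactly where it left off");
}

int main(void) {
    getcontext(&work_ctx);              /* initialize, then give worker its own stack */
    work_ctx.uc_stack.ss_sp = work_stack;
    work_ctx.uc_stack.ss_size = sizeof work_stack;
    work_ctx.uc_link = &main_ctx;       /* where to resume when worker returns */
    makecontext(&work_ctx, worker, 0);

    swapcontext(&main_ctx, &work_ctx);  /* "dispatch" worker                */
    puts("main: back in main");
    swapcontext(&main_ctx, &work_ctx);  /* resume worker from its saved state */
    puts("main: done");
    return 0;
}

Just as with PCBs, swapcontext( ) stores the current register state into one structure and loads the CPU state from another, so each side later resumes from the exact point where it was switched out.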
6. OPERATIONS ON THE PROCESS
Process Creation: Through appropriate system calls, such as fork( ) or spawn( ), processes
may create other processes. The process which creates other process is termed the parent
process and the created sub-process is termed its child process. Each process is given an
integer identifier, termed the process identifier, or PID. Once the process is created, it becomes
ready and enters the ready queue (in main memory) for execution.
Process Scheduling: Out of the many processes present in the ready queue, the operating
system chooses one process and starts executing it. Selecting the process which is to be
executed next is known as scheduling.
Process Execution: The CPU starts executing the scheduled process. When a process enters
into blocked or waiting state during the execution, then the processor starts executing the
other processes.
Process Termination: Once the purpose of the process gets over then the OS will kill the
process. The Context of the process (PCB) will be deleted and the process gets terminated by
the Operating system. The running process can enter into termination state by using system
call exit( ). Normally, a parent process terminates only after all its child processes have
terminated. If the parent process terminates before its child processes do, the children become
orphan processes; these are typically adopted by the init process or killed off.
7. INTER PROCESS COMMUNICATION (IPC)
Independent process: A process is said to be independent when it cannot affect or be affected
by any other processes running in the system. In other words, any process that does not
share data with other processes is called an independent process.
Cooperating process: Cooperating processes are those that can affect or are affected by other
processes running on the system. Cooperating processes may share data with each other.
Reasons for cooperating processes
There may be many reasons for the requirement of cooperating processes. Some of these are
given as follows:
Modularity: Modularity involves dividing complicated tasks into smaller subtasks.
These subtasks can be completed by different cooperating processes. This leads to faster and
more efficient completion of the required tasks.
Information Sharing: Sharing of information between multiple processes can be
accomplished using cooperating processes. This may include access to the same files. A
mechanism is required so that the processes can access the files in parallel to each other.
Convenience: There are many tasks that a user needs to do such as compiling, printing,
editing etc. It is convenient if these tasks can be managed by cooperating processes.
Computation Speedup: Subtasks of a single task can be performed in parallel using
cooperating processes. This increases the computation speedup as the task can be
executed faster. However, this is only possible if the system has multiple processing
elements.
Methods of Cooperation (IPC)
Cooperating processes can coordinate and communicate with each other using shared data or
messages. Details about these are given as follows:
Cooperation by Sharing: The cooperating processes can cooperate with each other
using shared data such as memory, variables, files, databases etc. Critical section is used
to provide data integrity, and writes are made mutually exclusive to prevent inconsistent data. A
diagram that demonstrates cooperation by sharing is given as follows:
Figure: Cooperation by sharing — processes P1 and P2 access shared data through the kernel.
In the above diagram, Process P1 and P2 can cooperate with each other using shared data
such as memory, variables, files, databases etc.
Cooperation by Communication: The cooperating processes can cooperate with each
other using messages. This may lead to deadlock if each process is waiting for a message
from the other to perform an operation. Starvation is also possible if a process never
receives a message. A diagram that demonstrates cooperation by communication is given
as follows:
Figure: Cooperation by communication — process P1 sends message M through the kernel to process P2.
In the above diagram, Process P1 and P2 can cooperate with each other using messages to
communicate. Process P1 sends message ‘M’ to the kernel, and the kernel delivers the message
‘M’ to process P2.
8. SCHEDULING CRITERIA
The "best" scheduling algorithm can be decided based on many criteria. They are:
CPU Utilization: CPU utilization is the fraction of time that the processor is busy over a
period of time. It is also used to estimate system performance. To make the best use of the
CPU and not waste any CPU cycles, the CPU should be kept working most of the time
(ideally 100% of the time). In a real system, CPU utilization should range from about 40%
(lightly loaded) to 90% (heavily loaded).
Throughput: It is the total number of processes completed per unit time; in other words, the
total amount of work done in a unit of time.
Turnaround Time: It is the amount of time taken to execute a particular process, i.e. the
interval from time of submission of the process (arrival time) to the time of completion of the
process (Finish time). Turnaround Time = Finish Time - Arrival Time
Waiting Time: The sum of the periods of time a process spends waiting in the ready queue
for the CPU. Waiting Time = Turnaround Time - Burst Time
Load Average: It is the average number of processes residing in the ready queue waiting for
their turn to get into the CPU.
Response Time: Amount of time it takes from when a request was submitted until the first
response is produced. Remember, it is the time till the first response and not the completion
of process execution (final response).
Note: In general, CPU utilization and throughput are maximized, while the other measures are
minimized, for proper optimization.
9. CPU SCHEDULING ALGORITHMS
A CPU scheduling algorithm decides the order of execution of the processes so as to achieve
maximum CPU utilization.
Important Terminologies
Burst Time/Execution Time: It is the time required by the process to complete its execution.
It is also called running time.
Arrival Time: It is the time when a process first enters the ready state.
Finish Time: It is the time when the process completes its execution.
Turnaround Time: It is the time interval from arrival time of the process to the finish
time of the process. Turnaround Time = Finish Time - Arrival Time
Waiting Time: The sum of the periods of time spent waiting in the ready queue.
Waiting Time = Turnaround Time - Burst Time
Average waiting time: It is the average of the waiting times of all the processes executed
over a period of time.
Average turnaround time: It is the average of the turnaround times of all the processes
executed over a period of time.
The popular CPU scheduling algorithms are:
First Come First Serve(FCFS) Scheduling
Shortest-Job-First(SJF) Scheduling
Priority Scheduling
Round Robin(RR) Scheduling
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling
i. First Come First Serve Scheduling: First Come First Served (FCFS) scheduling
algorithm schedules the execution of the processes based on their arrival time. The process
which arrives first gets executed first, the process which arrives second gets executed next, and so on.
It is implemented by using the FIFO (First in First out) queue. In this, a new process enters
through the tail of the queue, and the scheduler selects a process from the head of the queue
for execution. A perfect real-life example of FCFS scheduling is buying tickets at a ticket
counter.
It is the simplest CPU scheduling algorithm.
Average waiting time is very high as compared to other algorithms.
It is a non-preemptive algorithm.
Question: Consider the following five processes = ( P1, P2, P3, P4, P5 ) with Arrival times = ( 0,
0, 2, 5, 8 ) and Burst Time = ( 8, 6, 3, 5, 2 ) respectively. Find average waiting time and average
turnaround time for the above processes using FCFS CPU scheduling algorithm.
Solution: The GANTT chart is:
P1 P2 P3 P4 P5
0 8 14 17 22 24
PROCESS | ARRIVAL TIME | BURST TIME | START TIME | FINISH TIME | TURNAROUND TIME | WAITING TIME
P1 | 0 | 8 | 0 | 8 | 8 | 0
P2 | 0 | 6 | 8 | 14 | 14 | 8
P3 | 2 | 3 | 14 | 17 | 15 | 12
P4 | 5 | 5 | 17 | 22 | 17 | 12
P5 | 8 | 2 | 22 | 24 | 16 | 14
Average turnaround time = (8+14+15+17+16)/5
= 14.0
Average waiting time = (0+8+12+12+14)/5
= 9.2
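The table above can be reproduced with a short C program. This is a minimal sketch with the example's data hard-coded; it assumes the processes are listed in arrival order, as FCFS requires:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 0, 2, 5, 8};
    int burst[]   = {8, 6, 3, 5, 2};
    int n = 5, clock = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i]; /* CPU idles until the process arrives */
        int finish = clock + burst[i];
        int tat    = finish - arrival[i];           /* Turnaround = Finish - Arrival */
        int wt     = tat - burst[i];                /* Waiting = Turnaround - Burst  */
        printf("P%d: start=%d finish=%d tat=%d wt=%d\n", i + 1, clock, finish, tat, wt);
        total_tat += tat;
        total_wt  += wt;
        clock = finish;
    }
    printf("avg turnaround=%.1f avg waiting=%.1f\n", total_tat / n, total_wt / n);
    return 0;
}

Running it prints an average turnaround time of 14.0 and an average waiting time of 9.2, matching the table.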
ii. Shortest Job First (SJF) Scheduling: Shortest job first (SJF) scheduling algorithm
schedules the execution of the processes based on their burst time. The process with the
lowest burst time among the list of available processes is executed first, second shortest burst
time process is executed next, and so on. If two processes have the same burst time, then FCFS is
used to break the tie. This scheduling algorithm can be preemptive or non-preemptive.
This is the best approach to minimize average waiting time.
To successfully implement it, the burst time of the processes should be known to the
processor in advance, which is practically not possible all the time.
It may suffer from the problem of starvation.
Question: Consider the following five processes = ( P1, P2, P3, P4, P5 ) with Arrival times = ( 0,
0, 2, 5, 8 ) and Burst Time = ( 8, 6, 3, 5, 2 ) respectively. Find average waiting time and average
turnaround time for the above processes using SJF CPU scheduling algorithm.
Solution: The GANTT chart is:
P2 P3 P5 P4 P1
0 6 9 11 16 24
PROCESS | ARRIVAL TIME | BURST TIME | START TIME | FINISH TIME | TURNAROUND TIME | WAITING TIME
P1 | 0 | 8 | 16 | 24 | 24 | 16
P2 | 0 | 6 | 0 | 6 | 6 | 0
P3 | 2 | 3 | 6 | 9 | 7 | 4
P4 | 5 | 5 | 11 | 16 | 11 | 6
P5 | 8 | 2 | 9 | 11 | 3 | 1
Average turnaround time = (24+6+7+11+3)/5
= 10.2
Average waiting time = (16+0+4+6+1)/5
= 5.4
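The same metrics can be computed with a non-preemptive SJF loop: whenever the CPU becomes free, pick the arrived, unfinished process with the smallest burst time. A minimal sketch with the example's data:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 0, 2, 5, 8};
    int burst[]   = {8, 6, 3, 5, 2};
    int done[5]   = {0};
    int n = 5, clock = 0, finished = 0;
    double total_tat = 0, total_wt = 0;

    while (finished < n) {
        int pick = -1;                         /* shortest arrived, unfinished job */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { clock++; continue; } /* nothing has arrived: CPU idles */
        clock += burst[pick];                  /* run the chosen job to completion */
        int tat = clock - arrival[pick];
        total_tat += tat;
        total_wt  += tat - burst[pick];
        done[pick] = 1;
        finished++;
    }
    printf("avg turnaround=%.1f avg waiting=%.1f\n", total_tat / n, total_wt / n);
    return 0;
}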
iii. Shortest Remaining Time Next (SRN): Shortest Remaining Time Next (SRN) is the
preemptive version of Shortest Job First (SJF) algorithm. Initially, it starts like SJF, but when a
new process arrives with a burst time shorter than the remaining burst time of the currently
executing process, the running process is preempted and the newly arrived process executes.
Question: Consider the following five processes = ( P1, P2, P3, P4, P5 ) with Arrival times = ( 0,
0, 2, 3, 5 ) and Burst Time = ( 9, 8, 4, 2, 4 ) respectively. Find average waiting time and average
turnaround time for the above processes using preemptive version of SJF/Shortest Job
Next(SJN)/ Shortest Remaining Time Next (SRN) CPU scheduling algorithm.
Solution: The GANTT chart is:
P2 P3 P4 P3 P5 P2 P1
0 2 3 5 8 12 18 27
PROCESS | ARRIVAL TIME | BURST TIME | START TIME | FINISH TIME | TURNAROUND TIME | WAITING TIME
P1 | 0 | 9 | 18 | 27 | 27 | 18
P2 | 0 | 8 | 0 | 18 | 18 | 10
P3 | 2 | 4 | 2 | 8 | 6 | 2
P4 | 3 | 2 | 3 | 5 | 2 | 0
P5 | 5 | 4 | 8 | 12 | 7 | 3
Average turnaround time = (27+18+6+2+7)/5
= 12.0
Average waiting time = (18+10+2+0+3)/5
= 6.6
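For the preemptive version, the decision is re-made every time unit: run the arrived process with the least remaining time. A minimal tick-by-tick sketch with the example's data:

#include <stdio.h>

int main(void) {
    int arrival[]   = {0, 0, 2, 3, 5};
    int burst[]     = {9, 8, 4, 2, 4};
    int remaining[] = {9, 8, 4, 2, 4};
    int n = 5, finished = 0;
    double total_tat = 0, total_wt = 0;

    for (int t = 0; finished < n; t++) {
        int pick = -1;                /* least remaining time among arrived */
        for (int i = 0; i < n; i++)
            if (remaining[i] > 0 && arrival[i] <= t &&
                (pick == -1 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick == -1) continue;     /* CPU idle for this tick */
        if (--remaining[pick] == 0) { /* run pick for one time unit */
            int tat = (t + 1) - arrival[pick];
            total_tat += tat;
            total_wt  += tat - burst[pick];
            finished++;
        }
    }
    printf("avg turnaround=%.1f avg waiting=%.1f\n", total_tat / n, total_wt / n);
    return 0;
}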
iv. Priority Scheduling: Priority scheduling algorithm schedules the execution of the
processes based on their priorities. A priority value is associated with each process. The
process with the highest priority among the list of available processes is executed first, second
highest priority process executed next and so on. If two processes have the same priority then
FCFS is used to break the tie. This scheduling algorithm can be preemptive or non-preemptive.
Non-Preemptive Priority Scheduling: In case of non-preemptive priority scheduling
algorithm, if a new process arrives with a higher priority than the currently running process,
the incoming process is put at the head of the ready queue; it will be processed after the
execution of the current process completes.
Preemptive Priority Scheduling: If a new process arrives with higher priority than the
currently running process, then execution of current process is stopped and the new
process with higher priority gets executed.
Question: Consider the following five processes = ( P1, P2, P3, P4, P5 ) with Arrival times = ( 0,
0, 2, 3, 5 ), Burst Time = ( 9, 8, 4, 2, 4 ) and Priority values = ( 4, 3, 2, 5, 1 ) respectively,
where a smaller value means a higher priority. Find
average waiting time and average turnaround time for the above processes using
i. Priority CPU scheduling algorithm
ii. Preemptive version of Priority CPU scheduling algorithm.
Solution:
i. Priority CPU scheduling algorithm : The GANTT chart is
P2 P5 P3 P1 P4
0 8 12 16 25 27
PROCESS | ARRIVAL TIME | BURST TIME | PRIORITY | START TIME | FINISH TIME | TURNAROUND TIME | WAITING TIME
P1 | 0 | 9 | 4 | 16 | 25 | 25 | 16
P2 | 0 | 8 | 3 | 0 | 8 | 8 | 0
P3 | 2 | 4 | 2 | 12 | 16 | 14 | 10
P4 | 3 | 2 | 5 | 25 | 27 | 24 | 22
P5 | 5 | 4 | 1 | 8 | 12 | 7 | 3
Average turnaround time = (25+8+14+24+7)/5
= 15.6
Average waiting time = (16+0+10+22+3)/5
= 10.2
ii. Preemptive version of Priority CPU scheduling algorithm: The GANTT chart is
P2 P3 P5 P3 P2 P1 P4
0 2 5 9 10 16 25 27
PROCESS | ARRIVAL TIME | BURST TIME | PRIORITY | START TIME | FINISH TIME | TURNAROUND TIME | WAITING TIME
P1 | 0 | 9 | 4 | 16 | 25 | 25 | 16
P2 | 0 | 8 | 3 | 0 | 16 | 16 | 8
P3 | 2 | 4 | 2 | 2 | 10 | 8 | 4
P4 | 3 | 2 | 5 | 25 | 27 | 24 | 22
P5 | 5 | 4 | 1 | 5 | 9 | 4 | 0
Average turnaround time = (25+16+8+24+4)/5
= 15.4
Average waiting time = (16+8+4+22+0)/5
= 10.0
v. Round Robin Scheduling: A certain fixed time is defined in the system which is called
time quantum or time slice. Each process present in the ready queue is executed for at most
one time quantum at a time. If the execution of the process completes within that quantum,
the process terminates; otherwise it goes back to the ready queue and waits for its next turn
to complete the remaining execution. Context switching is used to save the execution
state of the preempted process. This algorithm avoids the starvation problem.
Figure: Round Robin scheduling — a new process joins the ready queue; the CPU runs a process for one time quantum; if its execution is completed it exits, otherwise it returns to the ready queue.
Round Robin is a preemptive process scheduling algorithm.
The time quantum should be small; otherwise Round Robin behaves like FCFS. (If it is made too small, however, context-switching overhead dominates.)
Round Robin is a clock-driven CPU scheduling algorithm.
Question: Consider the following five processes = ( P1, P2, P3, P4, P5 ) with Arrival times = ( 0,
2, 3, 4, 7 ) and Burst Time = ( 9, 8, 4, 6, 8 ) respectively. Find average waiting time and average
turnaround time for the above processes using Round Robin CPU Scheduling algorithm. Use
time quantum / time slice = 3.
Solution: The GANTT chart is
P1 P2 P3 P1 P4 P2 P5 P3 P1 P4 P2 P5
0 3 6 9 12 15 18 21 22 25 28 30 35
PROCESS | ARRIVAL TIME | BURST TIME | START TIME | FINISH TIME | TURNAROUND TIME | WAITING TIME
P1 | 0 | 9 | 0 | 25 | 25 | 16
P2 | 2 | 8 | 3 | 30 | 28 | 20
P3 | 3 | 4 | 6 | 22 | 19 | 15
P4 | 4 | 6 | 12 | 28 | 24 | 18
P5 | 7 | 8 | 18 | 35 | 28 | 20
Average turnaround time = (25+28+19+24+28)/5
= 24.8
Average waiting time = (16+20+15+18+20)/5
= 17.8
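Round Robin can be simulated with a FIFO ready queue. In this sketch (data from the example, quantum 3), newly arrived processes are enqueued before a preempted process is put back, which reproduces the Gantt chart above:

#include <stdio.h>

#define N 5

static int arrival[N] = {0, 2, 3, 4, 7};
static int burst[N]   = {9, 8, 4, 6, 8};
static int remaining[N];
static int queue[4 * N];                 /* simple FIFO ready queue */
static int head = 0, tail = 0;

static void enqueue_arrivals(int now, int enqueued[]) {
    for (int i = 0; i < N; i++)
        if (!enqueued[i] && arrival[i] <= now) {
            queue[tail++] = i;
            enqueued[i] = 1;
        }
}

int main(void) {
    int quantum = 3, t = 0, finished = 0;
    int enqueued[N] = {0};
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];
    enqueue_arrivals(t, enqueued);

    while (finished < N) {
        if (head == tail) { t++; enqueue_arrivals(t, enqueued); continue; }
        int p   = queue[head++];
        int run = remaining[p] < quantum ? remaining[p] : quantum;
        t += run;
        remaining[p] -= run;
        enqueue_arrivals(t, enqueued);   /* arrivals join before p is re-queued */
        if (remaining[p] == 0) {
            int tat = t - arrival[p];
            total_tat += tat;
            total_wt  += tat - burst[p];
            printf("P%d finishes at %d\n", p + 1, t);
            finished++;
        } else {
            queue[tail++] = p;           /* back to the tail of the ready queue */
        }
    }
    printf("avg turnaround=%.1f avg waiting=%.1f\n", total_tat / N, total_wt / N);
    return 0;
}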
vi. Multilevel Queue Scheduling: A multi-level queue scheduling algorithm partitions the
ready queue into several separate queues. The processes are classified into different groups.
Each group of the processes is assigned to one queue and each queue has its own scheduling
algorithm. Let us consider an example of a multilevel queue-scheduling algorithm with five
queues:
System Processes
Interactive Processes
Interactive Editing Processes
Batch Processes
Student Processes.
Each queue has absolute priority over lower-priority queues. For example, no process in the
batch queue could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the ready
queue while a batch process was running, the batch process would be preempted.
vii. Multilevel Feedback Queue Scheduling: In a multilevel feedback queue-scheduling
algorithm, all the processes initially enter a single queue. Later, a process is allowed to
move between queues. If a process uses too much CPU time, it will be moved to a lower-
priority queue. Similarly, a process that waits too long in a lower-priority queue may be
moved to a higher-priority queue. This form of aging prevents starvation.
10. THREADS
Thread is an execution unit which consists of its own program counter, a stack, and a set of
registers. Threads are also known as lightweight processes. Threads are a popular way to improve
application performance through parallelism. The CPU switches rapidly back and forth among the
threads, giving the illusion that the threads are running in parallel.
As each thread has its own independent execution state, many tasks within an application can
be executed in parallel by increasing the number of threads.
Types of Thread
There are two types of threads:
1. User Threads
2. Kernel Threads
User threads are implemented above the kernel, without kernel support. These are the threads that
application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel
level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service
multiple kernel system calls simultaneously.
Multithreading Models: The user threads must be mapped to kernel threads, by one of
the following strategies:
Many to One Model
One to One Model
Many to Many Model
Many to One Model
In the many to one model, many user-level threads are all mapped onto a single kernel
thread.
Thread management is handled by the thread library in user space, which is efficient.
However, if one thread makes a blocking system call, the entire process blocks.
One to One Model
The one to one model creates a separate kernel thread to handle each and every user thread.
Most implementations of this model place a limit on how many threads can be created.
Linux and Windows from 95 to XP implement the one-to-one model for threads.
Many to Many Model
The many to many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
Users can create any number of threads.
A blocking kernel system call does not block the entire process.
Processes can be split across multiple processors.
Thread Libraries: A thread library provides programmers with an API for creating and
managing threads. Thread libraries may be implemented either in user space or in kernel
space. A user-space library involves API functions implemented solely within user space, with no
kernel support. A kernel-space library involves system calls, and requires a kernel with thread
library support.
Three main thread libraries:
1. POSIX Pthreads may be provided as either a user or kernel library, as an extension to
the POSIX standard.
2. Win32 threads are provided as a kernel-level library on Windows systems.
3. Java threads: Since Java generally runs on a Java Virtual Machine, the implementation
of threads is based upon whatever OS and hardware the JVM is running on, i.e. either
Pthreads or Win32 threads depending on the system.
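A minimal Pthreads example, creating two threads and joining them (compile on Linux with gcc file.c -pthread):

#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg) {
    printf("thread %ld running\n", (long)arg); /* each thread has its own stack and registers */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    puts("main: all threads done");
    return 0;
}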
Benefits of Multithreading
1. Responsiveness
2. Resource sharing, hence allowing better utilization of resources.
3. Economy. Creating and managing threads becomes easier.
4. Scalability. One thread runs on one CPU. In Multithreaded processes, threads can be
distributed over a series of processors to scale.
5. Context Switching is smooth. Context switching refers to the procedure followed by CPU
to change from one task to another.
11. MULTIPLE-PROCESSOR SCHEDULING
In multiple-processor scheduling, multiple CPUs are available. The OS can use any available
processor to run any process in the queue; hence multiple processes can execute in parallel.
However multiple processor scheduling is more complex as compared to single processor
scheduling.
Approaches to Multiple-Processor Scheduling:
i. Asymmetric Multiprocessing vs Symmetric Multiprocessing: In Asymmetric
Multiprocessing, all the scheduling decisions and I/O processing are handled by a single
processor called the master server, while the other processors execute only user code.
This approach is simple and reduces the need for data sharing.
In Symmetric Multiprocessing, each processor is self scheduling. All processes may be
in a common ready queue or each processor may maintain its own private queue for ready
processes. The scheduling proceeds further by having the scheduler for each processor
examine the ready queue and select a process to execute.
ii. Processor Affinity: When a process runs on a specific processor there are certain effects
on the cache memory. The data most recently accessed by the process populate the cache for
the processor and as a result successive memory access by the process is often satisfied in the
cache memory. Now if the process migrates to another processor, the contents of the cache
memory must be invalidated for the first processor and the cache for the second processor
must be repopulated. Because of the high cost of invalidating and repopulating caches, most
of the SMP(symmetric multiprocessing) systems try to avoid migration of processes from one
processor to another and try to keep a process running on the same processor. This is known
as PROCESSOR AFFINITY. There are two types of processor affinity:
a. Soft Affinity: When an operating system has a policy of attempting to keep a process
running on the same processor but not guaranteeing it will do so, this situation is called
soft affinity.
b. Hard Affinity: Some operating systems such as Linux provide system calls (e.g.,
sched_setaffinity) to support hard affinity, which allows a process to specify the set of
processors on which it may run.
iii. Load Balancing: Load balancing is the activity of keeping the workload evenly
distributed across all processors in an SMP system. On SMP (symmetric multiprocessing) systems, it
is important to keep the workload balanced among all processors to fully utilize the benefits of
having more than one processor; otherwise one or more processors will sit idle while other processors
have high workloads, along with lists of processes awaiting the CPU. There are two general
approaches to load balancing :
a. Push Migration – In push migration, a specific task routinely checks the load on each
processor; if it finds an imbalance, it evenly distributes the load by moving processes
from overloaded processors to idle or less busy ones.
b. Pull Migration – Pull Migration occurs when an idle processor pulls a waiting task
from a busy processor for its execution.
iv. Multicore Processors: In multicore processors, multiple processor cores are placed on the
same physical chip. Each core has a register set to maintain its architectural state and thus
appears to the operating system as a separate physical processor. SMP systems that use
multicore processors are faster and consume less power than systems in which each processor
has its own physical chip.
However, multicore processors may complicate the scheduling problem. When a
processor accesses memory, it can spend a significant amount of time waiting for the data to
become available. This situation is called a MEMORY STALL. It occurs for various reasons,
such as a cache miss (accessing data that is not in the cache memory). In such cases
the processor can spend up to fifty percent of its time waiting for data to become available
from the memory. To solve this problem recent hardware designs have implemented
multithreaded processor cores in which two or more hardware threads are assigned to each
core. Therefore, if one thread stalls while waiting for memory, the core can switch to another
thread. There are two ways to multithread a processor:
a. Coarse-Grained Multithreading: In coarse-grained multithreading, a thread executes
on a processor until a long-latency event such as a memory stall occurs. Because of
the delay caused by the long-latency event, the processor then switches to another
thread to begin execution. The cost of switching between threads is large.
b. Fine-Grained Multithreading: Fine-grained multithreading switches between threads at a
much finer granularity, typically at the boundary of an instruction cycle. The architectural
design of fine-grained systems includes logic for thread switching, and as a result the
cost of switching between threads is small.
v. Virtualization and Threading: In this type of multiple-processor scheduling even a
single CPU system acts like a multiple-processor system. In a system with Virtualization, the
virtualization presents one or more virtual CPU to each of virtual machines running on the
system and then schedules the use of physical CPU among the virtual machines. Most
virtualized environments have one host operating system and many guest operating systems.
The host operating system creates and manages the virtual machines. Each virtual machine
has a guest operating system installed and applications run within that guest. Each guest
operating system may be assigned for specific use cases, applications or users including time
sharing or even real-time operation. Any guest operating-system scheduling algorithm that
assumes a certain amount of progress in a given amount of time will be negatively impacted
by the virtualization. The net effect of such scheduling layering is that individual virtualized
operating systems receive only a portion of the available CPU cycles, even though they
believe they are receiving all cycles and that they are scheduling all of those cycles.
Commonly, the time-of-day clocks in virtual machines are incorrect because timers take
longer to trigger than they would on dedicated CPUs. Virtualization can thus undo the good
scheduling-algorithm efforts of the operating systems within virtual machines.
12. PROCESS MANAGEMENT SYSTEM CALLS - fork, exit, wait,
waitpid, exec
The process management is done with a number of system calls, each with a single (simple)
purpose. These system calls can then be combined to implement more complex behaviors. The
basic process management system calls are: fork( ), exec( ), wait( ), waitpid( ), exit( ) etc.
System Call | Purpose
fork( ) | It is used to create a new process.
exec( ) | It runs a new program.
wait( ) | It is used by the parent process to know the exit status of the child.
waitpid( ) | It is used by the parent process to know the exit status of a particular child.
exit( ) | It is used to terminate the running process.
fork( ) : This system call is used to create a new process. A parent process uses fork to
create a new child process. The child process is an almost-exact duplicate of the parent process.
After fork, both parent and child execute the same program, but in separate processes.
pid_t pid = fork( );
On successful execution of fork( ), the process ID (PID) of the child is returned to the parent,
and 0 is returned to the child. On failure, -1 is returned to the parent, no child process is
created, and errno is set appropriately.
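A minimal example showing the two return values of fork( ):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork failed");                       /* no child was created */
    } else if (pid == 0) {
        printf("child: my pid is %d\n", getpid());   /* fork returned 0 in the child */
    } else {
        printf("parent: created child %d\n", pid);   /* fork returned the child's PID */
    }
    return 0;
}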
exec( ): When a child process doesn't want to execute the same program as the parent, it
uses this system call. It loads a new program image into the current process space and runs
it from the entry point. This is known as an overlay. In this case, a new process is not created,
but the data, heap, stack, etc. of the process are replaced by those of the new program.
There is no single exec( ) function; exec is a family of calls such as execl( ), execv( ),
execvp( ) and execve( ), for example:
int execv(const char *path, char *const argv[]);
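A typical fork-then-exec sketch, where the child overlays itself with the ls program:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        execl("/bin/ls", "ls", "-l", (char *)NULL); /* replaces the child's image */
        perror("execl failed");                     /* reached only if exec fails */
        return 1;
    }
    /* the parent continues executing the original program */
    return 0;
}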
wait( ): This system call is used by the parent process to obtain the exit status of a child
process. When this system call is used, the execution of the parent process is suspended until
its child terminates. The signature of wait( ) is :
pid_t wait(int *status);
On success, it returns the PID of the terminated child. On failure (no child), it returns -1.
waitpid( ): When a parent process has more than one child process, the waitpid( ) system
call is used by the parent process to know the termination state of a particular child. The
signature of waitpid( ) is :
pid_t waitpid(pid_t pid, int *status, int options);
exit( ): The exit( ) system call is used by a program to terminate its execution. The operating
system reclaims resources that were used by the process after the exit( ) system call.
void exit(int status);
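A short example tying fork( ), exit( ) and waitpid( ) together: the child exits with status 7, and the parent collects that status:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        printf("child %d exiting\n", getpid());
        exit(7);                          /* child terminates with status 7 */
    }
    int status;
    waitpid(pid, &status, 0);             /* suspend until this child ends */
    if (WIFEXITED(status))
        printf("parent: child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}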
Zombie process: A zombie process or defunct process is a process that has completed
execution but still has an entry in the process table. This occurs for the child processes, where the
entry is still needed to allow the parent process to read its child's exit status: once the exit status
is read by parent process via the wait( ) system call, the zombie's entry is removed from the
process table and it is said to be "reaped". A child process always first becomes a zombie before
being removed from the process table.
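A zombie can be observed with a small demo: the child exits immediately, but the parent delays its wait( ), so for about ten seconds the child shows up as <defunct> in ps:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(0);       /* child terminates at once */
    sleep(10);         /* during this window the child is a zombie */
    wait(NULL);        /* parent reaps it; the process-table entry is removed */
    return 0;
}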
Orphan process: An orphan process is a running child process whose parent process has
completed execution or terminated; on Unix-like systems, orphans are adopted by the init process.