Operating Systems -
Module II
Process Management –
Multithreading, Scheduling,
Synchronization
DR. RAMESH BABU , DEPT OF AIML , RNSIT
Chapter 4: Threads
Overview
Multicore Programming
Multithreading Models
Thread Libraries
Implicit Threading
Threading Issues
Operating System Examples
Objectives
To introduce the notion of a thread—a fundamental unit
of CPU utilization that forms the basis of multithreaded
computer systems
To discuss the APIs for the Pthreads, Windows, and Java
thread libraries
To explore several strategies that provide implicit
threading
To examine issues related to multithreaded programming
To cover operating system support for threads in Windows
and Linux
Motivation
Most modern applications are multithreaded
Threads run within an application
Multiple tasks within the application can be implemented by
separate threads
Update display
Fetch data
Spell checking
Answer a network request
Process creation is heavy-weight while thread creation is light-weight
Can simplify code, increase efficiency
Kernels are generally multithreaded
Multithreaded Server Architecture
Benefits
Responsiveness – may allow continued execution if part of process is
blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier than
shared memory or message passing
Economy – cheaper than process creation, thread switching lower
overhead than context switching
Scalability – process can take advantage of multiprocessor
architectures
Multicore Programming
Multicore or multiprocessor systems are putting pressure on
programmers; challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Parallelism implies a system can perform more than one task
simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
Multicore Programming (Cont.)
Types of parallelism
Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
Task parallelism – distributing threads across cores, each thread
performing unique operation
As # of threads grows, so does architectural support for threading
CPUs have cores as well as hardware threads
Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads
per core
Concurrency vs. Parallelism
Concurrent execution on single-core system:
Parallelism on a multi-core system:
Single and Multithreaded Processes
User Threads and Kernel Threads
User threads - management done by user-level threads library
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Kernel threads - Supported by the Kernel
Examples – virtually all general purpose operating systems, including:
Windows
Solaris
Linux
Tru64 UNIX
Mac OS X
• User threads are the threads that application programmers
would put into their programs. They are supported above the
kernel, without kernel support.
• Kernel threads are supported within the kernel of the OS itself.
All modern OS support kernel level threads, allowing the kernel
to perform multiple tasks simultaneously.
Multithreading Models
1. Many-to-One
2. One-to-One
3. Many-to-Many
Many-to-One
Many user-level threads mapped to
single kernel thread
One thread blocking causes all to
block
Multiple threads may not run in
parallel on a multicore system because
only one may be in the kernel at a time
Many-to-One
Thread management is handled by the thread
library in user space, which is very efficient.
a) If one thread makes a blocking system call,
the entire process blocks, preventing the other
user threads from continuing execution.
b) Only one user thread can access the kernel at
a time, since there is only one kernel thread;
thus threads cannot run in parallel on
multiprocessors.
Examples:
a) Green threads of Solaris
b) GNU Portable Threads
implement the many-to-one model.
One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
The one-to-one model creates a separate kernel
thread to handle each user thread.
The one-to-one model overcomes the problems listed
above involving blocking system calls, and it allows
threads to run in parallel on multiple CPUs.
However, creating a kernel thread for every user
thread adds management overhead and can slow
the system down, so most implementations of this
model place a limit on the number of threads
created.
More concurrency than many-to-one
Number of threads per process sometimes restricted due
to overhead
Examples
Windows
Linux
Solaris 9 and later
Many-to-Many Model
Allows many user level threads
to be mapped to many kernel
threads
Allows the operating system to
create a sufficient number of
kernel threads
Solaris prior to version 9
Windows with the ThreadFiber
package
Many-to-Many Model
Users have no restrictions on the number
of threads created.
a) Blocking kernel system calls do not
block the entire process.
b) Processes can be split across
multiple processors.
c) Individual processes may be
allocated variable numbers of kernel
threads, depending on the number of
CPUs present and other factors.
d) This model is also called the two-tier
model.
e) It is supported by operating systems
such as IRIX, HP-UX, and Tru64 UNIX.
Two-level Model
Similar to M:M, except that it allows a user thread to
be bound to a kernel thread
Examples
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
Thread Libraries
Thread library provides programmer with API for
creating and managing threads
Two primary ways of implementing
Library entirely in user space
Kernel-level library supported by the OS
Pthreads
May be provided either as user-level or kernel-level
A POSIX standard (IEEE 1003.1c) API for thread creation
and synchronization
Specification, not implementation
API specifies behavior of the thread library,
implementation is up to the developers of the library
Common in UNIX operating systems (Solaris, Linux, Mac OS
X)
Pthreads Example
Pthreads Example (Cont.)
Pthreads Code for Joining 10 Threads
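The slides present these programs as images. As a stand-in, here is a minimal sketch of a Pthreads program that creates ten threads and joins them (the work done inside each thread is illustrative only):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 10

/* Each thread runs this function; the argument is its index. */
void *runner(void *param) {
    long id = (long) param;
    printf("thread %ld running\n", id);
    pthread_exit(0);
}

int main(void) {
    pthread_t workers[NUM_THREADS];

    /* Create the threads with default attributes. */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&workers[i], NULL, runner, (void *) i);

    /* Join the 10 threads, waiting for each to finish. */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(workers[i], NULL);

    printf("all threads joined\n");
    return 0;
}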
Windows Multithreaded C Program
Windows Multithreaded C Program (Cont.)
Java Threads
Java threads are managed by the JVM
Typically implemented using the threads model provided
by underlying OS
Java threads may be created by:
Extending Thread class
Implementing the Runnable interface
Java Multithreaded Program
Java Multithreaded Program (Cont.)
Implicit Threading
Growing in popularity as numbers of threads increase,
program correctness more difficult with explicit threads
Creation and management of threads done by
compilers and run-time libraries rather than programmers
Three methods explored
Thread Pools
OpenMP
Grand Central Dispatch
Other methods include Microsoft Threading Building
Blocks (TBB), java.util.concurrent package
Thread Pools
Create a number of threads in a pool where they await work
Advantages:
Usually slightly faster to service a request with an existing thread than create
a new thread
Allows the number of threads in the application(s) to be bound to the size of
the pool
Separating task to be performed from mechanics of creating task allows
different strategies for running task
e.g., tasks could be scheduled to run periodically
Windows API supports thread pools:
In a multithreaded process, a thread is created for every service, e.g., a web
server creates a thread to service every client request.
Creating a new thread each time one is needed and deleting it when it is
done can be inefficient, because:
Time is consumed in creation of the thread.
A limit has to be placed on the number of active threads in the system.
Unlimited thread creation may exhaust system resources.
An alternative solution is to create a number of threads when the process
first starts, and put those threads into a thread pool.
• Threads are allocated from the pool when a request comes, and
returned to the pool when no longer needed (after the completion of
the request).
• When no threads are available in the pool, the process may have to
wait until one becomes available.
Benefits of a thread pool:
No time is spent creating a thread; the request is serviced by a
thread that already exists in the pool, which is faster than waiting
for a new thread to be created.
The thread pool limits the number of threads in the system. This
is important on systems that cannot support a large number
of concurrent threads.
The (maximum) number of threads available in a thread
pool may be determined by parameters such as the number of
CPUs in the system, the amount of memory, and the expected
number of client requests.
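As an illustration of the idea (not the Windows thread-pool API referred to above), here is a minimal pthread-based pool sketch; the worker count, queue size, and service_request() task are invented for the example:

#include <pthread.h>
#include <stdio.h>

#define WORKERS 4
#define QSIZE   16

typedef void (*task_fn)(int);

static task_fn queue_fn[QSIZE];
static int     queue_arg[QSIZE];
static int     head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Workers sleep until a task arrives, then run it. */
static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        task_fn fn = queue_fn[head];
        int a = queue_arg[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&lock);
        fn(a);                         /* run the task outside the lock */
    }
    return NULL;
}

/* Submit a task to the pool (silently drops it if the queue is full). */
static void submit(task_fn fn, int arg) {
    pthread_mutex_lock(&lock);
    if (count < QSIZE) {
        queue_fn[tail] = fn;
        queue_arg[tail] = arg;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&not_empty);
    }
    pthread_mutex_unlock(&lock);
}

static void service_request(int id) { printf("servicing request %d\n", id); }

int main(void) {
    pthread_t pool[WORKERS];
    for (int i = 0; i < WORKERS; i++)  /* threads created once, at startup */
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        submit(service_request, i);    /* requests reuse the pooled threads */
    pthread_exit(NULL);                /* exit main but let workers keep running */
}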
OpenMP
Set of compiler directives and an API for C,
C++, FORTRAN
Provides support for parallel programming
in shared-memory environments
Identifies parallel regions – blocks of code
that can run in parallel
Create as many threads as there are cores:
#pragma omp parallel

Run the for loop in parallel:
#pragma omp parallel for
for (i = 0; i < N; i++) {
    c[i] = a[i] + b[i];
}
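A complete, compilable version of the fragment above might look as follows (array size N is arbitrary; build with a flag such as gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

#define N 1000

int main(void) {
    double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    /* Creates as many threads as there are cores. */
    #pragma omp parallel
    printf("hello from thread %d\n", omp_get_thread_num());

    /* Splits the loop iterations across the thread team. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}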
Threading Issues
1. Semantics of fork() and exec() system calls
2. Signal handling
1. Synchronous
2. Asynchronous
3. Thread cancellation of target thread
1. Asynchronous
2. Deferred
4. Thread-local storage
5. Scheduler Activations
Semantics of fork() and exec()
The fork() system call is used to create a separate,
duplicate process. When a thread in a program calls fork(), either:
✓ The new process is a copy of the parent, with all of its
threads, or
✓ The new process is a copy of only the single thread that
invoked fork().
✓ If the thread then invokes the exec() system call, the program
specified in the parameter to exec() replaces the entire
process, including all of its threads.
Does fork() duplicate only the calling thread or all
threads?
Some UNIXes have two versions of fork
exec() usually works as normal – it replaces the running
process including all threads
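A small sketch of the usual pattern on a UNIX system (the /bin/ls program is just an example): fork() creates the child, which contains only the calling thread, and exec() then replaces the whole child process image:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* child contains only the calling thread */
    if (pid == 0) {
        execlp("/bin/ls", "ls", (char *) NULL);  /* replaces the entire process */
        perror("exec failed");       /* reached only if exec() fails */
        return 1;
    }
    wait(NULL);                      /* parent waits for the child */
    return 0;
}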
Signal Handling
Signals are used in UNIX systems to notify a process that a particular
event has occurred.
A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. default
2. user-defined
Every signal has default handler that kernel runs when handling signal
User-defined signal handler can override default
For single-threaded programs, signals are delivered to the process
Signal Handling (Cont.)
All signals follow the same path:
1) A signal is generated by the occurrence of a particular event.
2) The generated signal is delivered to a process.
3) Once delivered, the signal must be handled.
A signal can be received in two ways:
synchronously or asynchronously.
Synchronous signal – delivered to the same process that caused it.
E.g., illegal memory access, divide-by-zero error.
Asynchronous signal – generated by an event external to the running process.
E.g., Ctrl-C.
Signal Handling (Cont.)
Where should a signal be delivered for multi-threaded?
Deliver the signal to the thread to which the signal
applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the
process
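As a sketch of the first option, Pthreads lets a program direct a signal at one specific thread with pthread_kill(); the handler here is a user-defined one installed with sigaction() (the worker loop is illustrative):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void handler(int sig) {
    /* only async-signal-safe calls belong here; write() is safe */
    write(1, "signal received\n", 16);
}

static void *worker(void *arg) {
    for (;;)
        pause();                     /* sleep until a signal arrives */
    return NULL;
}

int main(void) {
    struct sigaction sa = { .sa_handler = handler };
    sigaction(SIGUSR1, &sa, NULL);   /* install the user-defined handler */

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_kill(t, SIGUSR1);        /* deliver the signal to that one thread */
    pthread_join(t, NULL);           /* (never returns in this sketch) */
    return 0;
}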
Thread Cancellation
Terminating a thread before it has finished
Thread to be canceled is target thread
Two general approaches:
Asynchronous cancellation terminates the target thread
immediately
Deferred cancellation allows the target thread to periodically
check if it should be cancelled
Pthread code to create and cancel a thread:
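The slide's code image is not reproduced here; a minimal equivalent sketch, using deferred (default) cancellation:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    for (;;) {
        /* with deferred cancellation, the thread can only actually be
           cancelled at a cancellation point such as this one */
        pthread_testcancel();
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);  /* create the target thread */
    sleep(1);
    pthread_cancel(tid);                       /* request cancellation */
    pthread_join(tid, NULL);                   /* wait for it to terminate */
    printf("target thread cancelled\n");
    return 0;
}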
Thread Cancellation (Cont.)
Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
If thread has cancellation disabled, cancellation remains pending until
thread enables it
Default type is deferred
Cancellation only occurs when thread reaches cancellation point
I.e. pthread_testcancel()
Then the cleanup handler is invoked
On Linux systems, thread cancellation is handled through signals
Terminating a thread before it has completed its task is called thread
cancellation. The thread to be cancelled is called the target thread.
Example: the multiple threads loading a web page are cancelled
when the browser window is closed.
Threads that are no longer needed may be cancelled in one of two ways:
1. Asynchronous cancellation – cancels the thread immediately.
2. Deferred cancellation – the target thread periodically checks whether
it should terminate, giving it an opportunity to terminate
itself in an orderly fashion.
With deferred cancellation, the operating system can reclaim all of the
thread's resources before cancellation.
Thread-Local Storage
Thread-local storage (TLS) allows each thread to
have its own copy of data
Useful when you do not have control over the
thread creation process (i.e., when using a thread
pool)
Different from local variables
Local variables visible only during single function
invocation
TLS visible across function invocations
Similar to static data
TLS is unique to each thread
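A tiny sketch using the GCC/Clang __thread qualifier (C11 spells it _Thread_local); each thread, including main, gets its own copy of counter:

#include <pthread.h>
#include <stdio.h>

static __thread int counter = 0;   /* one copy per thread, unlike a plain static */

static void *worker(void *arg) {
    for (int i = 0; i < 3; i++)
        counter++;                 /* updates this thread's copy only */
    printf("thread %ld: counter = %d\n", (long) arg, counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *) 1L);
    pthread_create(&t2, NULL, worker, (void *) 2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main: counter = %d\n", counter);   /* still 0 in main's copy */
    return 0;
}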
Scheduler Activations
Both M:M and Two-level models require
communication to maintain the appropriate number
of kernel threads allocated to the application
Typically use an intermediate data structure between
user and kernel threads – lightweight process (LWP)
Appears to be a virtual processor on which process can
schedule user thread to run
Each LWP attached to kernel thread
How many LWPs to create?
Scheduler activations provide upcalls - a
communication mechanism from the kernel to the
upcall handler in the thread library
This communication allows an application to maintain
the correct number of kernel threads
Operating System Examples
Windows Threads
Linux Threads
Windows Threads
Windows implements the Windows API – primary API for Win 98, Win
NT, Win 2000, Win XP, and Win 7
Implements the one-to-one mapping, kernel-level
Each thread contains
A thread id
Register set representing state of processor
Separate user and kernel stacks for when thread runs in user
mode or kernel mode
Private data storage area used by run-time libraries and dynamic
link libraries (DLLs)
The register set, stacks, and private storage area are known as the
context of the thread
Windows Threads (Cont.)
The primary data structures of a thread include:
ETHREAD (executive thread block) – includes pointer to
process to which thread belongs and to KTHREAD, in
kernel space
KTHREAD (kernel thread block) – scheduling and
synchronization info, kernel-mode stack, pointer to TEB,
in kernel space
TEB (thread environment block) – thread id, user-mode
stack, thread-local storage, in user space
Windows Threads Data Structures
Linux Threads
Linux refers to them as tasks rather than threads
Thread creation is done through clone() system call
clone() allows a child task to share the address space of the
parent task (process)
Flags control behavior
struct task_struct points to process data structures (shared or
unique)
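A rough sketch of clone() with thread-like sharing flags (Linux-specific; the stack size is arbitrary and error handling is omitted):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg) {
    printf("child task sharing the parent's address space\n");
    return 0;
}

int main(void) {
    const int STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);

    /* CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND makes the new task
       share address space, filesystem info, open files and signal handlers
       with its parent, i.e., behave much like a thread */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, NULL);  /* stack grows down */

    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}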
End of THREADING Concepts (Ch 4)
Process Scheduling –
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-Processor Scheduling
• Thread Scheduling
Basic Concepts
CPU Burst
I/O Burst
Scheduler
Dispatcher
Basic Concepts
In a single-processor system, only one process can run at a time; other
processes must wait until the CPU is free. The objective of
multiprogramming is to have some process running at all times,
to maximize CPU utilization.
In multiprogramming, several processes are kept in memory at one time.
When one process has to wait, the operating system takes the CPU away
from that process and gives the CPU to another process. This pattern
continues. Every time one process has to wait, another process can take
over use of the CPU. Scheduling of this kind is a fundamental operating-
system function. Almost all computer resources are scheduled before use.
The CPU is one of the primary computer resources. Thus, its scheduling is
central to operating-system design.
CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU
execution and I/O wait.
The period a process spends executing on the CPU is
called a CPU burst, and the period it spends issuing and
waiting for I/O is called an I/O burst.
Processes alternate between these two states.
Process execution begins with a CPU burst. That
is followed by an I/O burst, which is followed by
another CPU burst, then another I/O burst, and
so on.
Eventually, the final CPU burst ends with a
system request to terminate execution.
CPU Scheduler
Whenever the CPU becomes idle, the operating
system must select one of the processes from
the ready queue to be executed. The selection
process is carried out by the short-term
scheduler (or CPU scheduler). The scheduler
selects a process from the processes in memory
that are ready to execute and allocates the
CPU to that process.
A ready queue can be implemented as a FIFO
queue, a priority queue, a tree, or simply an
unordered linked list. All the processes in the
ready queue are lined up waiting for a chance
to run on the CPU. The records in the queues are
generally process control blocks (PCBs) of the
processes.
Non - Preemptive Scheduling – once the CPU has been
allocated to a process, the process keeps the CPU until it
releases the CPU either by terminating or by switching to
the waiting state.
Preemptive Scheduling – the CPU may be taken away from the
process in the middle of its execution, for example when a
higher-priority process becomes ready or the time slice expires.
Dispatcher
Another component involved in the CPU-scheduling function is
the dispatcher. The dispatcher is the module that gives control
of the CPU to the process selected by the short-term scheduler.
This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that
program
The dispatcher should be as fast as possible, since it is invoked
during every process switch. The time it takes for the
dispatcher to stop one process and start another running is
known as the dispatch latency.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution
per time unit
Turnaround time – amount of time to execute a particular
process
Waiting time – amount of time a process has been waiting in
the ready queue
Response time – amount of time it takes from when a request
was submitted until the first response is produced, not output
(for time-sharing environment)
Different CPU scheduling algorithms have different properties, and the choice of a particular
algorithm may favour one class of processes over another. Many criteria have been
suggested for comparing CPU scheduling algorithms. The criteria include the following:
1. CPU utilization - The CPU must be kept as busy as possible. Conceptually, CPU utilization
can range from 0 to 100 percent. In a real system, it should range from 40 to 90 percent.
2. Throughput - If the CPU is busy executing processes, then work is being done. One measure
of work is the number of processes that are completed per time unit, called throughput.
3. Turnaround time - From the point of view of a particular process, the important criterion is
how long it takes to execute that process. The interval from the time of submission of a
process to the time of completion is the turnaround time. Turnaround time is the sum of the
periods spent waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O.
4. Waiting time - The total amount of time the process spends waiting in the ready queue.
5. Response time - The time taken from the submission of a request until the first response is
produced is called the response time. It is the time taken to start responding. In interactive
systems, response time is the preferred criterion.
Scheduling Algorithms
The Types of Scheduling Algorithms are
1. FCFS (First Come First Serve)
2. SJF (Shortest Job First)
   Non-preemptive SJF
   Preemptive SJF (SRTF - Shortest Remaining Time First)
3. Priority
   Non-preemptive priority
   Preemptive priority
4. Round Robin
1. First-Come, First-Served Scheduling
Other names of this algorithm are:
• First-In-First-Out (FIFO)
• Run-to-Completion
• Run-Until-Done
First-Come-First-Served is the simplest scheduling algorithm. Processes are
dispatched according to their arrival time on the ready queue. This algorithm is
always nonpreemptive: once a process is assigned to the CPU, it runs to completion.
Advantages:
• more predictable than other schemes, since processes run strictly in order of arrival
• code for FCFS scheduling is simple to write and understand
Disadvantages:
• short jobs may have to wait for a long time
• important jobs (with higher priority) have to wait
• cannot guarantee good response time
• average waiting time and turnaround time are often quite long
• lower CPU and device utilization
1. First-Come, First-Served Scheduling
Example:

Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive at time 0 in the order: P1, P2, P3.
The Gantt chart for the schedule is:

| P1                      | P2 | P3 |
0                         24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
1. First-Come, First-Served Scheduling
Suppose that the processes arrive in the order: P2, P3, P1.
The Gantt chart for the schedule is:
| P2 | P3 | P1                      |
0    3    6                         30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
In the first ordering, there is a convoy effect: all the short processes wait for the completion of one
big process, resulting in lower CPU and device utilization.
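The hand calculations generalize: with simultaneous arrivals under FCFS, each process waits for the total burst time of everything ahead of it. A quick sketch (burst values taken from the second ordering above):

#include <stdio.h>

int main(void) {
    int burst[] = {3, 3, 24};          /* arrival order P2, P3, P1 */
    int n = 3, waited = 0, total = 0;

    for (int i = 0; i < n; i++) {
        total += waited;               /* each waits for all earlier bursts */
        printf("process %d waits %d\n", i + 1, waited);
        waited += burst[i];
    }
    printf("average waiting time = %.2f\n", (double) total / n);
    return 0;
}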
2. Shortest-Job-First Scheduling
This algorithm associates with each process, the length of the process's next
CPU burst. When the CPU is available, it is assigned to the process that has the
smallest next CPU burst. If the next CPU bursts of two processes are the same,
FCFS scheduling is used to break the tie.
As an example of SJF scheduling, consider the following set of processes, with
the length of the CPU burst given in milliseconds (these values are consistent
with the waiting times computed below):

Process   Burst Time
P1        6
P2        8
P3        7
P4        3
2. Shortest-Job-First Scheduling
Using SJF scheduling, we would schedule these processes according to the
following Gantt chart:

| P4 | P1     | P3      | P2       |
0    3       9        16         24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average
waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
The SJF scheduling algorithm is probably optimal, in that it gives the minimum
average waiting time for a given set of processes. Moving a short process before a
long one decreases the waiting time of the short process more than it increases the
waiting time of the long process. Consequently, the average waiting time
decreases.
2. Shortest-Job-First Scheduling
➢ Although the SJF algorithm is optimal, it cannot be implemented at the
level of short-term CPU scheduling. There is no way to know the length
of the next CPU burst. The next CPU burst is generally predicted as an
exponential average of the measured lengths of previous CPU bursts.
➢ The SJF algorithm can be either preemptive or nonpreemptive.
➢ The choice arises when a new process arrives at the ready queue
while a previous process is still executing. The next CPU burst of the
newly arrived process may be shorter than what is left of the currently
executing process.
➢ A preemptive SJF algorithm will preempt the currently executing
process, whereas a nonpreemptive SJF algorithm will allow the
currently running process to finish its CPU burst.
➢ Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.
➢ As an example, consider the following four processes, with the length
of the CPU burst given in milliseconds:
2. Shortest-Job-First Scheduling
➢ As an example, consider the following four processes, with the length of the CPU burst
given in milliseconds:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
➢ If the processes arrive at the ready queue at the times shown and need the indicated
burst times, then the resulting preemptive SJF schedule is as depicted in the following
Gantt chart:

| P1 | P2      | P4       | P1            | P3                 |
0    1        5         10              17                  26
➢ Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives
at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time
required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is
scheduled. The average waiting time for this example is
((10-1) + (1-1) + (17-2) + (5-3))/4 = 26/4 = 6.5 milliseconds.
➢ Nonpreemptive SJF scheduling would result in an average waiting time of 7.75
milliseconds.
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (preemptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4       | P1            |
0    2    4    5    7         11              16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling
algorithm. A priority is associated with each process, and the CPU is
allocated to the process with the highest priority. Equal-priority
processes are scheduled in FCFS order. An SJF algorithm is simply a
priority algorithm where the priority (p) is the inverse of the (predicted)
next CPU burst. The larger the CPU burst, the lower the priority, and vice
versa.
Each process has a priority number (an integer) associated with it.
The CPU is allocated to the process with the highest priority
(smallest integer = highest priority) [0 = highest priority, 7 = lowest priority]
Priority Scheduling
Priority scheduling can be either preemptive or nonpreemptive. When a
process arrives at the ready queue, its priority is compared with the
priority of the currently running process. A preemptive priority scheduling
algorithm will preempt the CPU if the priority of the newly arrived process
is higher than the priority of the currently running process. A
nonpreemptive priority scheduling algorithm will simply put the new
process at the head of the ready queue.
Non - Preemptive Scheduling – once the CPU has been allocated to a
process, the process keeps the CPU until it releases the CPU either by
terminating or by switching to the waiting state.
Preemptive Scheduling – The process under execution, may be released
from the CPU, in the middle of execution due to some inconsistent state of
the process.
Non-preemptive Priority
Process   Arrival Time   Burst Time   Priority   TT = TT' - AT   WT = TT - BT
P1        0              8            3
P2        1              4            1
P3        2              9            0
P4        3              5            1

(TT' is the completion time, TT the turnaround time, WT the waiting time.)
Non-preemptive Priority

Process   Arrival Time   Burst Time   Priority   TT = TT' - AT   WT = TT - BT
P1        0              8            3          8-0 = 8         8-8 = 0
P2        1              4            1          21-1 = 20       20-4 = 16
P3        2              9            0          17-2 = 15       15-9 = 6
P4        3              5            1          26-3 = 23       23-5 = 18
Preemptive Priority

When a new process arrives:
1. Stop the execution of the currently executing process
2. Compare the priority of the newly arrived process and the currently running process
3. The process with the higher priority is executed

Process   Arrival Time   Burst Time   Priority   TT = TT' - AT   WT = TT - BT
P1        0              8            3
P2        1              4            1
P3        2              9            0
P4        3              5            1
Preemptive Priority

When a new process arrives:
1. Stop the execution of the currently executing process
2. Compare the priority of the newly arrived process and the currently running process
3. The process with the higher priority is executed

Process   Arrival Time   Burst Time   Priority   TT = TT' - AT   WT = TT - BT
P1        0              8            3          26-0 = 26       26-8 = 18
P2        1              4            1          14-1 = 13       13-4 = 9
P3        2              9            0          11-2 = 9        9-9 = 0
P4        3              5            1          19-3 = 16       16-5 = 11

Avg. TT = 16 msec
Avg. WT = 9.5 msec
Drawbacks of Priority Scheduling

A major problem with priority scheduling algorithms is indefinite
blocking, or starvation. A process that is ready to run but waiting for
the CPU can be considered blocked. A priority scheduling algorithm
can leave some low- priority processes waiting indefinitely.
In a heavily loaded computer system, a steady stream of higher-
priority processes can prevent a low-priority process from ever
getting the CPU.
A solution to the problem of indefinite blockage of low-priority
processes is aging. Aging is a technique of gradually increasing the
priority of processes that wait in the system for a long time.
Problem: Starvation – low-priority processes may never execute.
Solution: Aging – as time progresses, increase the priority of the
process.
Round-Robin Scheduling
Process   Arrival Time   Burst Time   Priority   TT = TT' - AT   WT = TT - BT
P1        0              8            3
P2        1              4            1
P3        2              9            0
P4        3              5            1
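The worked Gantt charts for this example appear only as images in the original slides. In round robin, each process runs for at most one time quantum before being moved to the back of the ready queue. The sketch below simulates that rule for the table above; the 4 ms quantum is an assumption, since the slides' value is not recoverable:

#include <stdio.h>

#define N 4
#define QUANTUM 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {8, 4, 9, 5};
    int rem[N], finish[N];
    int queue[64], qh = 0, qt = 0;     /* simple FIFO ready queue */
    int time = 0, done = 0, next = 0;

    for (int i = 0; i < N; i++) rem[i] = burst[i];
    queue[qt++] = 0; next = 1;         /* P1 is ready at time 0 */

    while (done < N) {                 /* queue never empties for this data */
        int p = queue[qh++];
        int slice = rem[p] < QUANTUM ? rem[p] : QUANTUM;
        time += slice;
        rem[p] -= slice;
        while (next < N && arrival[next] <= time)
            queue[qt++] = next++;      /* admit arrivals during the slice */
        if (rem[p] > 0)
            queue[qt++] = p;           /* back of the ready queue */
        else {
            finish[p] = time;
            done++;
        }
    }
    for (int i = 0; i < N; i++) {
        int tt = finish[i] - arrival[i];
        printf("P%d: TT = %2d, WT = %2d\n", i + 1, tt, tt - burst[i]);
    }
    return 0;
}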
Multilevel Queue Scheduling
Ready queue is partitioned into separate queues:
Eg - foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; e.g., 80% to the foreground queue in RR and
20% to the background queue in FCFS.
E.g., out of every 50 msec of processor time, 40 msec is given to the
foreground queue and 10 msec to the background queue.
Multilevel Feedback Queue Scheduling
Allows a process to move between the various queues; aging can be
implemented this way
Multilevel Feedback Queue Scheduling
Normally, when the multilevel queue scheduling
algorithm is used, processes are permanently
assigned to a queue when they enter the system.
The multilevel feedback-queue scheduling
algorithm, in contrast, allows a process to move
between queues.
The idea is to separate processes according to the
characteristics of their CPU bursts. If a process uses
too much CPU time, it will be moved to a lower-
priority queue.
This scheme leaves I/O-bound and interactive
processes in the higher-priority queues. In addition, a
process that waits too long in a lower-priority queue
may be moved to a higher-priority queue. This form of
aging prevents starvation
Multilevel Feedback Queue Scheduling
For example, consider a multilevel feedback-queue
scheduler with three queues, numbered from 0 to 2
(Figure). The scheduler first executes all processes in
queue 0. Only when queue 0 is empty will it execute
processes in queue 1. Similarly, processes in queue 2 will
only be executed if queues 0 and 1 are empty. A
process that arrives for queue 1 will preempt a process in
queue 2. A process in queue 1 will in turn be preempted
by a process arriving for queue 0.
This scheduling algorithm gives highest priority to any
process with a CPU burst of 8 milliseconds or less. Such a
process will quickly get the CPU, finish its CPU burst, and
go off to its next I/O burst. Processes that need more
than 8 but less than 24 milliseconds are also served
quickly, although with lower priority than shorter
processes. Long processes automatically sink to queue 2
and are served in FCFS order with any CPU cycles left
over from queues 0 and 1.
A multilevel-feedback-queue scheduler is defined by the following
parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that
process needs service
Multiprocessor Scheduling

Two types:
Asymmetric multiprocessor scheduling
Symmetric multiprocessor scheduling – the processors may share a common
ready queue or each have a different (private) ready queue
Processor Affinity

Processes have an affinity for the processor on which they run.
Consider what happens to cache memory when a process has been running on a
specific processor: The data most recently accessed by the process populates the
cache for the processor; and as a result, successive memory accesses by the process
are often satisfied in cache memory.
Now, if the process migrates to another processor, the contents of the first cache
must be invalidated and the second cache re-populated with the process's data. Because of
the high cost of deleting and re-populating caches, most SMP systems try to avoid
migration of processes from one processor to another and instead attempt to keep a
process running on the same processor. This is known as processor affinity, meaning
that a process has an affinity for the processor on which it is currently running.
Processor affinity takes 2 forms.
Soft affinity is a situation where the operating system attempts to keep a process
running on the same processor but does not guarantee it. Here, it is possible for
a process to migrate between processors, which helps in load balancing among the
processors.
Hard affinity is a situation where a process is not allowed to migrate to another
processor.
Load Balancing

In systems with multiple ready queues, processes must be evenly
distributed among the processors.
Push migration
Pull migration
Load Balancing
Load balancing is typically only necessary on systems where each processor has
its own private queue. On systems with a common ready queue, load balancing is
unnecessary, because once a processor becomes idle, it immediately extracts a
runnable process from the common ready queue.
There are two general approaches to load balancing: push migration and pull
migration.
In push migration, the operating system periodically checks the load (the number of
processes in the ready queue) on each processor. If it finds an imbalance, it evenly
distributes the load by moving (pushing) processes from overloaded to idle or
less-busy processors.
Pull migration occurs when the scheduler of an idle processor pulls a waiting task
from a busy processor. Push and pull migration need not be mutually exclusive
and may be implemented in parallel on load-balancing systems (e.g., the Linux
operating system).
Process Synchronization
Since processes frequently need to communicate with other processes,
there is a need for well-structured communication, without using
interrupts, among processes.
A situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order
in which the access takes place, is called a race condition.
To guard against the race condition ensure only one process at a time can be
manipulating the variable or data. To make such a guarantee processes need
to be synchronized in some way
Process Synchronization
• The Critical-Section Problem
• Peterson’s Solution
• Synchronization Hardware
• Semaphores
• Classic Problems of Synchronization
• Monitors
Process Synchronization

Process synchronization controls the order of execution of processes
and is required in multiprogramming systems.
When multiple processes execute concurrently sharing system resources, then
inconsistent results might be produced.
Process synchronization is needed-
When multiple co-operative processes execute concurrently sharing some
system resources.
To avoid the inconsistent results.
Suppose P1 and P2 are two co-operating processes sharing the variable b,
with b = 4;
P1:                    P2:
{                      {
   a = b - 1;   //1       c = 2*b;     //3
   b = 2*a;     //2       b = c - 1;   //4
}                      }
The values of a, b and c change depending on the order of execution:

Order:   1,2,3,4   3,4,1,2   3,1,4,2
a        3         6         3
b        11        12        6
c        12        8         8
Suppose P1 and P2 are two co-operating processes sharing the variable count,
with count = 4;
P1:                    P2:
{                      {
   a = count;             b = count;
   a++;                   b--;
   sleep(2);              sleep(2);
   count = a;             count = b;
}                      }

Depending on the interleaving, the final value of count may be 5 (if P1
writes last) or 3 (if P2 writes last), rather than the expected 4.
Race Condition
A situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular
order in which the access takes place, is called a race condition.
Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment
of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. When one process is executing in its
critical section, no other process is allowed to execute in its critical section.
A critical section is a section of the program where a process accesses shared resources
during its execution.
If a shared resource is accessed by multiple processes concurrently, this can lead to
inconsistent results – the critical-section problem – and guarding against it requires
process synchronization.
Critical-Section Problem
The critical-section problem is to design a protocol that the processes can
use to cooperate. Each process must request permission to enter its critical
section. The section of code implementing this request is the entry section.
The critical section may be followed by an exit section.
Process design -
Process
{
   non-critical section
   entry section
   critical section
   exit section
   non-critical section / remainder section
}
Process design -

Entry Section –
• Acts as a gateway into the critical section
• Ensures only one process is inside; no other process may enter

Exit Section –
• Acts as an exit gate
• Changes are made so that another process can enter the CS.
Criteria for Critical-Section Problem Solutions

Mutual Exclusion
Progress
Bounded Wait

(Each process Pi repeatedly executes its entry section, critical section,
exit section, and remainder section.)
A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections. No other
process can enter the critical section until the process already present
inside it completes.
2. Progress. If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes that
are not executing in their remainder sections can participate in deciding
which will enter its critical section next, and this selection cannot be
postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times
that other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted.
Two general approaches are used to handle critical sections
in operating systems:
Preemptive kernels: A preemptive kernel allows a process to
be preempted while it is running in kernel mode.
Nonpreemptive kernels: A nonpreemptive kernel does not
allow a process running in kernel mode to be preempted; a
kernel-mode process will run until it exits kernel mode, blocks,
or voluntarily yields control of the CPU.
Peterson's Solution
A classic software-based solution to the critical-section problem is known as Peterson's
solution. It addresses the requirements of mutual exclusion, progress, and bounded
waiting.
It is a two-process solution.
Assume that the LOAD and STORE instructions are atomic; that is, cannot be
interrupted
The two processes share two variables:
int turn;
Boolean flag[2]
The variable turn indicates whose turn it is to enter the critical section. The flag array is
used to indicate if a process is ready to enter the critical section. flag[i] = true implies
that process Pi is ready.
Peterson's Solution
Summary :
Software based solution to Critical Section Problem.
Restricted to 2 process.
In the entry section and exit section, the processes share two variables –
int turn;
Boolean flag[2]
Information
turn - whose turn it is to enter the critical section.
flag - if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready.
Process i:                               Process j:
do {                                     do {
   flag[i] = TRUE;                          flag[j] = TRUE;
   turn = j;                                turn = i;
   while (flag[j] && turn == j);            while (flag[i] && turn == i);
      CRITICAL SECTION                         CRITICAL SECTION
   flag[i] = FALSE;                         flag[j] = FALSE;
      REMAINDER SECTION                        REMAINDER SECTION
} while (TRUE);                          } while (TRUE);
Mutual exclusion – when Pi is executing, flag[i] == true and turn == i, so Pj is not allowed in
Progress – on leaving, each process gives the other a chance to execute
Bounded wait – in the exit section the flag is set to false, which breaks the while loop of the
other process.
It proves that
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Synchronization Hardware

Classic software-based solutions such as Peterson's are not guaranteed to
work on modern computer architectures. Simple hardware instructions can be
used effectively in solving the critical-section problem. These solutions are
based on locking – that is, protecting critical regions through the use of
locks.
Solution to Critical Section problem using locks
Hardware support is used to solve the critical section problem.
Uniprocessor system:
Disable interrupts while the critical section is executing, so that the
critical-section code runs without preemption.
Multiprocessor system:
Disabling interrupts on all processors is not feasible and is time consuming.
Hence, modern machines provide special atomic hardware instructions
Atomic = non-interruptable, executes at once, without break;
Either test memory word and set value
Or swap contents of two memory words
Test And Set Instruction
Definition: test the memory word and set its value.
It is guaranteed by hardware that only one process
executes this function at a time.

boolean TestAndSet (boolean *target)
{
   boolean rv = *target;
   *target = TRUE;
   return rv;
}
Test And Set Instruction

Solution using TestAndSet():
A shared Boolean variable lock, initialized to FALSE.

Process Pi:
do {
   while (TestAndSet(&lock))
      ;   // do nothing

   // critical section

   lock = FALSE;

   // remainder section
} while (TRUE);

When Pi is in the CS, lock is TRUE; if Pj tries to enter, TestAndSet() returns TRUE and Pj keeps waiting.
Swap Instruction

Definition: swap the contents of two memory words.
It is guaranteed by hardware that only one process
executes this function at a time.

void Swap (boolean *a, boolean *b)
{
   boolean temp = *a;
   *a = *b;
   *b = temp;
}
Swap Instruction

Solution using Swap():
A shared Boolean variable lock, initialized to FALSE;
each process has a local Boolean variable key.

Process Pi:
do {
   key = TRUE;
   while (key == TRUE)
      Swap(&lock, &key);

   // critical section

   lock = FALSE;

   // remainder section
} while (TRUE);
Semaphores

The hardware-based solutions to the critical-section problem
are complicated as well as generally inaccessible to
application programmers. So operating-system designers
build software tools to solve the critical-section problem; this
synchronization tool is called a semaphore.

• A semaphore S is an integer variable.
• Two standard operations modify S: wait() and signal().
• They were originally called P() and V().

Usage pattern:
   wait(S);     // entry section
   // critical section
   signal(S);   // exit section
Semaphores

A semaphore is an integer variable which helps in process synchronization.
Two standard operations are used on a semaphore: wait() and signal().
• It must be guaranteed that no two processes can execute wait() and
signal() on the same semaphore at the same time.
• wait() is also written p() or down(); signal() is also written v() or up().

The two indivisible (atomic) operations can be written as shown below:

wait(S) {
   while (S <= 0)
      ;   // no-op
   S--;
}

signal(S) {
   S++;
}
Semaphore types

Counting semaphore – the integer value can range over an unrestricted
domain (-∞ to +∞).
Binary semaphore – the integer value can take only the two values 0 and 1.
Also known as a mutex lock; it provides mutual exclusion:

Semaphore S;   // initialized to 1

wait(S);
// critical section
signal(S);
Semaphore: illustration

Consider two concurrently running processes, with semaphore synch
initialized to 0:

In process P1:            In process P2:
   S1;                       wait(synch);
   signal(synch);            S2;

Because synch is initialized to 0, P2 will execute S2 only after P1
has invoked signal(synch), which is after statement S1 has
been executed.
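POSIX exposes semaphores directly; a sketch of this same S1-before-S2 ordering with two threads standing in for P1 and P2 (sem_wait() and sem_post() correspond to wait() and signal()):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t synch;                    /* initialized to 0 in main() */

static void *p1(void *arg) {
    printf("S1\n");                    /* statement S1 */
    sem_post(&synch);                  /* signal(synch) */
    return NULL;
}

static void *p2(void *arg) {
    sem_wait(&synch);                  /* wait(synch): blocks until P1 posts */
    printf("S2\n");                    /* S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);            /* shared between threads, value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}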
Disadvantage of semaphores

The disadvantage of the semaphore is busy waiting: while a process is in its
critical section, any other process that tries to enter its critical section must
loop continuously in the entry code.
Busy waiting wastes CPU cycles that some other process might be able to
use productively. This type of semaphore is also called a spin lock because
the process spins while waiting for the lock.
Solution to busy wait
Modify the definition of the wait() and signal()operations as follows: When a
process executes the wait() operation and finds that the semaphore value
is not positive, it must wait. Rather than engaging in busy waiting, the
process can block itself.
The block operation places a process into a waiting queue associated with
the semaphore, and the state of the process is switched to the waiting
state. Then control is transferred to the CPU scheduler, which selects
another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted
when some other process executes a signal() operation. The process is
restarted by a wakeup() operation, which changes the process from the
waiting state to the ready state. The process is then placed in the ready
queue.
Semaphore Implementation with no Busy Waiting
Each semaphore is associated with a waiting queue and has:
• a value (of type integer)
• a waiting list

Two operations:
block – place the process invoking the operation on the
appropriate waiting queue.
wakeup – remove one of the processes in the waiting queue and place
it in the ready queue.
Semaphore with no Busy Waiting

Implementation of wait:

wait(S) {
   S.value--;
   if (S.value < 0) {
      add this process to the waiting queue;
      block();
   }
}

Implementation of signal:

signal(S) {
   S.value++;
   if (S.value <= 0) {
      remove a process P from the waiting queue;
      wakeup(P);
   }
}

(A negative value counts the waiting processes: e.g., S.value == -2 means
two processes are blocked in S's waiting queue.)
Classic Problems of synchronization
These problems are used for testing nearly every newly proposed
synchronization scheme. The following problems of
synchronization are considered as classical problems:
Bounded-Buffer Problem (producer and consumer share a buffer)
Readers-Writers Problem (all readers may read; only one writer may write)
https://youtu.be/p2XDhW5INOo
Dining-Philosophers Problem
Problems of Synchronization
Bounded-Buffer Problem
Because the buffer pool has a maximum size, this problem is often called
the Bounded buffer problem.
This problem is generalised in terms of the Producer Consumer problem,
where a finite buffer pool is used to exchange messages between
producer and consumer processes.
Solution to this problem is, creating two counting semaphores "full" and
"empty" to keep track of the current number of full and empty buffers
respectively.
Producers produce items and consumers consume them, each using one
buffer slot at a time.
The main complexity of this problem is that we must maintain the
count of both the empty and the full buffers that are available.
Problems of Synchronization
Readers-Writers Problem
Suppose that a database is to be shared among several concurrent processes.
Some of these processes may want only to read the database, whereas others
may want to update (that is, to read and write) the database. We distinguish
between these two types of processes by referring to the former as readers and
to the latter as writers. Precisely in OS we call this situation as the readers-writers
problem.
Problem parameters:
• One set of data is shared among a number of processes.
• Once a writer is ready, it performs its write. Only one writer may write at a
time.
• If a process is writing, no other process can read it.
• If at least one reader is reading, no other process can write.
• Readers only read; they may not write.
Bounded-Buffer Problem
https://youtu.be/Qx3P2wazwI0
N buffers, each of which can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value N
Bounded Buffer Problem (Cont.)
The structure of the producer process
do {
   // produce an item
   wait(empty);
   wait(mutex);
   // add the item to the buffer
   signal(mutex);
   signal(full);
} while (true);
Bounded Buffer Problem (Cont.)
The structure of the consumer process
do {
   wait(full);
   wait(mutex);
   // remove an item from the buffer
   signal(mutex);
   signal(empty);
   // consume the removed item
} while (true);
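A runnable version of the two structures above, with POSIX semaphores and a mutex playing the roles of empty, full, and mutex (buffer size and item count are arbitrary):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                            /* number of buffer slots */
#define ITEMS 10

static int buffer[N], in = 0, out = 0;
static sem_t empty, full;              /* counting semaphores */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);              /* wait(empty) */
        pthread_mutex_lock(&mutex);    /* wait(mutex) */
        buffer[in] = i;                /* add the item to the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);  /* signal(mutex) */
        sem_post(&full);               /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);               /* wait(full) */
        pthread_mutex_lock(&mutex);    /* wait(mutex) */
        int item = buffer[out];        /* remove an item from the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);  /* signal(mutex) */
        sem_post(&empty);              /* signal(empty) */
        printf("consumed %d\n", item); /* consume the removed item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);            /* all N slots empty initially */
    sem_init(&full, 0, 0);             /* no full slots initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}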
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write.
Problem – allow multiple readers to read at the same time. Only one single
writer can access the shared data at the same time.
Shared Data
Data set
Semaphore mutex initialized to 1.
Semaphore wrt initialized to 1.
Integer readcount initialized to 0.
Readers-Writers Problem (Cont.)
The structure of a writer process
do {
   wait(wrt);
   // writing is performed
   signal(wrt);
} while (true);
Readers-Writers Problem (Cont.)
The structure of a reader process
do {
   wait(mutex);
   readcount++;
   if (readcount == 1)
      wait(wrt);
   signal(mutex);

   // reading is performed

   wait(mutex);
   readcount--;
   if (readcount == 0)
      signal(wrt);
   signal(mutex);
} while (true);
Dining-Philosophers Problem
https://youtu.be/FYUi-u7UWgw
• The Dining-Philosophers Problem: K philosophers are
seated around a circular table with one chopstick
between each pair of philosophers.
• A philosopher may eat if he can pick up the two
chopsticks adjacent to him. A chopstick may be
picked up by either of its adjacent philosophers, but
not by both.
• This problem involves the allocation of limited resources to
a group of processes in a deadlock-free and starvation-
free manner.
Shared data
Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem (Cont.)
The structure of Philosopher i:
do {
   wait(chopstick[i]);
   wait(chopstick[(i + 1) % 5]);
   // eat
   signal(chopstick[i]);
   signal(chopstick[(i + 1) % 5]);
   // think
} while (true);
Problems with Semaphores

Incorrect use of semaphore operations, for example:
• signal(mutex) … wait(mutex)
• wait(mutex) … wait(mutex)
• omitting wait(mutex) or signal(mutex) (or both)
Problems with Semaphores
• Suppose that a process interchanges the order in which the wait() and signal() operations
on the semaphore mutex are executed, resulting in the following execution:
signal(mutex);
...
critical section
...
wait(mutex);
• Suppose that a process replaces signal(mutex) with wait(mutex). That is, it executes
wait(mutex);
...
critical section
...
wait(mutex);
In this case, a deadlock will occur.
Suppose that a process omits the wait(mutex), or the signal(mutex), or both. In this case, either
mutual exclusion is violated or a deadlock will occur.
Monitors
A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
Only one process may be active within the monitor at a time
monitor monitor-name
{
   // shared variable declarations

   procedure P1 (…) { … }
   …
   procedure Pn (…) { … }

   initialization code (…) { … }
}
Schematic view of a Monitor
https://youtu.be/ufdQ0GR855M
Condition Variables
condition x, y;
Two operations on a condition variable:
x.wait() – the process that invokes the operation is
suspended.
x.signal() – resumes one of the processes (if any) that
invoked x.wait().
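Pthreads offers the same mechanism as condition variables paired with a mutex (the monitor's implicit lock must be taken explicitly here); a sketch in which one thread waits on x until another signals it:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
static int ready = 0;

static void *waiter(void *arg) {
    pthread_mutex_lock(&m);
    while (!ready)                     /* x.wait(): suspend until signalled */
        pthread_cond_wait(&x, &m);     /* atomically releases m while waiting */
    printf("condition satisfied\n");
    pthread_mutex_unlock(&m);
    return NULL;
}

static void *signaler(void *arg) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&x);           /* x.signal(): resume one waiter */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaler, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}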
Monitor with Condition Variables
Solution to Dining Philosophers
monitor DP
{
   enum { THINKING, HUNGRY, EATING } state[5];
   condition self[5];

   void pickup(int i) {
      state[i] = HUNGRY;
      test(i);
      if (state[i] != EATING)
         self[i].wait();
   }

   void putdown(int i) {
      state[i] = THINKING;
      // test left and right neighbors
      test((i + 4) % 5);
      test((i + 1) % 5);
   }
Solution to Dining Philosophers (Cont.)
   void test(int i) {
      if ((state[(i + 4) % 5] != EATING) &&
          (state[i] == HUNGRY) &&
          (state[(i + 1) % 5] != EATING)) {
         state[i] = EATING;
         self[i].signal();
      }
   }

   initialization_code() {
      for (int i = 0; i < 5; i++)
         state[i] = THINKING;
   }
}