MODULE 3: OS
SNEHA G
ASSISTANT PROFESSOR
DEPT. OF CSE
DR AIT
Outline
• Background
• The Critical-Section Problem
• Peterson’s Solution
• Hardware Support for Synchronization
• Mutex Locks
• Semaphores
• Monitors
OBJECTIVES
• Describe the critical-section problem and illustrate a race condition
• Illustrate hardware solutions to the critical-section problem using memory
barriers, compare-and-swap operations, and atomic variables
• Demonstrate how mutex locks, semaphores, monitors, and condition
variables can be used to solve the critical section problem
• Evaluate tools that solve the critical-section problem in low-, moderate-, and high-contention
scenarios
BACKGROUND
• Process Synchronization is the coordination of execution of multiple processes in
a multi-process system to ensure that they access shared resources in a controlled
and predictable manner.
• It aims to resolve the problem of race conditions and other synchronization
issues in a concurrent system.
BACKGROUND
• A cooperating process is one that can affect or be affected by other processes executing in the
system (figure: two cooperating processes P1 and P2).
• DISADVANTAGES: overhead, deadlock, limited parallelism, and contention for shared resources.
(Figure: general structure of a process, with a critical section followed by a remainder section.)
CRITICAL SECTION (Contd.)
Solution to the Critical Section Problem:
• Mutual Exclusion: Mutual exclusion implies that only one process can be inside
the critical section at any time. If any other processes require the critical section,
they must wait until it is free.
• Progress: Progress means that if a process is not using the critical section, then it
should not stop any other process from accessing it. In other words, any process
can enter a critical section if it is free.
• Bounded Waiting: Bounded waiting means that each process must have a limited
waiting time. It should not wait endlessly to access the critical section.
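These three requirements are usually stated against the general structure of a process, in which
synchronization code brackets the critical section. A minimal sketch of that structure in C-like
pseudocode (entry_section() and exit_section() are placeholder names used for illustration, not a real API):

do {
    entry_section();      /* request permission to enter the critical section */
    /* CRITICAL SECTION: access shared data */
    exit_section();       /* announce that the critical section is now free */
    /* REMAINDER SECTION: code that does not touch shared data */
} while (true);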
PETERSON'S SOLUTION
• A crucial component of process synchronization, the mutual exclusion problem has a well-known
solution in Peterson's algorithm.
• This mutual exclusion algorithm was developed by Gary Peterson in 1981.
• Peterson's solution is a classical software-based solution to the critical section problem.
• It is restricted to two processes that alternate execution between their critical section and
remainder section. Let us call the processes Pi and Pj.
• In Peterson's solution, we have two shared variables:
• boolean flag[2]: initialized to FALSE; initially no process is interested in entering the critical
section.
• int turn: indicates whose turn it is to enter the critical section.
PETERSON'S SOLUTION

Process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    /* CRITICAL SECTION */
    flag[i] = false;
    /* REMAINDER SECTION */
} while (TRUE);

Process Pj:
do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);
    /* CRITICAL SECTION */
    flag[j] = false;
    /* REMAINDER SECTION */
} while (TRUE);
PETERSON'S SOLUTION
Peterson’s Solution preserves all three conditions:
• Mutual Exclusion is assured as only one process can access the critical section at any time.
• Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
• Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s Solution
• It involves busy waiting.
• It is limited to 2 processes.
• Peterson’s solution is not guaranteed to work on modern CPU architectures, because compilers and
processors may reorder the independent loads and stores it relies on.
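As a hedged illustration of the last point: if the shared variables are made sequentially consistent
atomics (C11 <stdatomic.h>), the reordering problem goes away. This is only a sketch of how the
entry/exit code for process i might look with atomics, not the algorithm as originally published:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];                /* both start false (not interested) */
atomic_int  turn;

void enter(int i) {                 /* i is 0 or 1; j is the other process */
    int j = 1 - i;
    atomic_store(&flag[i], true);   /* I am interested */
    atomic_store(&turn, j);         /* but let the other process go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                           /* busy wait */
}

void leave(int i) {
    atomic_store(&flag[i], false);  /* no longer interested */
}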
Hardware Synchronization
• When two processes running concurrently share the same data or variable, the value of that
variable may not be updated correctly before it is used by the second process. Such a situation is
known as a race condition.
• Hardware provides efficient, low-level solutions to the process synchronization problem.
• There are three algorithms in the hardware approach of solving Process Synchronization
problem:
1. Test and Set
2. Swap
3. Unlock and Lock
Hardware Synchronization (Contd.)
TEST AND SET:
• A shared lock variable controls entry to the critical section; it is initially FALSE.
• While a process is inside the critical section, lock is TRUE.
• If another process tries to enter the critical section, it executes while(TestAndSet(&lock));
TestAndSet returns TRUE as long as the lock is held, so the other process keeps executing the
while loop.
Hardware Synchronization (Contd.)
TEST AND SET:

Definition:
boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = true;
    return rv;
}

Solution:
boolean lock = false;

while (1)
{
    while (TestAndSet(&lock));   // spin until the lock is free
    /* CRITICAL SECTION */
    lock = false;
    /* REMAINDER SECTION */
}
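On real hardware the same idea is exposed through atomic instructions. A minimal sketch using the
C11 <stdatomic.h> atomic_flag type, whose atomic_flag_test_and_set() behaves like the TestAndSet
above (shown purely as an illustration, not as part of the original slides):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* spin: returns true while the flag is already set */
}

void release(void) {
    atomic_flag_clear(&lock);          /* set the flag back to clear */
}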
Hardware Synchronization (Contd.)
TEST AND SET:
• Bounded waiting is not guaranteed: a process may keep losing the race for the lock and wait an
unbounded amount of time before entering the critical section.

SWAP:
boolean lock = false;       // shared
boolean key;                // one per process

void swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

while (1)
{
    key = true;
    while (key)
        swap(&lock, &key);
    /* CRITICAL SECTION */
    lock = false;
    /* REMAINDER SECTION */
}
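The swap-based lock maps directly onto the C11 atomic_exchange() operation, which atomically writes
a new value and returns the old one. A hedged sketch, illustrative only and not from the slides:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock;                       /* false == unlocked */

void acquire(void) {
    /* atomically set lock to true; keep trying while the old value was true */
    while (atomic_exchange(&lock, true))
        ;
}

void release(void) {
    atomic_store(&lock, false);
}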
Hardware Synchronization (Contd.)
UNLOCK AND LOCK ALGORITHM: uses the TestAndSet method to control the value of lock.
• Unlock and lock algorithm uses a variable waiting[i] for each process i.
• waiting[i] checks if the process i is waiting or not to enter into the critical section.
• All the processes are maintained in a ready queue before entering into the critical
section.
• The processes are considered in order of their process number; conceptually the queue is a
circular queue.
Hardware Synchronization (Contd.)
boolean lock = false;        // shared
boolean waiting[n];          // one entry per process
boolean key;                 // one per process

// process i
while (1)
{
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = false;
    /* CRITICAL SECTION */
    j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;        // nobody is waiting: release the lock
    else
        waiting[j] = false;  // hand the critical section directly to process j
    /* REMAINDER SECTION */
}
Hardware Synchronization (Contd.)
Advantages of Hardware Instructions
• Hardware instructions are easy to implement and improve the efficiency of the system.
• They support any number of processes, on either a single-processor or a multiprocessor system.
• With hardware instructions, you can implement multiple critical sections, each protected by its own
lock variable.
Disadvantages of Hardware Instructions
• Processes waiting to enter their critical section consume a lot of processor time, which increases busy
waiting.
• Because the selection of the process that enters its critical section next is arbitrary, some processes
may wait indefinitely, which leads to starvation.
• Deadlock is also possible.
semaphore
• Semaphores were proposed by Edsger Dijkstra.
• A semaphore is a technique to manage concurrent processes by using a simple integer value.
• It is a non-negative variable that is shared between threads.
• This variable is used to solve the critical section problem and to achieve process
synchronization in a multiprocessing environment.
• It consists of two standard atomic operations: wait() and signal().
• wait( ) — also called P( ) or Degrade( )
• signal( ) — also called V( ) or Upgrade( )
semaphore
DEFINITION OF wait( ):
wait(Semaphore S)
{
    while (S <= 0)
        ;        // busy wait
    S--;
}

DEFINITION OF signal( ):
signal(Semaphore S)
{
    S++;
}
semaphore
Note:
• All the modifications to the integer value of the semaphore in the wait( ) and signal( )operations
must be executed indivisibly.
• When one process modifies the semaphore value, no other process can simultaneously modify
that same semaphore value.
• They are used to enforce mutual exclusion, avoid race conditions and implement
synchronization between processes.
semaphore
Types of Semaphores:
Counting Semaphores: the value is an integer with an unrestricted domain.
• These semaphores are used to coordinate resource access, where the semaphore count is the
number of available resources.
• When a resource is added, the semaphore count is incremented; when a resource is removed, the
count is decremented.
Binary Semaphores: binary semaphores are like counting semaphores, but their value is restricted to 0
and 1.
• The wait operation only works when the semaphore is 1, and the signal operation succeeds when
the semaphore is 0.
• It is sometimes easier to implement binary semaphores than counting semaphores.
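As an illustration only (the slides do not name a specific API), a binary semaphore used as a
mutual-exclusion lock might look like this sketch with the POSIX semaphore interface; the shared
counter and worker function are assumptions made for the example:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                           /* binary semaphore protecting the shared counter */
int shared_counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);              /* entry section */
        shared_counter++;              /* critical section */
        sem_post(&mutex);              /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);            /* initial value 1 == unlocked */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}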
semaphore
ADVANTAGES:
• Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
• There is no resource wastage because of busy waiting in semaphores as processor
time is not wasted unnecessarily to check if a condition is fulfilled to allow a
process to access the critical section.
• Semaphores are implemented in the machine independent code of the microkernel.
So they are machine independent.
semaphore
DISADVANTAGES:
• Semaphores are complicated, so the wait and signal operations must be used in the correct order
to prevent deadlocks.
• The operating system has to keep track of all calls to wait and signal on the semaphore.
• Semaphores may lead to a priority inversion where low priority processes may
access the critical section first and high priority processes later.
semaphore
CLASSICAL PROBLEM OF SYNCHRONIZATION: DINING-PHILOSOPHERS PROBLEM
A philosopher either THINKS or EATS.
• When a philosopher thinks, he does not interact with his colleagues.
• When hungry, he tries to pick up the two forks that are closest to him.
• He can pick up only one fork at a time.
• He cannot pick up a fork that is already in the hands of a neighbour.
• When a philosopher has both forks at the same time, he eats and then releases them.
semaphore
Solution:
1. There should be at most (k-1) philosophers at the table.
2. A philosopher should only be allowed to pick up his chopsticks if both are available at the same
time.
3. An odd-numbered philosopher should pick up the left chopstick first and then the right one, while
the others pick up the right one first and then the left one (see the sketch after this code).

Structure of philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    /* eating */
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    /* thinking */
} while (true);
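A hedged sketch of the third idea (odd/even asymmetry) with POSIX semaphores; the thread setup is
omitted and the helper names are assumptions for illustration, not from the slides:

#include <semaphore.h>

#define N 5
sem_t chopstick[N];          /* each initialized to 1 with sem_init(&chopstick[i], 0, 1) */

void philosopher(int i) {
    int left = i, right = (i + 1) % N;
    while (1) {
        /* think */
        if (i % 2 == 1) {                 /* odd philosopher: left chopstick first, then right */
            sem_wait(&chopstick[left]);
            sem_wait(&chopstick[right]);
        } else {                          /* even philosopher: right chopstick first, then left */
            sem_wait(&chopstick[right]);
            sem_wait(&chopstick[left]);
        }
        /* eat */
        sem_post(&chopstick[left]);
        sem_post(&chopstick[right]);
    }
}

Because adjacent philosophers reach for the same chopstick first, a full circular wait cannot form.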
semaphore
CLASSICAL PROBLEM OF SYNCHRONIZATION: BOUNDED-BUFFER PROBLEM

CLASSICAL PROBLEM OF SYNCHRONIZATION: READER–WRITER PROBLEM
semaphore
Reader–Writer PROBLEM:
Three variables are used to implement the solution: mutex, wrt, and readcnt.
• Semaphore mutex: ensures mutual exclusion when readcnt is updated, i.e. when any reader enters
or exits the critical section.
• Semaphore wrt: used by both readers and writers; a writer holds it while writing, and the first/last
reader acquires/releases it on behalf of all readers.
• int readcnt: the number of processes currently reading in the critical section; initially 0.
semaphore
Reader–Writer PROBLEM:

Writer:
wait(wrt);
/* write into the object */
signal(wrt);

Reader:
wait(mutex);
readcnt++;
if (readcnt == 1)
    wait(wrt);       /* first reader locks out writers */
signal(mutex);
/* read the object */
wait(mutex);
readcnt--;
if (readcnt == 0)
    signal(wrt);     /* last reader lets writers in again */
signal(mutex);
semaphore
Reader process:
1. The reader requests entry to the critical section.
2. If allowed:
• It increments readcnt inside the critical section.
• If this reader is the first reader entering, it locks the wrt semaphore to keep writers out while
any reader is inside.
• It then signals mutex, since other readers are allowed to enter while some are already reading.
• After reading, it exits the critical section.
• When exiting, it checks whether any reader is still inside; if not, it signals the semaphore wrt so
that a writer can now enter the critical section.
3. If not allowed, it keeps waiting.
semaphore
• Writer process:
1. The writer requests entry to the critical section by calling wait(wrt).
2. If allowed, it enters and performs the write; if not, it keeps waiting.
3. When finished, it signals wrt and exits the critical section.
semaphore
Bounded buffer Problem:
The Producer tries to insert data into empty slot of the buffer
The Consumer tries to remove data from a filled slot in the buffer
The Producer must not insert data when the buffer is full
The Consumer must not remove data when the buffer is empty
The Producer and Consumer should not insert and remove data simultaneously
semaphore
Bounded buffer Problem:
Semaphores are integer variables primarily used to solve the critical section problem by controlling
access to shared resources.
Two operations, wait and signal, are used for process synchronization:
• The wait operation decrements the value of the semaphore.
• The signal operation increments the value of the semaphore.
• When the value of the semaphore is zero, any process that performs a wait operation is blocked
until another process performs a signal operation.
semaphore
Bounded buffer Problem:
Three semaphores are used:
• m (mutex) — a binary semaphore used to acquire and release the lock on the buffer.
• empty — a counting semaphore whose initial value is the number of slots (n) in the buffer, since
initially all slots are empty.
• full — a counting semaphore whose initial value is 0.
semaphore

Producer:
do
{
    wait(empty);
    wait(mutex);
    x++;
    printf("\n Producer produces the item = %d", x);
    signal(mutex);
    signal(full);
} while (true);

Consumer:
do
{
    wait(full);
    wait(mutex);
    printf("\n Consumer consumes item = %d", x);
    x--;
    signal(mutex);
    signal(empty);
} while (true);
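A more concrete, hedged sketch of the same scheme with POSIX semaphores and threads; the buffer
size, item values, and function names are illustrative assumptions, not taken from the slides:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5
int buffer[N];
int in = 0, out = 0;                 /* circular-buffer indices */
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);      /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;           /* add item to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);       /* announce a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);       /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];      /* remove item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);      /* announce a free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);    /* all N slots start empty */
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Note that wait(empty) comes before wait(mutex); reversing that order can deadlock when the buffer is full.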
semaphore
Advantages:
• Enforce mutual exclusion to prevent race conditions.
• Synchronize process execution.
• Prevent deadlocks.
• Efficiently manage system resources.
Disadvantages:
• Implementation is difficult: using semaphores correctly can be tricky and error-prone.
• Potential deadlock and livelock.
• Synchronization overhead.
MONITORS
• Monitors are a synchronization tool used in process synchronization to manage access to shared
resources and coordinate the actions of numerous threads or processes.
• A monitor is essentially a module that encapsulates a shared resource and provides access to that
resource through a set of procedures.
• The procedures provided by a monitor ensure that only one process can access the shared resource
at any given time, and that processes waiting for the resource are suspended until it becomes
available.
• Monitors are used to simplify the implementation of concurrent programs by providing a higher-
level abstraction that hides the details of synchronization.
• Monitors provide a structured way of sharing data and synchronization information, and eliminate
the need for complex synchronization primitives such as semaphores and locks.
MONITORS
1.It is the collection of condition variables and
procedures combined together in a special kind of
module or a package.
2.The processes running outside the monitor can’t access the internal variables of the monitor, but
can call the procedures of the monitor.
3.Only one process at a time can execute code inside
monitors.
MONITORS
Condition Construct: condition x, y;
Two operations are performed on the condition variables of the monitor: wait and signal.
• x.wait(): the process invoking this operation is suspended until another process invokes x.signal().
• x.signal(): resumes exactly one suspended process (if no process is suspended, it has no effect).
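Monitors are not built into C, but the same wait/signal discipline can be approximated with a
pthread mutex plus a condition variable. A hedged sketch; the "monitor-like" structure and names
below are assumptions made for illustration:

#include <pthread.h>
#include <stdbool.h>

/* a tiny monitor-like object guarding a single shared resource */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
bool busy = false;

void acquire_resource(void) {
    pthread_mutex_lock(&m);          /* only one thread runs "inside the monitor" at a time */
    while (busy)
        pthread_cond_wait(&x, &m);   /* like x.wait(): releases m and suspends the caller */
    busy = true;
    pthread_mutex_unlock(&m);
}

void release_resource(void) {
    pthread_mutex_lock(&m);
    busy = false;
    pthread_cond_signal(&x);         /* like x.signal(): resumes one suspended thread */
    pthread_mutex_unlock(&m);
}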
MONITORS
Dining-Philosophers Solution using a Monitor:
• We illustrate monitor concepts by presenting a deadlock-free solution to the dining-philosophers
problem.
• Solution: a philosopher is restricted to picking up his chopsticks only if both of them are available.
• To code this solution we distinguish three states for each philosopher, using the data structure
enum {THINKING, HUNGRY, EATING} state[5];
• Philosopher i can set state[i] = EATING only if his two neighbours are not eating:
(state[(i+4)%5] != EATING) and (state[(i+1)%5] != EATING)
• condition self[5]; philosopher i can delay himself when he is hungry but unable to obtain the
chopsticks he needs.
MONITORS - DPP
monitor dpp {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];
    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }
    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }
    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
MONITORS
Bounded buffer Problem:
• If the buffer is full, the producer waits.
• If the buffer is empty, the consumer waits.
Producer()
{
    /* add item to buffer */
}
Consumer()
{
    /* access item from buffer */
}
MONITORS
monitor bbp {
    int buffer[5];
    int count;
    condition full, empty;
    Producer() {
        if (count == 5)
            full.wait();
        /* add item to buffer */
        count++;           // increase count value
        empty.signal();
    }
    Consumer() {
        if (count == 0)
            empty.wait();
        /* access item from buffer */
        count--;           // decrease value of count
        full.signal();
    }
}
Mutex lock
• A mutex lock makes it possible to implement mutual exclusion.
• It prevents the lock from being acquired by more than one thread or process at a time.
• A thread or process must first acquire the mutex lock before it can access a shared resource.
• If the lock is currently held by another thread or process, the requesting thread is blocked and
placed in a waiting state until the lock becomes available.
• After acquiring the lock, the thread or process is able to use the shared resource.
• When finished, it releases the lock so that other threads or processes can acquire it.
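A minimal sketch of this acquire/release pattern with the POSIX pthread_mutex API; the protected
variable and thread function are illustrative assumptions, not part of the slides:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long balance = 0;                    /* shared resource */

void *deposit(void *arg) {
    pthread_mutex_lock(&lock);       /* acquire: blocks if another thread holds the lock */
    balance += 100;                  /* critical section */
    pthread_mutex_unlock(&lock);     /* release: lets a waiting thread proceed */
    return NULL;
}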
Mutex lock
(Figure: why checking and setting the lock must be atomic.)
• Initially lock = 0. Process P1 executes while (lock != 0); and finds the lock free, but a context
switch occurs before it can set lock = 1.
• Process P2 now also executes while (lock != 0);, finds the lock free, sets lock = 1 and enters the
critical section.
• When P1 resumes, it also sets lock = 1 and enters — BOTH processes are in the critical section
at the same time.
Mutex lock
LIMITATIONS:
• Requires busy waiting.
• Wastes CPU cycles.
ADVANTAGES:
• If the lock is held only for a short duration, no context switch is required.
• One thread can spin-wait on one core while another thread uses the critical section on a different
core.
DEADLOCK
Introduction
A process in an operating system uses resources in the following way:
1. Requests the resource
2. Uses the resource
3. Releases the resource
What is deadlock ??
A deadlock is a situation in which
more than one process is blocked
because each process is holding a
resource and also waiting for some
resource that is acquired by some
other process. Therefore, none of the
processes gets executed.
deadlock
Four Necessary Conditions
• Mutual Exclusion: Only one process can use a resource at any given time i.e. the resources are non-
sharable.
• Hold and wait: A process is holding at least one resource at a time and is waiting to acquire other
resources held by some other process.
• No preemption: A resource can be released only voluntarily by the process holding it, i.e. after that
process has finished using it.
• Circular Wait: A set of processes are waiting for each other in a circular fashion. For example, let's
say there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2, P2
depends on P3, and P3 depends on P0. This creates a circular relation between all these processes, and
they have to wait forever to be executed.
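A hedged sketch of how hold-and-wait plus circular wait arise in practice, with two pthread mutexes
acquired in opposite orders; the thread and lock names are illustrative assumptions, and running
this can genuinely hang:

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&A);      /* holds A ... */
    pthread_mutex_lock(&B);      /* ... and waits for B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *t2(void *arg) {
    pthread_mutex_lock(&B);      /* holds B ... */
    pthread_mutex_lock(&A);      /* ... and waits for A: circular wait, possible deadlock */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}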
Methods for handling deadlock
• Deadlock prevention or avoidance
• Deadlock ignorance
Methods for handling deadlock
Prevention:
It is very important to prevent a deadlock before it can occur, so the system checks each
transaction before it is executed to make sure it does not lead to deadlock.
If there is even a slight chance that a transaction may lead to deadlock in the future, it is
never allowed to execute.
Prevention can be done in four different ways:
• Eliminate mutual exclusion
• Solve hold and Wait
• Allow preemption
• Circular wait Solution
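For the circular-wait solution, a hedged sketch of the usual fix: impose a global order on locks and
always acquire them in that order. The ordering and names are assumptions for illustration, not from
the slides:

#include <pthread.h>

/* global rule: every thread must lock A before B */
pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;   /* order 1 */
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;   /* order 2 */

void *worker(void *arg) {
    pthread_mutex_lock(&A);      /* lower-ordered lock first */
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}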
Methods for handling deadlock
Avoidance:
• Avoidance is forward-looking: by using the strategy of avoidance, we have to make an assumption.
• We need to ensure that all information about the resources a process will need is known to us
before the execution of the process.
• We use the Banker's algorithm (which is, in turn, a gift from Dijkstra) to avoid deadlock.
• In prevention and avoidance, we get correctness of the data, but performance decreases.
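A hedged sketch of the core safety check in the Banker's algorithm (matrix sizes and the process/
resource counts are illustrative assumptions): a state is safe if all processes can finish in some
order using only the currently available resources plus what finishing processes release.

#include <stdbool.h>
#include <string.h>

#define P 5   /* number of processes (assumed for illustration) */
#define R 3   /* number of resource types */

/* returns true if the state described by available, max and allocation is safe */
bool is_safe(int available[R], int max[P][R], int allocation[P][R]) {
    int work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof work);

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            /* need[i][j] = max[i][j] - allocation[i][j]; process i can finish if need <= work */
            bool can_finish = true;
            for (int j = 0; j < R; j++)
                if (max[i][j] - allocation[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];   /* process i finishes and releases its resources */
                finished[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed) return false;   /* no remaining process can finish: unsafe state */
    }
    return true;
}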