Chapter 6
What is Process Synchronization?
1. Concurrent Execution
• In modern operating systems, multiple processes (think of them as programs or parts of
programs) can run at the same time.
• Even with one CPU, the CPU scheduler quickly switches between processes—this is
called context switching.
• It gives us the illusion of parallel execution, even if only one process is running at any
exact moment.
Analogy: Like a teacher helping many students one at a time but very quickly switching
between them, so it looks like all are getting help at once.
2. Interruption and Partial Execution
• A process might be interrupted before it finishes a task.
• This means another process can run before the first process completes, leading to
problems if they share data.
Example:
Imagine two people editing the same document at the same time—if one starts writing but
doesn’t finish before the other starts, the final document may be corrupted or inconsistent.
3. The Problem with Shared Data
• When multiple processes access and change the same data at the same time, we can
get unexpected results.
• This is called a race condition—the final outcome depends on the order in which
processes run.
Example:
Two bank apps trying to update your account balance at the same time. If not handled carefully,
your money might disappear or double.
4. The Solution: Process Synchronization
• We need rules and mechanisms to make sure processes work together safely.
• This is where process synchronization comes in: it’s about making processes wait their
turn so data remains consistent.
Key Concept
Process synchronization ensures that when processes share data or resources, they do so in
an orderly and safe way—so that the result is always correct, no matter how the OS switches
between them.
The Bounded-Buffer Producer-Consumer Problem
The Setup
• Think of a buffer (temporary storage) as a box with N slots.
• One process is a Producer (puts items into the box).
• The other is a Consumer (takes items out of the box).
• The goal: Let them work at the same time, but without breaking the system.
Using a Counter
• We use a variable called counter to keep track of how many items are in the buffer.
o Start with counter = 0 (buffer is empty).
o Every time the producer adds an item → counter++
o Every time the consumer removes an item → counter--
This lets us know:
• When the buffer is full (counter == N)
• When the buffer is empty (counter == 0)
The Problem: Interleaving and Race Conditions
Suppose the producer and consumer access and update counter at the same time.
What goes wrong?
Even though counter++ and counter-- look like one step, they are actually multiple machine
instructions, like:
1. Load counter into register
2. Increment or decrement
3. Store back into memory
If two processes interleave like this:
• Producer reads counter = 5
• Consumer reads counter = 5
• Producer does counter++ → 6
• Consumer does counter-- → 4
Final result should be 5, but it's 4—data inconsistency.
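The lost update above can be avoided with a lock. Below is a minimal Python sketch (threads stand in for processes; the producer/consumer names and iteration counts are illustrative) showing that once the load-modify-store of counter is protected, no updates are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def producer(n):
    global counter
    for _ in range(n):
        with lock:           # without this lock, the three steps can interleave
            counter += 1     # load counter, increment, store back

def consumer(n):
    global counter
    for _ in range(n):
        with lock:
            counter -= 1     # load counter, decrement, store back

t1 = threading.Thread(target=producer, args=(100_000,))
t2 = threading.Thread(target=consumer, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 0: equal increments and decrements, no lost updates
```

Removing the `with lock:` lines reintroduces exactly the race described above: the final value becomes unpredictable.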
Why Interleaving Happens
• The operating system's CPU scheduler switches between processes at any moment.
• So the producer might be halfway through updating counter, then the CPU switches to
the consumer.
• This random switching makes the order unpredictable and causes race conditions.
Why Synchronization is Needed
To fix this, we need to make sure that:
Only one process at a time can update the shared counter.
That’s the job of synchronization tools like:
• Mutexes (mutual exclusion locks)
• Semaphores
• Monitors
These tools let us define a critical section—a block of code where only one process is allowed
to run at a time, especially when dealing with shared data.
Summary
• Bounded buffer: fixed-size storage shared by producer and consumer
• Producer: puts items into the buffer
• Consumer: takes items from the buffer
• Counter: tracks the number of full slots
• Problem: if both update counter at the same time, we get incorrect results (a race condition)
• Solution: use synchronization to prevent simultaneous access
What is a Race Condition?
A race condition happens when:
Two or more processes access and change shared data at the same time, and the final result
depends on the timing (or “race”) between them.
Example: Back to the counter
Let’s say counter = 5.
• Producer is supposed to do counter++ → result should be 6.
• Consumer is supposed to do counter-- → result should be 4.
But what if they both run at the same time?
Here's how it might interleave:
1. Producer reads counter = 5
2. Consumer reads counter = 5
3. Producer increments → 6 (in its local register)
4. Consumer decrements → 4 (in its local register)
5. Producer stores 6
6. Consumer stores 4
Final counter = 4, which is wrong! We lost the producer’s update.
Why It’s Called a "Race"
• It’s like both processes are racing to update the same variable.
• Whoever finishes last determines the final value.
• That means the result is not reliable and can change every time you run the program.
How to Prevent Race Conditions
To fix this, we must synchronize the processes.
Synchronization ensures that only one process at a time can access and modify shared data
like counter.
This is done by wrapping the critical code (the part that updates counter) inside a critical
section — a region where only one process is allowed.
We use synchronization mechanisms to protect this section:
• Mutex (Mutual Exclusion Lock) — basic locking tool
• Semaphore — general synchronization tool
• Monitor — higher-level structure used in some programming languages
Summary Table
• Race condition: two or more processes modify shared data at the same time, causing unpredictable results
• Cause: lack of control over the timing/order of execution
• Effect: final result depends on which process "wins the race"
• Solution: synchronize access, so only one process at a time enters the critical section
What Is the Critical Section?
Think of the critical section as the piece of code in which a process accesses shared resources (like a variable, file, or database table).
Since multiple processes might need the same resource, we need to protect this code from
being executed by more than one process at the same time.
Why Is This a "Problem"?
If two or more processes enter their critical sections simultaneously, and modify the same data
(like the counter), we can get:
• Inconsistent results
• Data corruption
• Unpredictable behavior
So, we must make sure that:
Only one process at a time is allowed in the critical section.
This is called mutual exclusion — a key requirement for correct synchronization.
The Structure of a Process (with Critical Section)
Every process that shares data with others is structured like this:
Entry Section
→ Code that tries to enter the critical section
Critical Section
→ Code that accesses shared data (needs to be protected)
Exit Section
→ Code that leaves the critical section
Remainder Section
→ All other code (doesn’t need synchronization)
Think of the entry section like knocking on the door to a room where only one person is
allowed inside.
The exit section is when the person leaves and opens the door for the next.
The Critical-Section Problem
The problem is:
How do we design a protocol that lets multiple processes coordinate with each other so
that only one enters the critical section at a time?
This is the main challenge in process synchronization.
We want this protocol to satisfy certain rules (more on that soon).
Visual Example
Let’s say:
• P1 and P2 are two processes that both want to update a shared file.
Without protection:
• P1 starts writing.
• At the same time, P2 starts writing.
• Now the file is broken or has incorrect data.
With a correct protocol:
• P1 enters critical section → P2 waits.
• P1 finishes and exits → P2 now enters.
This is how synchronization should work.
Summary Table
• Critical Section: code that accesses shared data (needs to be protected)
• Entry Section: code that checks whether it is safe to enter the critical section
• Exit Section: code that signals it is done (and allows others in)
• Remainder Section: the rest of the program (no shared data access)
• The problem: design a way to ensure only one process at a time enters the critical section
Three Requirements for Solving the Critical-Section Problem
Mutual Exclusion
Definition:
If one process is inside its critical section, then no other process is allowed to enter its critical
section at the same time.
Why it matters:
This prevents data corruption and ensures safe access to shared resources.
Real-life example:
If one person is using an ATM, no one else should be able to use that same ATM machine at the
same time. Otherwise, both could withdraw from the same account, causing incorrect balances.
Progress
Definition:
If no process is currently in the critical section, and some processes want to enter, then the
system must decide fairly and quickly who goes next.
Important condition:
Only processes that are trying to enter the critical section should be involved in this decision—
not those that are doing other things (in their remainder sections).
Why it matters:
You don’t want the system to get stuck or ignore a process that’s ready. This prevents
unnecessary delays.
Real-life example:
Two people want to enter a meeting room. If the room is empty, they should be able to enter
after deciding who goes first. Someone walking by (who doesn't want the room) shouldn’t delay
them.
Bounded Waiting
Definition:
Once a process says, “I want to enter the critical section,” it should not have to wait forever.
There must be a limit on how many other processes can enter before it gets its turn.
Why it matters:
This avoids starvation, where one process waits endlessly while others keep getting access.
Real-life example:
You get in line at a coffee shop. Even if the line is long, you should eventually get served. But if
new people keep skipping ahead, you may never get your coffee. That’s unfair — and that’s
what bounded waiting prevents.
Additional Assumptions:
• Each process runs at nonzero speed → it won’t freeze mid-instruction.
• No assumptions are made about how fast one process is compared to another → the
protocol must work under any timing conditions.
Summary Table
• Mutual Exclusion: only one process in the critical section at a time (prevents data corruption)
• Progress: if the critical section is free, decide fairly who enters next (avoids delays when it is available)
• Bounded Waiting: a process won't wait forever (prevents starvation, i.e., unfair blocking)
Techniques for Solving the Critical-Section Problem
Software Solutions
Used when: You want to solve the CS problem using only code, without special hardware
support.
• These are usually algorithms designed to coordinate processes.
• Work well in uniprocessor systems (one CPU).
• They follow the 3 requirements: mutual exclusion, progress, and bounded waiting.
Example:
• Peterson’s Algorithm — works for two processes, uses shared variables and turn-taking.
• Dekker’s Algorithm, Lamport’s Bakery Algorithm — more complex, can work for
multiple processes.
Peterson's Algorithm is covered in detail below.
Hardware Solutions
Used when: The hardware provides special instructions that help manage synchronization
directly.
• These are low-level machine instructions that let you control access to critical sections.
• Examples include:
o Disable interrupts (on uniprocessor): block all interrupts while in critical section
(not scalable or recommended).
o Test-and-Set instruction
o Compare-and-Swap instruction
o Exchange instruction
These are atomic operations — they happen all at once without being interrupted, which
makes them perfect for building mutual exclusion.
OS and compilers often use these to build mutexes and semaphores under the hood.
Operating System Solutions (Semaphores)
Used when: The OS provides built-in tools for process synchronization.
• These are high-level synchronization primitives.
• The most well-known is the semaphore, introduced by Dijkstra.
Types of Semaphores:
• Binary Semaphore (like a lock or mutex): value is 0 or 1
• Counting Semaphore: allows more than one process in a limited resource pool
Semaphores use two atomic operations:
• wait() (also called P): tries to enter
• signal() (also called V): signals completion
OS-level solutions are most common in real-world programs because they are efficient,
portable, and safer than writing synchronization code manually.
Summary Table
• Software: algorithmic solutions using shared variables (Peterson's, Dekker's, Lamport's)
• Hardware: special atomic machine instructions (Test-and-Set, Compare-and-Swap)
• OS solutions: synchronization tools provided by the OS (semaphores, mutexes)
Algorithm 1: Strict Alternation Approach
What is it?
Strict alternation is a software-based solution designed for two processes (say, P0 and P1).
It enforces that processes take turns entering the critical section, like a queue.
How It Works
We use a shared variable, usually called turn, which can be either 0 or 1.
• If turn == 0 → Process P0 is allowed to enter its critical section.
• If turn == 1 → Process P1 is allowed.
Algorithm for Process Pi:
while (turn != i)
    ;         // busy wait

// critical section

turn = j;     // j = 1 - i (hand the turn to the other process)
Each process waits its turn and then passes control to the other process by setting turn.
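The turn-taking above can be sketched in Python, with two threads standing in for P0 and P1 (iteration counts are arbitrary; the busy wait works because CPython switches threads periodically). Note that both threads repeatedly want the critical section here, so the progress weakness does not show up in this run:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy wait stays short

turn = 0       # whose turn it is to enter
counter = 0    # shared data protected by strict alternation

def worker(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        while turn != i:
            pass             # busy wait until it is my turn
        counter += 1         # critical section
        turn = j             # hand the turn to the other process

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: alternation preserved every update
```

If one thread stopped wanting the critical section partway through, the other would spin forever on `while turn != i`, which is exactly the progress violation discussed below.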
What It Gets Right
• Mutual Exclusion: Only one process can be in the critical section at a time.
• Bounded Waiting: Since they alternate, the other process can enter at most once before a waiting process gets its turn.
What’s the Problem?
Violates Progress:
Imagine:
• P0 finishes using the critical section.
• P1 is doing something else (not interested in entering).
• P0 wants to enter again — but it can’t, because turn == 1.
So P0 is blocked even though no one else wants the critical section. This is unfair and
inefficient.
Summary
• Type: software solution for 2 processes
• Uses: shared variable turn
• Strength: simple, and ensures mutual exclusion
• Weakness: violates Progress (a process may be blocked unnecessarily)
Analogy
Think of two people sharing a bathroom. They’ve agreed to take turns no matter what.
• Even if one doesn’t need the bathroom, the other still must wait until it’s their turn
again.
That’s what makes strict alternation inefficient.
Algorithm 2: Improved Algorithm Using flag[2]
How It Works
We have two shared boolean variables:
bool flag[2]; // both initialized to false
• flag[i] = true means process Pi wants to enter the critical section.
• flag[j] = true means the other process wants in.
Process Pi’s Pseudocode:
do {
    flag[i] = true;
    while (flag[j])
        ;    // wait (busy wait)

    // Critical Section

    flag[i] = false;

    // Remainder Section
} while (true);
What This Algorithm Achieves
Mutual Exclusion
Yes, it is guaranteed: each process raises its own flag before testing the other's, so both can never be inside the critical section at the same time.
• If both processes want to enter at the same time, they will both set flag[i] = true.
• Then each one sees flag[j] == true, so they both wait.
• That waiting, however, creates the problem we discuss next.
What About the Progress Requirement?
Progress says:
If no one is in the critical section, and some processes want to enter, one of them must be
allowed in without indefinite delay.
Fails Progress
Let’s look at this scenario:
• Both P0 and P1 set their flags to true at the same time.
• Now both see the other process's flag is true, so they both wait forever (stuck in
while(flag[j])).
This creates a deadlock-like situation, even though the critical section is free.
They’re both being too polite, saying "You go first," and then both refusing to proceed.
Conclusion
• Mutual Exclusion: satisfied
• Progress: violated (can lead to deadlock if both processes try to enter simultaneously)
• Bounded Waiting: not guaranteed, since a process can wait forever
What’s Next?
To fix this problem, we can look at Peterson’s Algorithm, which uses both:
• flag[2] to indicate intention
• A new shared variable turn to break the tie and guarantee progress
Peterson’s Algorithm (for 2 processes: P0 and P1)
Shared Variables:
bool flag[2]; // intention to enter critical section
int turn; // whose turn it is
• flag[i] = true means process Pi wants to enter its critical section.
• turn = j means Pi is willing to let Pj go first if both want to enter.
Code for Process Pi:
do {
    flag[i] = true;              // I want to enter
    turn = j;                    // let the other go first if it also wants in
    while (flag[j] && turn == j)
        ;                        // wait (busy wait)

    // Critical Section

    flag[i] = false;             // I'm done

    // Remainder Section
} while (true);
In practice:
Each process signals intention to enter (by setting flag[i] = true), but also says, “I’ll wait if the
other one also wants in” (by setting turn = j).
This avoids deadlock and ensures fairness.
How Peterson’s Algorithm Satisfies the 3 Requirements
Mutual Exclusion —
• Only one process can pass the while condition and enter the critical section.
• If both processes want to enter:
o The one that set turn = j last lets the other go.
Progress —
• If no process is in the critical section, the one that wants in will eventually pass the while
loop and proceed.
• Decision is made only by processes wanting to enter, not others.
Bounded Waiting —
• Once Pi expresses interest, at most one entry by the other process can happen before Pi
gets its turn.
So no starvation: if a process wants in, it will get in within a bounded number of steps.
Summary Table
• Type: software solution for 2 processes
• Shared data: flag[2] and turn
• Mutual Exclusion: satisfied (only one process in the critical section)
• Progress: satisfied (no unnecessary blocking)
• Bounded Waiting: satisfied (no starvation)
Intuition Behind It
Peterson’s Algorithm is like:
• Each process raises its hand (sets flag[i] = true).
• Then says "you go first" by setting turn = j.
• But it only waits if the other process also wants to enter and it’s their turn.
If the other process doesn't want in, or it's your turn, you proceed.
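As a sketch, here is Peterson's Algorithm driven by two Python threads (names and counts are illustrative). One caveat: on a real multiprocessor this code would need memory barriers, because compilers and CPUs reorder memory operations; in CPython the global interpreter lock happens to provide the sequential consistency the algorithm assumes:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # yield the GIL often so busy waits stay short

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # who yields when both want in
counter = 0            # shared data protected by the protocol

def worker(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True                  # I want to enter
        turn = j                        # let the other go first
        while flag[j] and turn == j:
            pass                        # busy wait
        counter += 1                    # critical section
        flag[i] = False                 # exit section

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: no update was lost
```

Unlike strict alternation, a thread that is alone in wanting the critical section sails straight through, since flag[j] is false.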
Synchronization Hardware
These are solutions to the Critical-Section Problem that rely on special CPU instructions
designed by hardware engineers.
The key idea: use atomic instructions — operations that cannot be interrupted once they
begin.
Locks: Basic Concept
A lock is a variable or flag that a process must "acquire" before entering the critical section and
"release" when done.
do {
    acquire(lock);    // wait until lock is free, then take it
    // critical section
    release(lock);    // free the lock
    // remainder section
} while (true);
Why We Need Hardware Help
Without atomic operations, two processes could acquire the lock at the same time (because
lock-check and lock-set are separate steps).
To fix this, hardware provides special atomic instructions, which guarantee:
Read-modify-write happens as one unbreakable step.
Examples of Hardware Atomic Instructions
Test-and-Set
• Reads the current value of a memory location
• Sets it to a new value (like 1)
• Returns the old value
If the old value was 0 → lock was free → process gets in
If old value was 1 → someone else has the lock → process must wait
Compare-and-Swap
• Compares the value at a memory location to a given value
• If equal, it swaps in a new value
• All done atomically
Exchange (XCHG)
• Swaps the contents of a register and a memory word atomically
Example Using Test-and-Set
bool lock = false;
do {
    while (test_and_set(&lock))
        ;             // busy wait
    // critical section
    lock = false;
    // remainder section
} while (true);
test_and_set(&lock) sets lock = true and returns the old value. If it was false, we enter;
otherwise, we wait.
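Python has no raw test-and-set instruction, so the sketch below models the atomic read-modify-write with a small internal lock (the AtomicBool class is an illustrative stand-in for the hardware word); the spin-lock logic around it mirrors the pseudocode above:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so busy waiting stays short

class AtomicBool:
    """Models one memory word whose test_and_set is atomic.
    Real hardware does this in a single instruction; here a small
    lock stands in for that atomicity."""
    def __init__(self):
        self._value = False
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:
            old = self._value
            self._value = True
            return old               # old value tells us if the lock was free

    def clear(self):
        self._value = False

lock = AtomicBool()
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        while lock.test_and_set():
            pass                 # busy wait: lock was already held
        counter += 1             # critical section
        lock.clear()             # release the lock

threads = [threading.Thread(target=worker, args=(500,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: mutual exclusion held across 4 threads
```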
What This Hardware Approach Achieves
• Mutual Exclusion: met (only one process gets the lock)
• Progress: not guaranteed (busy waiting may block others unfairly)
• Bounded Waiting: not guaranteed (a process could be stuck waiting forever, with no fair queueing)
These solutions can cause starvation unless combined with higher-level mechanisms like
queues or fairness policies.
Summary
• Type: hardware-level
• Used in: low-level OS code, modern processor architectures
• Instructions: test-and-set, compare-and-swap, exchange
• Goal: create a simple lock that ensures mutual exclusion
• Limitation: often causes busy waiting and may violate bounded waiting without extra logic
Semaphores
What Is a Semaphore?
A semaphore is a special integer variable used to control access to shared resources in
concurrent systems.
• Invented by Edsger Dijkstra
• It’s not a regular integer — it can only be changed using two atomic operations
The Two Atomic Operations
wait(S)
Also called P(S) or down(S)
wait(S):
    while (S <= 0)
        ;        // do nothing (just wait)
    S = S - 1;
This operation waits until S > 0, then decreases S by 1 and continues.
signal(S)
Also called V(S) or up(S)
signal(S):
    S = S + 1;
This increments the semaphore, possibly waking up a waiting process.
Semaphore Behavior
• Think of S as the number of available resources
• When a process wants to use a resource → it calls wait(S)
• When it's done → it calls signal(S)
Example: Mutual Exclusion (Binary Semaphore)
To protect a critical section:
semaphore mutex = 1; // 1 means unlocked
Process Pi:
do {
    wait(mutex);      // acquire lock
    // critical section
    signal(mutex);    // release lock
    // remainder section
} while (true);
• Only one process can enter the critical section because mutex becomes 0 after the first
process enters.
• Others will wait until mutex becomes 1 again.
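The pattern above can be run directly with Python's threading.Semaphore standing in for the OS semaphore (thread and iteration counts are arbitrary):

```python
import threading

mutex = threading.Semaphore(1)   # 1 means unlocked
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # wait(mutex): blocks (no busy wait) if held
        counter += 1             # critical section
        mutex.release()          # signal(mutex)

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 30000: every one of the 3 x 10000 updates survived
```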
Types of Semaphores
• Binary semaphore: value is 0 or 1; acts like a lock (mutex)
• Counting semaphore: value can be greater than 1; controls access to multiple identical resources
Proper Behavior (OS-level Semaphores)
• If wait(S) is called and S <= 0, the process is blocked and added to a queue.
• When another process calls signal(S), it wakes up one of the waiting processes.
This solves the problem of busy waiting.
Summary
• Semaphore: integer used to manage concurrent access
• wait(S): decreases S, or blocks the process if S <= 0
• signal(S): increases S, possibly unblocking a waiting process
• Binary semaphore: like a lock (0 or 1)
• Counting semaphore: allows access to multiple resources
• Advantage: supports mutual exclusion without busy waiting (when implemented by the OS)
1. Bounded-Buffer Problem (Producer-Consumer)
What’s the problem?
• You have a buffer (like a shelf) with limited slots, say N.
• Producers put items on the shelf.
• Consumers take items from the shelf.
• You must not:
o Put items if the shelf is full.
o Take items if the shelf is empty.
The challenge:
• Multiple producers and consumers run concurrently.
• Without synchronization, they might:
o Access the same buffer slot at the same time.
o Cause race conditions on the buffer index or item count.
The solution using semaphores:
We use three semaphores:
semaphore mutex = 1; // for mutual exclusion
semaphore full = 0; // number of full slots
semaphore empty = N; // number of empty slots
Producer:
do {
    // produce item
    wait(empty);      // wait if buffer is full
    wait(mutex);      // enter critical section
    // add item to buffer
    signal(mutex);    // exit critical section
    signal(full);     // one more full slot
} while (true);
Consumer:
do {
    wait(full);       // wait if buffer is empty
    wait(mutex);      // enter critical section
    // remove item from buffer
    signal(mutex);    // exit critical section
    signal(empty);    // one more empty slot
    // consume item
} while (true);
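A runnable sketch of this scheme, assuming one producer and one consumer (Python threads and threading.Semaphore stand in for processes and OS semaphores; N and the item count are arbitrary):

```python
import threading
from collections import deque

N = 5
buffer = deque()                   # holds at most N items
mutex = threading.Semaphore(1)     # mutual exclusion on the buffer
empty = threading.Semaphore(N)     # number of empty slots
full = threading.Semaphore(0)      # number of full slots

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty): block if buffer is full
        mutex.acquire()
        buffer.append(item)            # add item to buffer
        mutex.release()
        full.release()                 # signal(full): one more full slot

def consumer(count, out):
    for _ in range(count):
        full.acquire()                 # wait(full): block if buffer is empty
        mutex.acquire()
        out.append(buffer.popleft())   # remove item from buffer
        mutex.release()
        empty.release()                # signal(empty): one more empty slot

consumed = []
p = threading.Thread(target=producer, args=(list(range(100)),))
c = threading.Thread(target=consumer, args=(100, consumed))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(100)))  # True: all 100 items arrived, in order
```

Note the order of waits in each loop: waiting on empty/full before mutex is essential. Swapping them lets a process sleep while holding mutex, which deadlocks the system.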
2. Readers-Writers Problem
What’s the problem?
• A shared resource (like a file or database) is accessed by:
o Readers → only read data.
o Writers → modify data.
The challenge:
• Multiple readers can read at the same time (no problem).
• But if a writer is writing, no one else (not even readers) should access it.
Goals:
• Allow maximum reading concurrency.
• Ensure no reader or writer starves.
• Avoid race conditions.
Variants:
• First readers–writers problem: No reader is kept waiting unless a writer has already
obtained permission to write.
• Second readers–writers problem: Once a writer is ready, it is given priority over new
readers.
Basic solution idea using semaphores (1st version):
int read_count = 0;
semaphore mutex = 1; // protect read_count
semaphore wrt = 1; // for writer access
Reader:
wait(mutex);
read_count++;
if (read_count == 1)
    wait(wrt);        // first reader locks out writers
signal(mutex);

// read data

wait(mutex);
read_count--;
if (read_count == 0)
    signal(wrt);      // last reader releases writers
signal(mutex);
Writer:
wait(wrt);
// write data
signal(wrt);
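The reader and writer above translate directly to Python (a sketch; the shared list, the snapshots, and the thread counts are illustrative):

```python
import threading

data = []
read_count = 0
mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # held by one writer, or by the group of readers

def reader(snapshots):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()

    snapshots.append(len(data))  # read the shared data (concurrently with other readers)

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(value):
    wrt.acquire()                # exclusive access
    data.append(value)           # write the shared data
    wrt.release()

snapshots = []
threads = [threading.Thread(target=writer, args=(v,)) for v in range(10)]
threads += [threading.Thread(target=reader, args=(snapshots,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(data))  # 10: all writes applied, one at a time
```

Each snapshot is some value between 0 and 10 depending on scheduling, but never a torn or corrupted read, since readers and writers never overlap.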
3. Dining Philosophers Problem
What’s the problem?
• Five philosophers sit around a circular table.
• Each philosopher alternates between thinking and eating.
• There is one fork between each pair, so 5 forks total.
• A philosopher needs both left and right forks to eat.
The challenge:
• If every philosopher picks up their left fork at the same time, they’ll all wait forever for
the right fork → deadlock.
• If some philosophers always get preference, others starve → starvation.
Goals:
• Avoid deadlock and starvation.
• Allow maximum parallelism (let as many eat as safely possible).
Solution sketch (with semaphores):
semaphore forks[5] = {1, 1, 1, 1, 1};
Philosopher i:
do {
    think();
    wait(forks[i]);            // pick up left fork
    wait(forks[(i+1)%5]);      // pick up right fork
    eat();
    signal(forks[i]);          // put down left fork
    signal(forks[(i+1)%5]);    // put down right fork
} while (true);
Problem:
This still risks deadlock.
Common fixes:
• Allow at most four philosophers to try to pick up forks at the same time.
• Make one philosopher pick up the right fork first, which breaks the circular wait.
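A sketch of the second fix in Python: the last philosopher picks up the right fork first, so every philosopher acquires the lower-numbered fork before the higher-numbered one. That global ordering makes the circular wait, and hence deadlock, impossible:

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Break the circular wait: the last philosopher reverses the pickup order,
    # so everyone ends up taking the lower-numbered fork first.
    first, second = (left, right) if i < N - 1 else (right, left)
    for _ in range(rounds):
        # think() would go here
        forks[first].acquire()
        forks[second].acquire()
        meals[i] += 1            # eat() while holding both forks
        forks[second].release()
        forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```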
Summary Table
• Bounded Buffer: goal is to synchronize producer and consumer; risk: race conditions; solution: semaphores (full, empty, mutex)
• Readers-Writers: goal is to allow many readers, one writer; risk: data inconsistency; solution: reader counter plus semaphores
• Dining Philosophers: goal is to avoid deadlock while sharing forks; risks: deadlock and starvation; solution: order fork pickup or limit philosophers
What Are Critical Regions?
A critical region (or conditional critical region) is a high-level language construct used to
synchronize access to shared variables in concurrent programs.
It abstracts away the low-level details of acquiring and releasing locks, so the programmer can
focus on what to synchronize rather than how.
Syntax and Behavior
region v when B do S
• v: a shared variable (must be declared as shared T).
• B: a boolean condition (guard) to check before entering the region.
• S: the code (critical section) that runs exclusively when B is true.
Only one process can execute a region v ... block at a time. Others are delayed (blocked) until:
1. B becomes true,
2. And no other process is in a critical region using v.
Mutual Exclusion is Automatic
• You do not need to declare or use semaphores or locks.
• The compiler or runtime system automatically ensures that only one process accesses v
at a time.
Example: Bounded-Buffer with Critical Regions
Suppose you have a shared buffer, and a producer/consumer system. You might write:
region buffer when count < N do
    // add item to buffer
    count := count + 1;

region buffer when count > 0 do
    // remove item from buffer
    count := count - 1;
• The producer waits until count < N (buffer not full).
• The consumer waits until count > 0 (buffer not empty).
• buffer is the shared resource protected by these regions.
How It Works Internally
Internally, the system uses locks and condition variables to implement the behavior:
• Evaluates the condition B.
• If B is false, the process is placed into a wait queue.
• Once B becomes true and no one else is in the region, the process proceeds.
This logic is similar to a monitor with condition variables, as described by Hoare.
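As a sketch of how a compiler or runtime might realize region v when B do S, here is a Python Region class built on a condition variable (the class and the producer/consumer driver are illustrative, not a standard API):

```python
import threading

class Region:
    """Sketch of 'region v when B do S': mutual exclusion plus a guard condition."""
    def __init__(self):
        self._cond = threading.Condition()   # one lock and wait queue per shared variable

    def run(self, guard, body):
        with self._cond:                     # only one process inside at a time
            while not guard():               # if B is false, join the wait queue
                self._cond.wait()
            result = body()                  # S runs with exclusive access
            self._cond.notify_all()          # let waiters re-evaluate their guards
            return result

# Bounded-buffer driver: the region replaces explicit semaphores entirely.
N = 3
buf = []
region = Region()

def producer(items):
    for item in items:
        region.run(lambda: len(buf) < N, lambda: buf.append(item))

def consumer(count, out):
    for _ in range(count):
        out.append(region.run(lambda: len(buf) > 0, lambda: buf.pop(0)))

out = []
p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20, out))
p.start(); c.start()
p.join(); c.join()
print(out == list(range(20)))  # True: bounded at 3 items, delivered in order
```

The re-check loop (`while not guard()`) matches the semantics described above: a process whose condition B is false is parked, and every exit from the region wakes the waiters so they can test B again.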