2/27/24
Operating Systems
Inter-Process Communications
Dr. Youssef Iraqi
College of Computing
Email: [email protected]
Interprocess Communication
• Why interprocess communication (IPC)?
– processes frequently need to communicate with each other
•ex: in a shell pipe, the output of process A must be passed (as input) to
process B
• Processes communicate by sharing common storage
– shared main-memory locations
– shared buffers
– shared files
• Because storage is shared, IPC raises significant
problems that must be addressed by the OS
Interprocess Communication
Example: Print Spooler
– spooler directory
•holds file names to be printed
•structured as a queue
– shared variables
•out - points to next file to be printed
•in - points to next free slot in directory
– printer daemon
•process - periodically checks to see if there are files to be
printed
–if yes, it prints them and updates variable out
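A minimal C sketch of the enqueue step described above; the array spooler_dir and the helper enqueue_job() are illustrative names, not from the slides. It shows where an untimely preemption corrupts the shared variable in:

#define DIR_SIZE 100

char *spooler_dir[DIR_SIZE];   /* shared: names of files waiting to print */
int in  = 0;                   /* shared: next free slot                  */
int out = 0;                   /* shared: next file to be printed         */

/* Called by any process that wants to queue a file for printing. */
void enqueue_job(char *file)
{
    int next_free = in;            /* read the shared variable            */
    spooler_dir[next_free] = file; /* store the file name in that slot    */
    in = next_free + 1;            /* update the shared variable          */
    /* If a process is preempted between reading 'in' and updating it,
       a second process reads the same value of 'in', both use the same
       slot, and one print job is silently lost.                          */
}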
Interprocess Communication
Race Conditions
Two processes want to access shared memory at the same time
Race Conditions
• A race condition may occur when
– two or more processes are reading or writing shared
data and the final result depends on exactly who runs when
• How to avoid race conditions?
– need for mutual exclusion
•a way of making sure that if one process is using shared storage,
other processes are excluded from doing the same
– only necessary for the sections of code that access shared
storage, called critical sections or critical regions
Critical Regions
4 conditions to provide efficient mutual exclusion
1. No two processes simultaneously in critical region
2. No assumptions made about speeds or numbers of CPUs
3. No process running outside its critical region may block
another process
4. No process should have to wait forever to enter its critical region
Critical Regions
Mutual exclusion using critical regions
Mutual Exclusion
Approaches for achieving mutual exclusion
– used to solve race conditions
– some do not completely eliminate race conditions
1. Mutual Exclusion with Busy Waiting Methods
•Disabling Interrupts
•Lock variables
•Strict Alternation
•Peterson’s Solution
•Test & Set Lock
2. Sleep and Wakeup
3. Semaphores
4. Message Passing
Mutual Exclusion with Busy Waiting
Technique 1: Disabling Interrupts
– disable all interrupts upon entering a
critical region (CR)
•re-enable interrupts after exiting the critical region
– unattractive method
•unwise to allow user processes to disable all interrupts
•what happens if a user forgets to re-enable interrupts after
exiting the CR?
– will not work in a multi-CPU system
•disabling interrupts affects only the CPU that executed the disable
instruction
– sometimes useful within kernel
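A minimal sketch of this technique, assuming a single x86 CPU running in kernel mode; cli and sti are the x86 instructions that clear and set the interrupt flag (user-mode code is normally not allowed to execute them):

/* Kernel-mode only: between cli and sti, no interrupt (and hence no
   clock-driven context switch) can occur on this CPU.                  */
void update_shared_data(void)
{
    __asm__ volatile ("cli");       /* disable interrupts                */
    /* ... critical region: touch shared kernel data structures ...      */
    __asm__ volatile ("sti");       /* re-enable interrupts              */
}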
Mutual Exclusion with Busy Waiting
Technique 2: Lock Variables
– use a single shared ‘lock’ variable
– when process wants to enter its CR, it first tests lock (which is
initially = 0)
– if lock = 0, process enters CR and sets lock to 1
– if lock = 1, process waits (keeps testing lock) until lock = 0
– technique has a flaw - rare situation but may happen
•if lock = 0, process A decides to enter the CR but is suspended before it
sets lock to 1
•process B may enter (since lock is still 0)
•then process A wakes up and enters too (it has already seen lock = 0)
•two processes in the CR!
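A sketch of the flawed lock-variable idea, to make the window for the race explicit (function names are illustrative):

int lock = 0;                       /* shared: 0 = free, 1 = taken         */

void enter_region(void)
{
    while (lock != 0)
        ;                           /* busy wait until the lock looks free */
    /* <-- a process can be suspended right here, after seeing lock == 0   */
    lock = 1;                       /* ...so two processes may both get in */
}

void leave_region(void)
{
    lock = 0;                       /* release the lock                    */
}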
Mutual Exclusion with Busy Waiting
Technique 3: Strict Alternation
Strict alternation: a proposed solution to the critical-region problem.
(a) Process 0. (b) Process 1.
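A sketch of the kind of code the figure refers to; critical_region() and noncritical_region() are placeholders for the work done inside and outside the CR:

int turn = 0;                       /* shared: whose turn it is to enter   */

void process_0(void)                /* (a) Process 0                       */
{
    while (1) {
        while (turn != 0)
            ;                       /* busy wait for our turn              */
        critical_region();
        turn = 1;                   /* hand the turn to process 1          */
        noncritical_region();
    }
}

void process_1(void)                /* (b) Process 1                       */
{
    while (1) {
        while (turn != 1)
            ;                       /* busy wait for our turn              */
        critical_region();
        turn = 0;                   /* hand the turn back to process 0     */
        noncritical_region();
    }
}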
Mutual Exclusion with Busy Waiting
Technique 3: Strict Alternation
– continuously test a variable to see if your turn has
come
– not a good solution
•what happens if process 0 needs to enter CR frequently but
not process 1?
•violates condition 3 - process 0 may be blocked by a process not
in its CR
Mutual Exclusion with Busy Waiting
Technique 4: Peterson’s Solution
– modified strict alternation - eliminates drawbacks
•process not in CR cannot block another process
– when a process wants to enter its CR, it first calls
enter_region()
•passes its process number as an argument
•if it is not allowed to enter, it does not return from
enter_region()
– before a process leaves its CR, it calls leave_region()
Mutual Exclusion with Busy Waiting
Peterson's solution for achieving mutual exclusion
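A sketch along the lines of the figure (the classic two-process version of Peterson's algorithm):

#define FALSE 0
#define TRUE  1
#define N     2                      /* number of processes                 */

int turn;                            /* whose turn is it?                   */
int interested[N];                   /* all values initially FALSE          */

void enter_region(int process)       /* process is 0 or 1                   */
{
    int other = 1 - process;         /* number of the other process         */
    interested[process] = TRUE;      /* show that we want to enter          */
    turn = process;                  /* set the turn flag                   */
    while (turn == process && interested[other] == TRUE)
        ;                            /* busy wait until it is safe to enter */
}

void leave_region(int process)
{
    interested[process] = FALSE;     /* indicate departure from the CR      */
}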
Mutual Exclusion with Busy Waiting
Technique 5: Test and Set Lock (TSL)
– hardware-based solution
– uses a special instruction Test-and-Set-Lock
•reads contents of memory word into register
•then stores a nonzero value at that memory location
– since interrupts only occur at the end of an instruction cycle,
TSL is ‘atomic’ and cannot be interrupted
– TSL locks the memory bus
•other CPUs cannot access memory until the operation is complete
Mutual Exclusion with Busy Waiting
Entering and leaving a critical region using the
TSL instruction
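A C sketch of the same idea; the GCC/Clang builtin __sync_lock_test_and_set is used here as a stand-in for the hardware TSL instruction:

int lock = 0;                             /* shared: 0 = free, 1 = taken    */

void enter_region(void)
{
    /* Atomically copy lock into a register and set lock to 1;
       keep trying while the old value was nonzero (already locked).        */
    while (__sync_lock_test_and_set(&lock, 1) != 0)
        ;                                 /* busy wait                      */
}

void leave_region(void)
{
    __sync_lock_release(&lock);           /* atomically store 0 in lock     */
}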
Mutual Exclusion with Busy Waiting
Peterson’s solution and TSL are correct
– But both require busy waiting
•when process wants to enter CR, checks if allowed to enter
•if not, process executes loop waiting until it is
• not only is busy waiting wasteful of CPU time, it also
causes a priority inversion problem
– low priority process L is in CR
– high priority process H is busy waiting
– But since L is never scheduled while H is running, L never
gets the chance to leave its critical region, so H loops forever.
Sleep and Wakeup
• Uses primitives that block instead of busy waiting
– block - meaning process goes to blocked state
– busy waiting - process is in ready or running states
• SLEEP
– system call that causes caller to block
– process suspended until another process wakes it up
• WAKEUP
– system call that wakes up or unblocks a process
• Simple implementation of sleep/wakeup still
produces race conditions
Producer-consumer problem
(bounded buffer)
• Two processes share a fixed-size buffer
– producer puts information into buffer
– consumer takes information out of the buffer
Producer-consumer problem
(bounded buffer)
• What happens when the producer wants to put an item
in the buffer but the buffer is full?
– producer goes to sleep
– awakened by the consumer when the consumer has removed an item
• What happens when the consumer wants to remove an
item but the buffer is empty?
– consumer goes to sleep
– awakened by the producer when the producer has added an item
Sleep and Wakeup
The producer-consumer problem with a fatal race condition
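A sketch of the kind of code the figure refers to; produce_item(), insert_item(), remove_item(), consume_item(), sleep() and wakeup() are placeholders. The fatal race is that the test of count and the call to sleep() are not atomic:

#define N 100                              /* slots in the buffer            */
int count = 0;                             /* items currently in the buffer  */

void producer(void)
{
    int item;
    while (1) {
        item = produce_item();
        if (count == N) sleep();           /* buffer full: go to sleep       */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);  /* buffer was empty: wake consumer */
    }
}

void consumer(void)
{
    int item;
    while (1) {
        if (count == 0) sleep();           /* buffer empty: go to sleep      */
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer); /* buffer was full: wake producer */
        consume_item(item);
    }
}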
Sleep and Wakeup
• Race condition occurs when:
– buffer is empty. Consumer runs and reads count = 0
– scheduler suspends consumer and runs producer
– producer puts item in buffer and increments count to 1
– since count = 1, producer sends WAKEUP to consumer
•consumer is not asleep, so WAKEUP is lost
– scheduler suspends producer and runs consumer
– consumer tests previously read count
•count = 0 for consumer, so consumer sleeps
– scheduler resumes producer
•producer will eventually fill buffer and sleeps
– now both producer and consumer sleep forever!
• Better solution - Semaphores
Semaphores
• Dijkstra (1965) recognized the problem with the previous
example
– WAKEUP is lost because consumer is awake
• use an integer value called semaphore
– count number of wakeups saved for future use
– can have a value of 0 --- no wakeups saved
– can have a positive value --- number of wakeups pending
• two operations: DOWN and UP (P and V originally)
– both are atomic- indivisible operations
– DOWN: if semaphore > 0 then decrement semaphore
else process sleeps; decrement semaphore (when awakened)
– UP: increment semaphore
if any process sleeping then wakeup one process at random
Producer-consumer using Semaphores
• uses 3 semaphores
– full counts the number of full slots (initially 0)
– empty counts the number of empty slots (initially N)
– mutex ensures mutual exclusion for access to the buffer
(initially 1)
• mutex is a binary semaphore
– either 0 or 1
– guarantees only one process can enter the CR at a given time
– down(mutex) before entering the CR
if mutex = 1 then set mutex = 0 and enter
else block; set mutex = 0 when awakened and enter
– up(mutex) when leaving the CR
set mutex = 1
if a process is blocked then unblock it
Semaphores
The producer-consumer problem using semaphores
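A sketch of the standard semaphore solution the figure refers to; down()/up() are the semaphore operations defined above, and the item-handling helpers are placeholders:

#define N 100                         /* number of slots in the buffer      */
typedef int semaphore;                /* semaphores are a special kind of int */

semaphore mutex = 1;                  /* controls access to the buffer      */
semaphore empty = N;                  /* counts the empty slots             */
semaphore full  = 0;                  /* counts the full slots              */

void producer(void)
{
    int item;
    while (1) {
        item = produce_item();
        down(&empty);                 /* one empty slot fewer               */
        down(&mutex);                 /* enter critical region              */
        insert_item(item);
        up(&mutex);                   /* leave critical region              */
        up(&full);                    /* one full slot more                 */
    }
}

void consumer(void)
{
    int item;
    while (1) {
        down(&full);                  /* one full slot fewer                */
        down(&mutex);                 /* enter critical region              */
        item = remove_item();
        up(&mutex);                   /* leave critical region              */
        up(&empty);                   /* one empty slot more                */
        consume_item(item);
    }
}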
Semaphores
• in solution to producer-consumer problem,
semaphores were used for two distinct purposes
– mutual exclusion
– synchronization
• mutual exclusion (ex: mutex)
– guarantees that only one process at a time will be reading or
writing the buffer and associated variables
• synchronization (ex: empty and full)
– guarantees certain event sequences do or do not occur
– producer stops running when buffer is full
– consumer stops running when buffer is empty
Mutexes
Monitors
Monitor: a high-level synchronization primitive
– A collection of procedures, variables, and data structures
– processes may call the procedures in a monitor
– only one process can be active in a monitor at any instant.
Message Passing
• semaphores were designed for mutual exclusion and
synchronization in
– single CPU systems
– multi CPU systems with shared memory
• in distributed systems
– multiple CPUs each with private memory
– IPC does not use shared storage
– use of message passing for IPC
• two message-passing primitives (system calls)
– send (destination, &message)
– receive (source, &message)
•if no message available, receiver could block until one arrives
Message Passing
• Design issues:
– Dealing with lost messages
– Detecting duplicates
– Naming processes
– Authentication
– Performance
Producer-consumer using Message Passing
• no shared memory
• message sent but not yet received is buffered by OS
• consumer starts by sending N empty messages
• when producer has an item to give
– it takes an empty message and sends back a full one
• when consumer receives a full message
– it processes it and sends back an empty message
• if producer works faster than consumer
– producer may block (on receive) waiting for empty message
• if consumer works faster than producer
– consumer may block waiting for full message
Message Passing
The producer-consumer problem with N messages
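A sketch of the scheme described on the previous slide; send()/receive() are the primitives introduced earlier, while the message type, build_message() and extract_item() are placeholders:

#define N 100                             /* number of message slots         */

void producer(void)
{
    int item;
    message m;                            /* message buffer                  */
    while (1) {
        item = produce_item();
        receive(consumer, &m);            /* wait for an empty message       */
        build_message(&m, item);          /* construct a full message        */
        send(consumer, &m);               /* send it to the consumer         */
    }
}

void consumer(void)
{
    int item, i;
    message m;
    for (i = 0; i < N; i++)
        send(producer, &m);               /* prime the pump: N empty messages */
    while (1) {
        receive(producer, &m);            /* wait for a full message         */
        item = extract_item(&m);          /* take the item out of it         */
        send(producer, &m);               /* send the empty message back     */
        consume_item(item);
    }
}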
Message Passing Implementation
• Asynchronous message-passing
– send does not block, receive blocks
– need for a buffering mechanism - called a mailbox
•a mailbox is a place to buffer a certain number of messages
• Synchronous message-passing
– both send and receive block until there is a rendezvous
– no need for buffering
•when rendezvous occurs, send sends information to receive
• IPC between user processes in Unix can be via pipes
– a pipe is effectively a mailbox
– only difference: a pipe does not preserve message boundaries
•ex: if process A writes 10 messages of 100 bytes to a pipe and process
B reads 1000 bytes from the pipe, B gets all 10 messages at once (see sketch below)
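A small POSIX sketch of that example: the parent writes two "messages" into a pipe, and the child receives them as one unstructured byte stream because the pipe keeps no message boundaries (the sleep() only makes the timing deterministic):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                              /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                     /* child: the reader                   */
        close(fd[1]);
        sleep(1);                          /* let the parent finish both writes   */
        int n = read(fd[0], buf, sizeof(buf)); /* one read returns both writes    */
        printf("read %d bytes: %.*s\n", n, n, buf);
        return 0;
    }
    close(fd[0]);                          /* parent: the writer                  */
    write(fd[1], "hello ", 6);             /* first "message"                     */
    write(fd[1], "world", 5);              /* second "message"                    */
    close(fd[1]);
    wait(NULL);                            /* wait for the child to finish        */
    return 0;
}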
The Readers and Writers Problem
• Ex: access to a database by many processes
– readers: many can read at same time
– writers: if a process is changing (writing) the database, no other
process should have access to it
• Solution using semaphores
– one binary semaphore to allow writer to have exclusive access
to database
– as long as there is at least one reader, a writer must not access the
database
•use a shared counter variable to count the number of readers
•need another semaphore to get exclusive access to the shared counter
variable
The Readers and Writers Problem
A solution to the readers and writers problem
(assumes readers have priority over writers!)
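A sketch of the semaphore solution the figure refers to (readers have priority); down()/up() are the semaphore operations from earlier, and the database helpers are placeholders:

typedef int semaphore;
semaphore mutex = 1;                  /* protects the reader count rc          */
semaphore db    = 1;                  /* protects access to the database       */
int rc = 0;                           /* number of processes currently reading */

void reader(void)
{
    while (1) {
        down(&mutex);                 /* get exclusive access to rc            */
        rc = rc + 1;                  /* one more reader                       */
        if (rc == 1) down(&db);       /* the first reader locks out writers    */
        up(&mutex);
        read_data_base();             /* access the data                       */
        down(&mutex);
        rc = rc - 1;                  /* one reader fewer                      */
        if (rc == 0) up(&db);         /* the last reader lets writers in       */
        up(&mutex);
        use_data_read();              /* noncritical work                      */
    }
}

void writer(void)
{
    while (1) {
        think_up_data();              /* noncritical work                      */
        down(&db);                    /* get exclusive access to the database  */
        write_data_base();            /* update the data                       */
        up(&db);                      /* release exclusive access              */
    }
}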