
Chapter 5
Concurrency: Mutual Exclusion and Synchronization

Operating Systems: Internals and Design Principles, Seventh Edition
By William Stallings

“Designing correct routines …”
Multiple Processes
 Operating System design is
concerned with the
management of processes and
threads:
 Multiprogramming
 Multiprocessing
 Distributed Processing
Concurrency
Arises in Three Different Contexts:

 Multiple Applications: multiprogramming was invented to allow processing time to be shared among active applications
 Structured Applications: an extension of modular design and structured programming
 Operating System Structure: the OS itself is implemented as a set of processes or threads
Concurrency & Shared Data

 Concurrent processes may share data to support communication, information exchange, ...
 Threads in the same process can share a global address space
 Concurrent sharing may cause problems
  For example: lost updates
Concurrency: Key Terms

Table 5.1 Some Key Terms Related to Concurrency
Principles of Concurrency

 Interleaving and overlapping
  can be viewed as examples of concurrent processing
  both present the same problems
 In multiprogramming, the relative speed of execution of processes cannot be predicted; it depends on:
  the activities of other processes
  the way the OS handles interrupts
  the scheduling policies of the OS
Difficulties of
Concurrency
 Sharing of global resources
 Difficult for the OS to manage the
allocation of resources optimally
 Difficult to locate programming errors
as results are not deterministic and
reproducible
Race Condition
 Occurs when multiple processes
or threads read and write shared
data items
 The final result depends on the
order of execution
 the “loser” of the race is the
process that updates last and will
determine the final value of the
variable
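The lost-update race can be made concrete by writing out one bad interleaving explicitly. The sketch below is a deterministic simulation (straight-line Python, no real threads) of two processes that each try to increment a shared variable; P2 reads before P1 has written back:

```python
# Deterministic simulation of a lost update: each "process" wants to
# increment the shared variable, but P2 reads before P1 writes back.
shared = 0

p1_local = shared        # P1 reads 0, then is preempted
p2_local = shared        # P2 runs and reads the same stale value 0
shared = p1_local + 1    # P1 resumes and writes back 1
shared = p2_local + 1    # P2 overwrites with 1 -- P1's update is lost

print(shared)            # 1, not the expected 2
```

P2 is the "loser" of the race: it updates last, and its stale value determines the final result.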
Operating System
Concerns
 Design and management issues raised by the
existence of concurrency:
 The OS must:
 be able to keep track of various processes
 allocate and de-allocate resources for each
active process
 protect the data and physical resources of
each process against interference by other
processes
 ensure that the processes and outputs are
independent of the processing speed
Process Interaction
Resource
Competition
 Concurrent processes come into
conflict when they use the same
resource (competitively or shared)
 for example: I/O devices, memory, processor
time, clock
 Three control problems must be faced
 Need for mutual exclusion
 Deadlock
 Starvation
 Sharing processes also need to
address coherence
Need for Mutual
Exclusion
 If there is no controlled access to shared data, processes or threads may get an inconsistent view of this data
 The result of concurrent execution will
depend on the order in which
instructions are interleaved.
 Errors are timing dependent and usually
not reproducible.
A Simple Example

 Assume P1 and P2 are executing this code and share the variable a:

    static char a;

    void echo()
    {
        cin >> a;
        cout << a;
    }

 Processes can be preempted at any time.
 Assume P1 is preempted after the input statement, and P2 then executes entirely.
 The character echoed by P1 will be the one read by P2!
What’s the Problem?
 This is an example of a race condition
 Individual processes (threads) execute
sequentially in isolation, but concurrency
causes them to interact.
 We need to prevent concurrent execution
by processes when they are changing the
same data. We need to enforce mutual
exclusion.
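How enforcing mutual exclusion repairs such a race can be sketched with a lock around the read-modify-write sequence. This is a Python illustration (threading.Lock stands in for whatever mechanism the OS actually provides):

```python
import threading

count = 0
lock = threading.Lock()

def increment(n):
    global count
    for _ in range(n):
        with lock:           # entry section: acquire the lock
            tmp = count      # critical section: read,
            count = tmp + 1  # modify, write back
        # exit section: 'with' releases the lock

t1 = threading.Thread(target=increment, args=(10000,))
t2 = threading.Thread(target=increment, args=(10000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(count)   # 20000 -- no updates are lost
```

Without the lock, the same read-modify-write sequence could interleave and lose updates exactly as in the echo example.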
The Critical Section Problem

 When a process executes code that manipulates shared data (or resources), we say that the process is in its critical section (CS) for that shared data
 We must enforce mutual exclusion on the execution of critical sections.
 Only one process at a time can be in its CS (for that shared data or resource).
The Critical Section
Problem
 Enforcing mutual exclusion guarantees
that related CS’s will be executed serially
instead of concurrently.
 The critical section problem is how to
provide mechanisms to enforce mutual
exclusion so the actions of concurrent
processes won’t depend on the order in
which their instructions are interleaved
The Critical Section Problem

 Processes/threads must request permission to enter a CS, and signal when they leave the CS.
 Program structure:
 entry section: requests entry to CS
 exit section: notifies that CS is completed
 remainder section (RS): code that does not
involve shared data and resources.
 The CS problem exists on multiprocessors as well as on uniprocessors.
Mutual Exclusion and
Data Coherence
 Mutual Exclusion ensures data coherence
if properly used.
 Critical Resource (CR) - a shared resource
such as a variable, file, or device
 Data Coherence:
 The final value or state of a CR shared by concurrently
executing processes is the same as the final value or state
would be if each process executed serially, in some order.
Deadlock and
Starvation
 Deadlock: two or more processes are
blocked permanently because each is
waiting for a resource held in a mutually
exclusive manner by one of the others.
 Starvation: a process is repeatedly denied
access to some resource which is
protected by mutual exclusion, even
though the resource periodically becomes
available.
Mutual Exclusion

Figure 5.1 Illustration of Mutual Exclusion


Requirements for Mutual Exclusion

 Mutual Exclusion: must be enforced
 Non-interference: a process that halts must not interfere with other processes
 No deadlock or starvation
 Progress: a process must not be denied access to a critical section when there is no other process using it
 No assumptions are made about relative
process speeds or number of processes
 A process remains inside its critical section for a
finite time only
Mutual Exclusion: Hardware Support

• Interrupt Disabling
  – works on a uniprocessor system
  – disabling interrupts guarantees mutual exclusion
• Disadvantages:
  – the efficiency of execution could be noticeably degraded
  – this approach will not work in a multiprocessor architecture
Mutual Exclusion:
Hardware Support
 Special Machine Instructions
 Compare&Swap Instruction
 also called a “compare and exchange
instruction”
 a compare is made between a
memory value and a test value
 if the old memory value = test value,
swap in a new value to the memory
location
 always returns the old memory value
Mutual Exclusion: Hardware Support

 Compare&Swap Instruction
 Pseudo-code definition of the hardware instruction:

    int compare_and_swap (int *word, int test_val, int new_val)
    {
        int old_val = *word;
        if (old_val == test_val)
            *word = new_val;
        return old_val;   /* always returns the old memory value */
    }
Compare and Swap Instruction

In Figure 5.2, each process calls compare_and_swap with word = bolt, test_val = 0, new_val = 1:

 If bolt is 0 when the C&S is executed, the while condition is false and P enters its critical section (leaving bolt = 1).
 If bolt is 1 when the C&S executes, P continues to execute the while loop. It is busy waiting (or spinning).

Figure 5.2 Hardware Support for Mutual Exclusion
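The bolt protocol above can be simulated in a few lines. Note the hedge: real hardware executes compare-and-swap as one atomic instruction; here that atomicity is imitated with an internal lock, and the memory word is modeled as a one-element list:

```python
import threading

_atomic = threading.Lock()   # stands in for the hardware's atomicity

def compare_and_swap(word, test_val, new_val):
    """word is a one-element list acting as a memory cell.
    Always returns the old memory value."""
    with _atomic:
        old_val = word[0]
        if old_val == test_val:
            word[0] = new_val
        return old_val

bolt = [0]
# First caller finds bolt == 0: C&S returns 0 and sets bolt to 1 -> enters CS
assert compare_and_swap(bolt, 0, 1) == 0 and bolt[0] == 1
# Second caller finds bolt == 1: C&S returns 1, bolt unchanged -> keeps spinning
assert compare_and_swap(bolt, 0, 1) == 1 and bolt[0] == 1
compare_and_swap(bolt, 1, 0)   # leaving the CS resets bolt to 0
```

A waiting process would wrap the call in `while compare_and_swap(bolt, 0, 1) == 1: pass` — the busy-wait loop of Figure 5.2.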
Exchange Instruction

Figure 5.2 Hardware Support for Mutual Exclusion
Special Machine Instruction: Advantages
 Applicable to any number of
processes on either a single
processor or multiple processors
sharing main memory
 Simple and easy to verify
 It can be used to support multiple
critical sections; each critical section
can be defined by its own variable
Special Machine Instruction: Disadvantages

 Busy-waiting is employed: while a process is waiting for access to a critical section it continues to consume processor time
 Starvation is possible when a process leaves a critical section and more than one process is waiting
 Deadlock is possible if priority-based scheduling is used
Common Concurrency Mechanisms
Semaphore

A variable that has an integer value upon which only three operations are defined:
1) It may be initialized to a non-negative integer value
2) The semWait operation decrements the value
3) The semSignal operation increments the value

There is no way to inspect or manipulate semaphores other than these three operations.
Consequences

 There is no way to know before a process decrements a semaphore whether it will block or not
 There is no way to know which process will continue immediately on a uniprocessor system when two processes are running concurrently
 You don't know whether another process is waiting, so the number of unblocked processes may be zero or one
Semaphore Primitives
Binary Semaphore
Primitives
Strong/Weak
Semaphores
 A queue is used to hold processes waiting on the
semaphore
Strong Semaphores
 the process that has been blocked the
longest is released from the queue first
(FIFO)
Weak Semaphores
 the order in which processes are removed
from the queue is not specified
Example of Semaphore Mechanism

Mutual Exclusion: Shared Data Protected by a Semaphore
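Protecting shared data with a semaphore initialized to 1 can be sketched directly: threading.Semaphore's acquire and release play the roles of semWait and semSignal. A minimal sketch, with the shared-balance scenario invented for illustration:

```python
import threading

s = threading.Semaphore(1)   # acquire ~ semWait(s), release ~ semSignal(s)
balance = 0                  # shared data protected by s

def deposit(amount, times):
    global balance
    for _ in range(times):
        s.acquire()          # semWait(s): enter critical section
        balance += amount    # only one thread touches balance at a time
        s.release()          # semSignal(s): leave critical section

threads = [threading.Thread(target=deposit, args=(1, 5000)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)   # 15000
```

Because the semaphore's value never exceeds 1 here, it behaves as a binary semaphore (mutex).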
Producer/Consumer Problem

General Situation:
 one or more producers are generating data and placing these in a buffer
 a single consumer is taking items out of the buffer one at a time
 only one producer or consumer may access the buffer at any one time

The Problem:
 ensure that the producer can't add data into a full buffer and the consumer can't remove data from an empty buffer
Buffer Structure

Incorrect Solution

Figure 5.9 An Incorrect Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores
Possible Solution

Correct Solution

Figure 5.10 A Correct Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores
Solution Using Semaphores

Finite Circular Buffer Solution Using Semaphores

Figure 5.13 A Solution to the Bounded-Buffer Producer/Consumer Problem Using Semaphores
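The bounded-buffer solution translates almost line for line into Python. Following the usual three-semaphore structure (a mutex s, n counting items, e counting empty slots — the names are taken from the standard presentation), with one producer and one consumer assumed:

```python
import threading
from collections import deque

SIZE = 4
buffer = deque()                # stands in for the circular buffer
s = threading.Semaphore(1)      # mutual exclusion on the buffer
n = threading.Semaphore(0)      # number of items in the buffer
e = threading.Semaphore(SIZE)   # number of empty slots
consumed = []

def producer(items):
    for item in items:
        e.acquire()             # semWait(e): wait for an empty slot
        s.acquire()             # semWait(s)
        buffer.append(item)     # append(): place item in the buffer
        s.release()             # semSignal(s)
        n.release()             # semSignal(n): one more item available

def consumer(count):
    for _ in range(count):
        n.acquire()             # semWait(n): wait for an item
        s.acquire()             # semWait(s)
        consumed.append(buffer.popleft())  # take(): remove oldest item
        s.release()             # semSignal(s)
        e.release()             # semSignal(e): one more empty slot

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
assert consumed == items
```

The producer blocks on e when the buffer is full; the consumer blocks on n when it is empty — exactly the two conditions the problem statement requires.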


Implementation of
Semaphores
 Imperative that the semWait and
semSignal operations be
implemented as atomic primitives
 Can be implemented in hardware or
firmware
 Software schemes such as Dekker’s or
Peterson’s algorithms can be used
 Use one of the hardware-supported
schemes for mutual exclusion
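Peterson's algorithm, mentioned above, achieves mutual exclusion for two processes in pure software. A sketch follows; it is illustrative only — it works here because CPython's interpreter executes bytecodes atomically and in a sequentially consistent order, whereas on real multiprocessor hardware memory barriers would be required:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often to exercise the protocol

flag = [False, False]   # flag[i]: process i wants to enter its CS
turn = [0]              # whose turn it is to defer
count = 0

def process(i, iterations):
    global count
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                          # announce intent to enter
        turn[0] = other                         # give the other side priority
        while flag[other] and turn[0] == other:
            pass                                # busy wait
        count += 1                              # critical section
        flag[i] = False                         # exit section

t0 = threading.Thread(target=process, args=(0, 1000))
t1 = threading.Thread(target=process, args=(1, 1000))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)   # 2000
```

Note the busy waiting: like the machine-instruction schemes, a waiting process burns processor time, which is why OS-level primitives such as semaphores are usually preferred.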
Review

 Concurrent processes, threads
 Access to shared data/resources
 Need to enforce mutual exclusion
 Hardware mechanisms have limited usefulness
 Semaphores: OS mechanism for mutual exclusion & other synchronization issues
  Standard (counting) semaphore
  Binary semaphore
  Producer/consumer problem
Monitors

 Programming language construct that provides equivalent functionality to that of semaphores and is easier to control
 Implemented in a number of programming languages
  including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java
 Has also been implemented as a program library
 Software module consisting of one or more procedures, an initialization sequence, and local data
Monitor Characteristics

 Local data variables are accessible only by the monitor's procedures and not by any external procedure
 A process enters the monitor by invoking one of its procedures
 Only one process may be executing in the monitor at a time
Synchronization

 Achieved by the use of condition variables that are contained within the monitor and accessible only within the monitor
 Condition variables are operated on by two
functions:
 cwait(c): suspend execution of the calling
process on condition c
 csignal(c): resume execution of some process
blocked after a cwait on the same condition
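Python's threading.Condition packages a lock with a condition variable, which is close in spirit to a monitor. A bounded-buffer sketch in that style follows — with two caveats: notify gives Mesa-style "signal and continue" semantics rather than the classical csignal, so waits are re-checked in a while loop, and a single condition variable doubles for both notfull and notempty here:

```python
import threading
from collections import deque

SIZE = 2
buffer = deque()
monitor = threading.Condition()   # the monitor's lock + condition variable

def append(item):
    with monitor:                  # only one thread inside at a time
        while len(buffer) == SIZE:
            monitor.wait()         # cwait(notfull): suspend until room
        buffer.append(item)
        monitor.notify_all()       # csignal(notempty): wake a consumer

def take():
    with monitor:
        while not buffer:
            monitor.wait()         # cwait(notempty)
        item = buffer.popleft()
        monitor.notify_all()       # csignal(notfull)
        return item

results = []
def consumer(count):
    for _ in range(count):
        results.append(take())

c = threading.Thread(target=consumer, args=(5,))
c.start()
for i in range(5):
    append(i)                      # main thread acts as the producer
c.join()
assert results == [0, 1, 2, 3, 4]
```

The `with monitor:` blocks enforce the "only one process executing in the monitor" rule automatically, which is what makes monitors easier to get right than raw semaphores.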
Structure of a Monitor

Figure 5.15 Structure of a Monitor


Bounded-Buffer Problem Solution Using a Monitor

Figure 5.16 A Solution to the Bounded-Buffer Producer/Consumer Problem Using a Monitor


Message Passing

 When processes interact with one another, two fundamental requirements must be satisfied:
  synchronization: to enforce mutual exclusion
  communication: to exchange information
 Message Passing is one approach to providing both of these functions
  works with distributed systems and with shared-memory multiprocessor and uniprocessor systems
Message Passing
 The actual function is normally provided in
the form of a pair of primitives:
send (destination, message)
receive (source, message)
 A process sends information in the form of a
message to another process designated by
a destination
 A process receives information by executing
the receive primitive, indicating the source
and the message
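The send/receive pair can be sketched on top of per-process message queues; queue.Queue blocks on get, giving nonblocking-send/blocking-receive semantics. The process names "P1" and "P2" are invented for illustration, and receive here names the receiving process's own queue rather than a sender:

```python
import queue
import threading

# One message queue per process (names are hypothetical)
mailboxes = {"P1": queue.Queue(), "P2": queue.Queue()}

def send(destination, message):
    mailboxes[destination].put(message)   # nonblocking send

def receive(process):
    """Blocking receive: next message addressed to this process."""
    return mailboxes[process].get()       # blocks until a message arrives

got = []
def worker():                             # plays the role of process P2
    got.append(receive("P2"))             # blocks until P1 sends

t = threading.Thread(target=worker)
t.start()
send("P2", "hello")                       # P1 sends to P2
t.join()
assert got == ["hello"]
```

Because the receiver blocks until a message is present, the message itself carries the synchronization — no separate mutual-exclusion mechanism is needed.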
Message Passing

Table 5.5 Design Characteristics of Message Systems for Interprocess Communication and Synchronization
Synchronization

 Communication of a message between two processes implies synchronization between the two: the receiver cannot receive a message until it has been sent by another process
 When a receive primitive is executed in a process, there are two possibilities:
  if there is no waiting message, the process is blocked until a message arrives, or the process continues to execute, abandoning the attempt to receive
  if a message has previously been sent, the message is received and execution continues
Blocking Send,
Blocking Receive
 Both sender and receiver are blocked
until the message is delivered
 Sometimes referred to as a
rendezvous
 Allows for tight synchronization
between processes
Nonblocking Send

Nonblocking send, blocking receive
 sender continues on, but receiver is blocked until the requested message arrives
 most useful combination
 sends one or more messages to a variety of destinations as quickly as possible
 example: a service process that exists to provide a service or resource to other processes

Nonblocking send, nonblocking receive
 neither party is required to wait
Addressing

 Schemes for specifying processes in send and receive primitives fall into two categories: direct addressing and indirect addressing
Direct Addressing

 Send primitive includes a specific identifier of the destination process
 Receive primitive can be handled in one of two ways:
  require that the process explicitly designate a sending process
   effective for cooperating concurrent processes
  implicit addressing
   the source parameter of the receive primitive possesses a value returned when the receive operation has been performed
Indirect Addressing

 Messages are sent to a shared data structure consisting of queues that can temporarily hold messages
 Queues are referred to as mailboxes
 One process sends a message to the mailbox and the other process picks up the message from the mailbox
 Allows for greater flexibility in the use of messages
Indirect Process
Communication
General Message
Format
Mutual Exclusion
Message Passing
Example

Figure 5.21 A Solution to the Bounded-Buffer Producer/Consumer Problem Using Messages


Readers/Writers Problem

 A data area is shared among many processes
  some processes only read the data area (readers) and some only write to the data area (writers)
 Conditions that must be satisfied:
  1. any number of readers may simultaneously read the file
  2. only one writer at a time may write to the file
  3. if a writer is writing to the file, no reader may read it
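The readers-have-priority scheme of Figure 5.22 can be sketched with two semaphores: x protecting the readcount variable, and wsem excluding writers (acquired by the first reader in, released by the last reader out). The shared data and the value 42 are invented for illustration:

```python
import threading

x = threading.Semaphore(1)      # protects readcount
wsem = threading.Semaphore(1)   # held while any reader or a writer is active
readcount = 0
data = {"value": 0}             # the shared data area
seen = []

def reader():
    global readcount
    x.acquire()
    readcount += 1
    if readcount == 1:
        wsem.acquire()          # first reader locks writers out
    x.release()
    seen.append(data["value"])  # read the shared area (readers overlap freely)
    x.acquire()
    readcount -= 1
    if readcount == 0:
        wsem.release()          # last reader lets writers back in
    x.release()

def writer(v):
    wsem.acquire()              # writer has the area exclusively
    data["value"] = v
    wsem.release()

threads = [threading.Thread(target=reader) for _ in range(3)]
threads.append(threading.Thread(target=writer, args=(42,)))
for t in threads: t.start()
for t in threads: t.join()
assert data["value"] == 42 and readcount == 0
```

Each reader observes either the old or the new value, never a partial write — but note the scheme's weakness: a steady stream of readers can starve the writer, which motivates the writers-have-priority variant.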
Solution: Readers Have Priority

Figure 5.22 A Solution to the Readers/Writers Problem Using Semaphores: Readers Have Priority
Solution:
Writers Have Priority

Figure 5.23 A Solution to the Readers/Writers Problem Using Semaphore: Writers Have Priority
State of the Process Queues

Solution Using Message Passing

Figure 5.24 A Solution to the Readers/Writers Problem Using Message Passing


Messages Summary

 Useful for the enforcement of a mutual exclusion discipline

Summary

 Operating system themes are:
  Multiprogramming, multiprocessing, distributed processing
  Fundamental to these themes is concurrency
   issues of conflict resolution and cooperation arise
 Mutual Exclusion
  Condition in which there is a set of concurrent processes, only one of which is able to access a given resource or perform a given function at any time
  One approach involves the use of special-purpose machine instructions
 Semaphores
  Used for signaling among processes and can be readily used to enforce a mutual exclusion discipline
