PROCESS MANAGEMENT
Process concepts
Coordination of processes
Communication between processes
Process synchronization
Deadlock
Process
A process is a program in execution.
Each process has an address space, an instruction pointer, and
its own set of registers and stack.
A process may require resources such as the CPU,
main memory, files, and I/O devices.
The operating system uses a scheduler to decide
when to suspend the currently running process
and to select the next process to execute.
In the system, there are processes of the operating
system and the users.
Why multiple processes run simultaneously
Increase CPU utilization (raise the degree of
multiprogramming)
Increase the level of multitasking
Increase processing speed
Increase CPU utilization (raise the degree of
multiprogramming)
Most of the execution processes undergo many processing
cycles (using CPU) and I/O cycles (using I/O devices)
alternating as follows:
If there is only one process in the system, the CPU is
completely idle during that process's I/O cycles. The idea
behind increasing the number of processes is to utilize
the CPU: while process 1 performs I/O, the operating system
can give the CPU to process 2, and so on.
Increase the level of multitasking
Each process runs in turn for a very short time, giving
the impression that the system has many processes
executing simultaneously.
Increase processing speed
Some problems can be processed in parallel: if the work
is divided into several units that operate at the same
time, processing time is saved.
For example, consider the calculation of expression
value kq = a * b + c * d. If (a * b) and (c * d) are
performed concurrently, the processing time will be
shorter than a sequential execution.
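The decomposition can be sketched in Python (an illustration; the function name and values are mine). The two independent products run in separate threads; the final addition waits for both partial results. Note that in CPython the GIL limits true parallelism for CPU-bound arithmetic, so this only illustrates the structure of the decomposition.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_expression(a, b, c, d):
    # (a * b) and (c * d) are independent, so they can run concurrently;
    # the final addition must wait for both partial results.
    with ThreadPoolExecutor(max_workers=2) as pool:
        ab = pool.submit(lambda: a * b)
        cd = pool.submit(lambda: c * d)
        return ab.result() + cd.result()

kq = parallel_expression(2, 3, 4, 5)
print(kq)  # 2*3 + 4*5 = 26
```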
Thread
A process can create multiple threads.
Each thread performs a function; the threads execute
concurrently by sharing the CPU.
Threads in the same process share process address
space but have their own instruction pointers, set of
registers, and stack.
A thread can also create sub-threads, and it passes
through the same states as a process.
Communication between threads
Processes can communicate with each other only through
mechanisms provided by the operating system.
Threads communicate easily through the process's global variables.
Threads can be managed by the operating system alone,
or jointly by the operating system and the process.
Thread example
A process has three threads, each thread has its own stack
Thread implementation
Implementation in kernel space
Implementation in user space
Implementation in both kernel space and user space
Thread implementation in kernel space
The thread table is stored in kernel space.
Threads are scheduled by the operating system.
Thread implementation in user space
The thread table is stored in user space.
Threads are scheduled by the process itself.
Thread implementation in both kernel space and user space
One kernel-level thread manages several threads of the process.
Thread scheduling example
quantum of a process = 50 msec
quantum of a thread = 5 msec
Process A has 3 threads, process B has 4 threads.
Case 1: thread scheduling is done at the user-space level.
Case 2: thread scheduling is done at the kernel-space level.
Process states
New: The process is being created
Running: Instructions are being executed
Waiting: The process is waiting for some event to occur
Ready: The process is waiting to be assigned to a processor
Terminated: The process has finished execution
Processing mode of the process
Two processing modes:
• Privileged mode
• Non-privileged mode
Data structure of process control block
Process control block (PCB): is a
memory area that stores descriptive
information for the process:
Process identifier (1)
Process status (2)
Context of the process (3):
CPU status, Processor, Main memory,
Used resources, Created resources
Communication information (4):
Parent process, Child process, Priority
Statistical information (5)
Operations on processes
Create process (create)
Destroy process (destroy)
Suspend process (suspend)
Resume process (resume)
Change process priority (change priority)
Create process (create)
Assign an identifier to the new process
Insert the process into the system's management list
Determine the priority of the process
Create the PCB for the process
Allocate initial resources to the process
Destroy process (destroy)
Reclaim the system resources allocated to the process
Remove the process from all system management
lists
Destroy the PCB of the process
Allocate resources for the process
Resource management block
The objectives of the allocation technique:
Ensure that no more than the valid number of processes access a
non-shared resource concurrently.
Allocate the resources a process requires within an acceptable delay.
Optimize resource use.
Process Scheduling
The operating system coordinates processes through the
scheduler and the dispatcher.
The scheduler uses an appropriate algorithm to select the
next process to run.
The dispatcher saves the context of the suspended process
and assigns the CPU to the process selected by the
scheduler.
Coordination objectives:
Fairness
Efficiency
Reasonable response time
Turnaround time (total time spent in the system)
Maximum throughput
Process characteristics
I/O-bound processes (I/O-boundedness):
many CPU bursts, each short.
CPU-bound processes (CPU-boundedness):
few CPU bursts, each long.
Interactive or batch processing
Priority of the process
CPU time already used by the process
Time remaining until the process completes
Scheduling principles
Non-preemptive (exclusive) scheduling
A process that receives the CPU keeps it until it releases
it voluntarily, so the CPU can be monopolized
Not suitable for multi-user systems
Preemptive (non-exclusive) scheduling
Prevents any single process from monopolizing the CPU
Can lead to inconsistent access to shared data -> needs
appropriate synchronization methods to resolve
Complex in prioritizing
Incurs extra cost when switching the CPU between
processes
Timing of Scheduling
running -> blocked
for example, waiting for an I/O operation or waiting for a
child process to finish ...
running -> ready
for example, when an interrupt occurs.
blocked -> ready
for example, when an I/O operation is finished.
The process terminates.
A process with a higher priority appears
(this case applies only to preemptive scheduling).
Scheduling lists
Job list
Ready list
Waiting list
[Diagram: a process in the ready list is given the CPU; an
I/O request moves it to the waiting list of the resource;
when its time quantum expires it returns to the ready list;
a process waiting for an interrupt rejoins the ready list
when the interrupt occurs.]
Types of scheduling
Job scheduling
Selects which job is loaded into main memory for execution
Determines the degree of multiprogramming
Operates at a low frequency
Process scheduling
Selects a ready process (loaded in main memory, with
enough resources to run) and allocates the CPU to it.
Operates at a high frequency (about once every 100 ms).
Must use the most efficient algorithms.
Scheduling algorithms
FIFO algorithm
Round Robin algorithm
Priority algorithm
Shortest-job-first algorithm (SJF)
Multiple priority algorithm
Lottery Scheduling Strategy (Lottery)
FIFO algorithm
Non-preemptive scheduling

Process   Time of entering RL   Processing time
P1        0                     24
P2        1                     3
P3        2                     3

Gantt chart: P1 [0-24], P2 [24-27], P3 [27-30]
The waiting times are 0 for P1, (24 - 1) for P2,
and (27 - 2) for P3.
The average waiting time: (0 + 23 + 25) / 3 = 16
milliseconds.
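The waiting-time arithmetic above can be replayed with a short simulation (a sketch; the function name and tuple layout are mine):

```python
def fifo_waiting_times(procs):
    # procs: list of (name, arrival, burst), in arrival (queue) order
    time, waits = 0, {}
    for name, arrival, burst in procs:
        time = max(time, arrival)      # CPU idles until the process arrives
        waits[name] = time - arrival   # waiting time = start time - arrival
        time += burst                  # FIFO runs the process to completion
    return waits

w = fifo_waiting_times([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)])
print(w, sum(w.values()) / len(w))  # {'P1': 0, 'P2': 23, 'P3': 25} 16.0
```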
Round Robin algorithm
Process   Time of entering RL   Processing time
P1        0                     24
P2        1                     3
P3        2                     3

Quantum: 4 milliseconds
Gantt chart: P1 [0-4], P2 [4-7], P3 [7-10], P1 [10-14],
P1 [14-18], P1 [18-22], P1 [22-26], P1 [26-30]
The average waiting time: (6 + 3 + 5) / 3 = 4.67
milliseconds.
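The round-robin schedule can be reproduced with a queue-based simulation (a sketch; names are mine). Newly arrived processes are queued before a preempted process is re-queued, matching the schedule above:

```python
from collections import deque

def rr_waiting_times(procs, quantum):
    # procs: list of (name, arrival, burst), sorted by arrival time
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    finish, ready, time, i = {}, deque(), 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= time:  # admit arrivals
            ready.append(procs[i][0]); i += 1
        if not ready:                                  # CPU idle: jump ahead
            time = procs[i][1]; continue
        name = ready.popleft()
        run = min(quantum, remaining[name])            # one time slice
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:  # arrivals during the slice
            ready.append(procs[i][0]); i += 1
        if remaining[name]:
            ready.append(name)                         # preempted: back of the queue
        else:
            finish[name] = time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

w = rr_waiting_times([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)], quantum=4)
print(w)  # P1 waits 6, P2 waits 3, P3 waits 5
```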
Priority algorithm
Priority: P2 > P3 > P1 (a smaller number means a higher priority)

Process   Time of entering RL   Priority   Processing time
P1        0                     3          24
P2        1                     1          3
P3        2                     2          3

Non-preemptive priority algorithm
Gantt chart: P1 [0-24], P2 [24-27], P3 [27-30]
The average waiting time:
(0 + 23 + 25) / 3 = 16 milliseconds

Preemptive priority algorithm
Gantt chart: P1 [0-1], P2 [1-4], P3 [4-7], P1 [7-30]
The average waiting time:
(6 + 0 + 2) / 3 = 2.7 milliseconds
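The schedule in which a newly arrived higher-priority process preempts the running one (waiting times 6, 0, 2 above) can be checked with an event-stepped simulation (a sketch; names are mine):

```python
def preemptive_priority_waits(procs):
    # procs: list of (name, arrival, priority, burst); lower number = higher priority
    arrival = {n: a for n, a, p, b in procs}
    prio = {n: p for n, a, p, b in procs}
    burst = {n: b for n, a, p, b in procs}
    remaining, finish, time = dict(burst), {}, 0
    while len(finish) < len(procs):
        ready = [n for n in remaining if n not in finish and arrival[n] <= time]
        if not ready:
            time = min(arrival[n] for n in remaining if n not in finish); continue
        name = min(ready, key=lambda n: prio[n])   # highest-priority ready process
        future = [arrival[n] for n in remaining
                  if n not in finish and arrival[n] > time]
        # run until this process finishes or the next arrival may preempt it
        step = remaining[name] if not future else min(remaining[name],
                                                      min(future) - time)
        time += step
        remaining[name] -= step
        if remaining[name] == 0:
            finish[name] = time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

w = preemptive_priority_waits([("P1", 0, 3, 24), ("P2", 1, 1, 3), ("P3", 2, 2, 3)])
print(w)  # P1 waits 6, P2 waits 0, P3 waits 2
```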
Shortest-job-first algorithm (SJF)
t: the processing time required by the process
Priority p = 1/t

Process   Time of entering RL   Processing time
P1        0                     6
P2        1                     8
P3        2                     4
P4        3                     2

Non-preemptive SJF algorithm
Gantt chart: P1 [0-6], P4 [6-8], P3 [8-12], P2 [12-20]
The average waiting time:
(0 + 11 + 6 + 3) / 4 = 5 milliseconds

Preemptive SJF algorithm
Gantt chart: P1 [0-3], P4 [3-5], P1 [5-8], P3 [8-12], P2 [12-20]
The average waiting time:
(2 + 11 + 6 + 0) / 4 = 4.75 milliseconds
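The shortest-remaining-time-first schedule (preemptive SJF, waits 2, 11, 6, 0 above) follows the same event-stepped pattern, always picking the ready process with the least remaining work (a sketch; names are mine):

```python
def srtf_waiting_times(procs):
    # preemptive SJF: always run the ready process with the
    # shortest remaining processing time
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining, finish, time = dict(burst), {}, 0
    while len(finish) < len(procs):
        ready = [n for n in remaining if n not in finish and arrival[n] <= time]
        if not ready:
            time = min(arrival[n] for n in remaining if n not in finish); continue
        name = min(ready, key=lambda n: remaining[n])
        future = [arrival[n] for n in remaining
                  if n not in finish and arrival[n] > time]
        # run until completion or until the next arrival may preempt
        step = remaining[name] if not future else min(remaining[name],
                                                      min(future) - time)
        time += step
        remaining[name] -= step
        if remaining[name] == 0:
            finish[name] = time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

w = srtf_waiting_times([("P1", 0, 6), ("P2", 1, 8), ("P3", 2, 4), ("P4", 3, 2)])
print(w, sum(w.values()) / len(w))  # waits 2, 11, 6, 0; average 4.75
```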
Multiple priority algorithm
The ready list is divided into several lists.
Each list consists of processes that have the same priority
and have their own scheduling algorithm.
Multilevel Feedback scheduling algorithm
Lottery Scheduling Strategy (Lottery)
Each process is given a number of lottery tickets.
The OS draws a "winning" ticket; the process holding
that ticket receives the CPU.
A preemptive algorithm.
Simple, low cost, and fair to processes.
COMMUNICATE BETWEEN PROCESSES
Purpose:
to share information such as file sharing, memory, ...
cooperating to complete the job
Mechanisms:
Communication by signal (Signal)
Communication by pipe (Pipe)
Communication via shared memory (shared memory)
Communication by message (Message)
Communication by socket
Communication by signal (Signal)
Signal    Description
SIGINT    The user pressed Ctrl-C to interrupt the process
SIGILL    The process executed an illegal instruction
SIGKILL   Request to terminate a process
SIGFPE    Arithmetic error, such as division by zero
SIGSEGV   The process accessed an invalid memory address
SIGCLD    A child process has terminated

Signals can be sent by: hardware, the operating system,
a process, or the user.
When a process receives a signal, it can: call a
signal-handling function, handle the signal in its own
way, or ignore the signal.
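A minimal sketch of the "call a signal-handling function" case in Python (assumes a Unix-like system; SIGUSR1 is used instead of the table's signals so the demonstration is harmless, and the process sends the signal to itself):

```python
import os
import signal

caught = []

def handler(signum, frame):
    # the signal-handling function the process registered
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)   # choose to handle the signal ourselves
os.kill(os.getpid(), signal.SIGUSR1)     # a process sends the signal (here: to itself)
print(caught == [signal.SIGUSR1])        # the handler ran
```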
Communication by pipe (Pipe)
Data transfer: stream of bytes (FIFO)
A process reading from the pipe is blocked if the pipe is empty, and
waits until the pipe has data to retrieve.
A process writing to the pipe is blocked if the pipe is full, and waits
until the pipe has room to store data.
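A minimal pipe sketch in Python (within one process for brevity; between processes the read and write ends would be inherited across a fork):

```python
import os

r, w = os.pipe()                      # r: read end, w: write end (FIFO byte stream)
os.write(w, b"hello through the pipe")
os.close(w)                           # close the write end: readers then see EOF
data = os.read(r, 1024)               # would block if the pipe were still empty
os.close(r)
print(data.decode())
```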
Communication via shared memory (shared
memory)
Shared memory exists independently of the processes
A process must attach the shared memory segment to its
own address space
Shared memory is:
- the fastest method of exchanging data between processes.
- in need of protection by synchronization mechanisms.
- not effectively applicable in distributed systems.
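A sketch of the create/attach pattern using Python's `multiprocessing.shared_memory` (both "processes" are shown in one script for brevity; a second process would attach the segment by its name):

```python
from multiprocessing import shared_memory

# one process creates the segment ...
shm = shared_memory.SharedMemory(create=True, size=32)
shm.buf[:5] = b"hello"

# ... another process attaches the same segment by name
view = shared_memory.SharedMemory(name=shm.name)
data = bytes(view.buf[:5])
print(data)  # b'hello'

view.close()
shm.close()
shm.unlink()   # free the segment once all processes are done
```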
Communication by message (Message)
Establish a link between the two processes
Use send and receive functions provided by the
operating system to exchange messages
How to communicate by message:
Indirect communication
Send (A, message): sends a message to port A
Receive (A, message): receive a message from port A
Direct communication
Send (P, message): send a message to process P
Receive (Q, message): receive a message from process Q
Example: producer-consumer problem
void producer()
{
    while (1)
    {
        create_product();
        send(consumer, product);     // send the product to the consumer
    }
}

void consumer()
{
    while (1)
    {
        receive(producer, product);  // wait to receive a product
        consume(product);
    }
}
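The same pattern can be sketched with Python threads and a blocking queue standing in for the message port (names are mine; `Queue.get` blocks exactly like the `receive` above when no product is available):

```python
import threading
import queue

mailbox = queue.Queue()   # plays the role of the message port

def producer(n):
    for i in range(n):
        mailbox.put(f"product-{i}")     # send(consumer, product)

def consumer(n, received):
    for _ in range(n):
        received.append(mailbox.get())  # receive(producer, product): blocks if empty

received = []
p = threading.Thread(target=producer, args=(3,))
c = threading.Thread(target=consumer, args=(3, received))
p.start(); c.start()
p.join(); c.join()
print(received)  # ['product-0', 'product-1', 'product-2']
```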
Communication by socket
Each process must create its own socket.
Each socket is bound to a different port.
Read and write operations on the socket exchange data
between the two processes.
How to communicate by socket
Postal communication (the socket acts as a post office)
The "sending process" writes data to its socket; the data is transferred to the
socket of the "receiving process"
The "receiving process" receives the data by reading from its own socket
Telephone communication (the socket acts as a switchboard)
The two processes must establish a connection before transmitting or receiving
data, and the connection is maintained during the transfer
PROCESS SYNCHRONIZATION
Ensure that parallel processes do not interfere with one
another.
Mutual exclusion requirement:
At any moment, only one process may access a non-shareable
resource.
Synchronization requirement:
Processes must cooperate with each other to get the job done.
Two "synchronization problems" need to be solved:
the "mutual exclusion" problem (the "critical section problem")
the "coordination" problem.
Critical Section
A critical section is a segment of a process's code in which shared
resources (variables, files, ...) are accessed.
Example:
if (account >= amount) account = account - amount;
else print "cannot withdraw money!";
The conditions required when solving the critical
section problem
1. There are no two processes in the critical section at
the same time.
2. There is no assumption about the speed of
processes, nor about the number of processors.
3. A process outside the critical section must not
prevent other processes from entering the critical
section.
4. No process must wait indefinitely to enter the
critical section
Synchronous solution groups
Busy Waiting
Sleep And Wakeup
Semaphore
Monitor
Message.
Busy Waiting
Software solutions
Algorithm using the flag variable
Algorithm using alternating variables
Peterson algorithm
Hardware solutions
No interruption
Use TSL command (Test and Set Lock)
Algorithm using a lock (flag) variable (works for many
processes)
lock == 0: there is no process in the critical section.
lock == 1: there is a process in the critical section.

int lock = 0;
while (1)
{
    while (lock == 1);    // busy-wait while another process is in the critical section
    lock = 1;
    critical_section();
    lock = 0;
    noncritical_section();
}
Violation: "Two processes can be in the critical section at a time."
Algorithm using an alternating (turn) variable (works for 2
processes)
Two processes A and B share a turn variable:
turn == 0: process A may enter the critical section
turn == 1: process B may enter the critical section

// process A                        // process B
while (1)                           while (1)
{                                   {
    while (turn == 1);                  while (turn == 0);
    critical_section();                 critical_section();
    turn = 1;                           turn = 0;
    noncritical_section();              noncritical_section();
}                                   }

The two processes certainly cannot enter the critical section at the same time,
because at any moment turn has only one value.
Violation: a process can be prevented from entering the critical section by
another process that is not in the critical section.
Peterson algorithm (used for 2 processes)
The two processes share two variables: turn and flag[2] (type int).
flag[0] = flag[1] = FALSE
turn is initialized to 0 or 1.
flag[i] = TRUE (i = 0, 1) means that Pi wants to enter the critical
section; turn = i means that it is Pi's turn.
To enter the critical section:
Pi sets flag[i] = TRUE to indicate that it wants to enter the critical section.
Pi then sets turn = j, offering process Pj the chance to enter first.
If process Pj is not interested in entering the critical section (flag[j] ==
FALSE), then Pi can enter the critical section.
If flag[j] == TRUE, then Pi must wait until flag[j] == FALSE.
When process Pi leaves the critical section, it resets flag[i] to FALSE.
Peterson algorithm (code)
// process P0 (i = 0)
while (TRUE)
{
    flag[0] = TRUE;                        // P0 announces that it wants to enter the critical section
    turn = 1;                              // offer P1 the chance to enter first
    while (turn == 1 && flag[1] == TRUE);  // if P1 wants to enter, P0 waits
    critical_section();
    flag[0] = FALSE;                       // P0 leaves the critical section
    noncritical_section();
}

// process P1 (i = 1)
while (TRUE)
{
    flag[1] = TRUE;                        // P1 announces that it wants to enter the critical section
    turn = 0;                              // offer P0 the chance to enter first
    while (turn == 0 && flag[0] == TRUE);  // if P0 wants to enter, P1 waits
    critical_section();
    flag[1] = FALSE;                       // P1 leaves the critical section
    noncritical_section();
}
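Peterson's algorithm can be exercised with two Python threads (a sketch: it relies on CPython's global interpreter lock providing sequentially consistent loads and stores; on real hardware the busy-wait also needs memory barriers, and all names here are mine). The shared counter ends exact because the algorithm serializes the critical section:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # faster thread handoff, so the busy waits stay short

flag = [False, False]
turn = 0
counter = 0                   # shared data protected by Peterson's algorithm
N = 1000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # announce intent to enter
        turn = j                       # offer the other process the first chance
        while flag[j] and turn == j:
            pass                       # busy wait while the other wants in and has the turn
        counter += 1                   # critical section
        flag[i] = False                # leave the critical section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2 * N: no update was lost
```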
No interruption
A process disables all interrupts before entering
the critical section, and restores them when
leaving the critical section.
Not safe for the system.
Does not work on systems with multiple processors.
Use TSL command (Test and Set Lock)
The TSL instruction tests and updates a variable in a
single atomic (non-interruptible) operation.

boolean Test_And_Set_Lock(boolean *lock)
{
    boolean temp = *lock;   // read the original value of the lock
    *lock = TRUE;           // set the lock, atomically with the read
    return temp;            // return the original value of the lock
}

boolean lock = FALSE;       // shared variable
while (TRUE)
{
    while (Test_And_Set_Lock(&lock));   // spin until the returned value is FALSE
    critical_section();
    lock = FALSE;
    noncritical_section();
}

This solution also works on systems with multiple processors.
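A spinlock built on test-and-set can be sketched in Python (names are mine; a small mutex stands in for the hardware atomicity that a real TSL instruction provides in one machine instruction):

```python
import threading

class TSLock:
    """Spinlock built on an (emulated) atomic test-and-set."""
    def __init__(self):
        self._guard = threading.Lock()   # stands in for hardware atomicity
        self._locked = False

    def test_and_set(self):
        # atomically read the old value and set the lock to True
        with self._guard:
            old = self._locked
            self._locked = True
            return old

    def acquire(self):
        while self.test_and_set():
            pass                         # busy wait until the old value was False

    def release(self):
        self._locked = False

lock = TSLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1      # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4 * 1000 = 4000
```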
Solution Group: SLEEP and WAKEUP
Using the SLEEP and WAKEUP command
Using Semaphore structure
Using Monitors structure
Using Message
Using the SLEEP and WAKEUP command
SLEEP: the calling process is blocked and removed from the
ready list; the CPU is given to another process.
WAKEUP: the OS returns a blocked process to the ready list
so that it can continue execution.
A process that is not eligible to enter the critical section
calls SLEEP to block itself, until another process calls
WAKEUP to free it.
A process calls WAKEUP when leaving the critical
section to wake a pending process, giving that process the
opportunity to enter the critical section.
Using the SLEEP and WAKEUP command
int busy=FALSE; // TRUE: a process is in the critical section; FALSE: otherwise
int blocked=0; // Count the number of processes being locked
while (TRUE)
{
if (busy)
{
blocked = blocked + 1;
sleep();
}
else busy = TRUE;
critical-section ();
busy = FALSE;
if (blocked>0)
{
wakeup(); //wake up a pending process
blocked = blocked - 1;
}
Noncritical-section ();
}
Using Semaphore structure
The semaphore variable s has the following attributes:
An integer value e (initialized to a non-negative value);
A queue f: the list of processes waiting on semaphore s.
Two operations on semaphore s:
Down(s): e = e - 1.
If e < 0, the process must wait in f (sleep); otherwise the process
continues.
Up(s): e = e + 1.
If e <= 0, select one process in f to continue execution (wake up).
Using Semaphore structure
// P is the process performing the Down(s) or Up(s) operation
Down(s)
{
    e = e - 1;
    if (e < 0)
    {
        status(P) = blocked;   // switch P to the blocked (waiting) state
        enter(P, f);           // put P into the queue f
    }
}

Up(s)
{
    e = e + 1;
    if (e <= 0)
    {
        exit(Q, f);            // take a process Q out of the queue f
        status(Q) = ready;     // move Q to the ready state
        enter(Q, ready-list);  // put Q into the system's ready list
    }
}
Using Semaphore structure
The operating system needs to install Down and Up operations
to be exclusive.
The structure of the semaphore:
class semaphore
{
int e;
PCB * f; //Semaphore's own list
public:
down();
up();
};
|e| is the number of processes waiting in the queue f (when e < 0).
Solve the problem of critical section with
Semaphores
Using a semaphore s, e is initialized to 1.
All processes apply the same program structure:
semaphore s=1; // e of semaphore s is 1
while (1)
{
Down(s);
critical-section ();
Up(s);
Noncritical-section ();
}
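This structure maps directly onto Python's counting semaphore (a sketch; the bookkeeping variables are mine, used only to verify that at most one thread is ever inside the critical section):

```python
import threading

s = threading.Semaphore(1)   # e initialized to 1, as above
in_cs = 0                    # how many threads are currently inside
max_in_cs = 0
counter = 0

def worker():
    global in_cs, max_in_cs, counter
    for _ in range(1000):
        s.acquire()               # Down(s)
        in_cs += 1
        max_in_cs = max(max_in_cs, in_cs)
        counter += 1              # critical section
        in_cs -= 1
        s.release()               # Up(s)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter, max_in_cs)  # 3000 1: never two threads inside at once
```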
Solve the problem of critical section with
Semaphores
Example:

Step   Process   Operation   e(s)   In CS   f(s)
1      A         Down        0      A
2      B         Down        -1     A       B
3      C         Down        -2     A       B, C
4      A         Up          -1     B       C
Solving synchronous problems with Semaphores
Two processes P1 and P2; P1 executes Job_1 and P2 executes Job_2.
Job_1 must execute before Job_2; let P1 and P2 share a semaphore s
initialized with e(s) = 0:

semaphore s = 0;   // shared by the two processes

P1:
{
    job1();
    Up(s);     // wake up P2
}

P2:
{
    Down(s);   // wait for P1 to wake it up
    job2();
}

If Down and Up are placed incorrectly or omitted, the
synchronization may fail.
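The ordering guarantee can be demonstrated with Python threads (a sketch; names are mine). P2 is even started first to show that it still waits for P1's signal:

```python
import threading

s = threading.Semaphore(0)   # e(s) = 0: P2 must wait for P1
order = []

def p1():
    order.append("job1")   # Job_1 runs first
    s.release()            # Up(s): wake up P2

def p2():
    s.acquire()            # Down(s): wait until P1 signals
    order.append("job2")   # Job_2 runs only after Job_1

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                 # start P2 first on purpose: it still waits
t1.start()
t1.join(); t2.join()
print(order)  # ['job1', 'job2']
```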
Problems when using semaphore
If a process forgets to call Up(s), then when it leaves the
critical section it will never allow another process into the
critical section!
semaphore s = 1;
while (1)
{
    Down(s);
    critical_section();
    noncritical_section();   // Up(s) is missing: s remains locked forever
}
Problems when using semaphore
Using semaphores can cause deadlock.
Two processes P1 and P2 use two shared semaphores s1 = s2 = 1.

P1:
{
    down(s1); down(s2);
    ...
    up(s1); up(s2);
}

P2:
{
    down(s2); down(s1);
    ...
    up(s2); up(s1);
}

If the operations are interleaved in the order
P1: down(s1), P2: down(s2), P1: down(s2), P2: down(s1),
then s1 = s2 = -1 and P1 and P2 wait for each other forever.
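The standard fix is to make every process acquire the semaphores in the same global order, so the circular wait cannot form (a sketch with Python locks; names are mine):

```python
import threading

s1, s2 = threading.Lock(), threading.Lock()
done = []

def worker(name):
    for _ in range(1000):
        # both processes acquire in the SAME order (s1, then s2),
        # so neither can hold one lock while waiting for the other's
        with s1:
            with s2:
                pass      # critical section using both resources
    done.append(name)

t1 = threading.Thread(target=worker, args=("P1",))
t2 = threading.Thread(target=worker, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['P1', 'P2']: both finish, no deadlock
```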
Using Monitors structure
A monitor is a special structure (class) containing:
exclusive methods (critical sections)
variables (shared by processes)
Variables in the monitor can be accessed only by the methods in
the monitor.
At any moment, only one process may be active
inside a monitor.
Condition variable c:
used to synchronize the use of the variables in the monitor,
through two operations Wait(c) and Signal(c):
Using Monitors structure
Wait(c)
{
    status(P) = blocked;   // move P to the waiting state
    enter(P, f(c));        // put P into the queue f(c) of condition variable c
}

Wait(c): switches the calling process to the waiting (blocked)
state and places it in the queue of condition variable c.

Signal(c)
{
    if (f(c) != NULL)
    {
        exit(Q, f(c));         // take a process Q waiting on c
        status(Q) = ready;     // move Q to the ready state
        enter(Q, ready-list);  // put Q into the ready list
    }
}

Signal(c): if a process is waiting in the queue of c, reactivate
that process; the calling process then leaves the monitor. If no
process is waiting on c, the Signal(c) call is ignored.
Using Monitors structure
monitor <name_monitor>   // monitor declaration, shared by processes
{
    <shared variables>;
    <condition variables>;
    <exclusive methods>;
}

// process Pi:
while (1)                         // structure of process i
{
    noncritical_section();
    <name_monitor>.Method_i();    // execute exclusive job i (critical section)
    noncritical_section();
}
Using Monitors structure
The risk of incorrect synchronization is greatly reduced.
Very few languages support the monitor structure.
"Busy waiting" solutions do not need context
switching, while "sleep and wakeup" solutions
spend time on it.
Monitors and the dining philosophers problem
Monitors and the dining philosophers problem
monitor philosopher
{
    enum {thinking, hungry, eating} state[5];   // shared state of the five philosophers
    condition self[5];                          // condition variables used to synchronize the meal
    // exclusive methods (critical sections)
    void init();                                // initialization method
    void test(int i);                           // check whether philosopher i may start eating
    void pickup(int i);                         // pick up the chopsticks
    void putdown(int i);                        // put down the chopsticks
}
Monitors and the dining philosophers problem
void init()   // initialization method (constructor)
{
    // every philosopher starts in the "thinking" state
    for (int i = 0; i < 5; i++) state[i] = thinking;
}

void test(int i)
{
    // if philosopher i is hungry and neither neighbor is eating, let philosopher i eat
    if ((state[i] == hungry) && (state[(i + 4) % 5] != eating) && (state[(i + 1) % 5] != eating))
    {
        state[i] = eating;    // philosopher i starts eating
        self[i].signal();     // wake philosopher i if it is waiting
    }
}
Monitors and the dining philosophers problem
void pickup(int i)
{
    state[i] = hungry;                        // philosopher i is hungry
    test(i);                                  // check whether philosopher i can eat right away
    if (state[i] != eating) self[i].wait();   // otherwise wait until a neighbor finishes
}

void putdown(int i)
{
    state[i] = thinking;   // philosopher i goes back to thinking
    test((i + 4) % 5);     // check one neighbor; if possible, let that philosopher eat
    test((i + 1) % 5);     // check the other neighbor; if possible, let that philosopher eat
}
Monitors and the dining philosophers problem
// structure of process Pi, which carries out the dinner of philosopher i
philosopher pp;   // shared monitor variable

Pi:
while (1)
{
    noncritical_section();
    pp.pickup(i);     // pickup is a critical section, accessed exclusively
    eat();            // eating
    pp.putdown(i);    // putdown is a critical section, accessed exclusively
    noncritical_section();
}
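A runnable Python sketch of the monitor (names are mine): a Lock provides the monitor's "one process inside at a time", and per-philosopher Conditions play the role of self[i]. Python conditions are Mesa-style, so the wait sits in a while loop rather than the pseudocode's single if:

```python
import threading

class DiningMonitor:
    THINKING, HUNGRY, EATING = range(3)

    def __init__(self, n=5):
        self.n = n
        self.state = [self.THINKING] * n
        self.lock = threading.Lock()    # enforces "one process inside the monitor"
        self.self_cv = [threading.Condition(self.lock) for _ in range(n)]

    def _test(self, i):
        left, right = (i - 1) % self.n, (i + 1) % self.n
        if (self.state[i] == self.HUNGRY
                and self.state[left] != self.EATING
                and self.state[right] != self.EATING):
            self.state[i] = self.EATING
            self.self_cv[i].notify()    # wake philosopher i if it is waiting

    def pickup(self, i):
        with self.lock:
            self.state[i] = self.HUNGRY
            self._test(i)
            while self.state[i] != self.EATING:   # Mesa-style: re-check after waking
                self.self_cv[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = self.THINKING
            self._test((i - 1) % self.n)   # maybe the left neighbor can eat now
            self._test((i + 1) % self.n)   # maybe the right neighbor can eat now

m = DiningMonitor()

def philosopher(i):
    for _ in range(50):
        m.pickup(i)    # may block until neither neighbor is eating
        m.putdown(i)   # release the chopsticks and wake the neighbors

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(m.state)  # all philosophers finished and went back to thinking
```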
Using Message
One process controls the use of a resource; many other
processes request the resource.

while (1)
{
    Send(controller, request_message);    // request the resource, then block
    Receive(controller, accept_message);  // receive permission to use the resource
    critical_section();                   // use the shared resource exclusively
    Send(controller, end_message);        // announce the end of resource use
    noncritical_section();
}

In distributed systems, the message-exchange mechanism
is simpler and is used to solve the synchronization
problem.
DEADLOCKS
A set of processes is deadlocked if each
process in the set is waiting for a resource that
another process in the set is holding.
Conditions for deadlock to appear
Condition 1: resources that cannot be shared are
used.
Condition 2: a process holds resources while
requesting more non-shareable resources.
Condition 3: resources cannot be taken back from the
process that is keeping them.
Condition 4: there exists a cycle in the resource
allocation graph.
Resource allocation graph
Process A holds resource R. Process B requests resource S.
Process C holds U and requests T; process D holds T and
requests U.
The set of processes {C, D} is deadlocked.
Example about Deadlocks
In case of deadlock?
Methods for handling and preventing deadlocks
Use a resource allocation algorithm so that deadlock can
never happen.
Allow deadlock to occur, then find a way to recover from it.
Ignore deadlock handling, assuming that the system will
rarely become deadlocked.
Prevent Deadlocks
Condition 1 is almost unavoidable.
So that condition 2 does not occur:
The process must request all necessary resources before execution begins.
When the process requests a new resource and is refused, it must first release
the resources it is holding; the old resources are then allocated again
together with the new one.
So that condition 3 does not occur:
Take resources back from blocked processes and allocate them to the
process again when it leaves the blocked state.
So that condition 4 does not occur:
Number the resources with a function F; while holding resource Ri, a process
may request resource Rj only if F(Rj) > F(Ri).
Resource allocation algorithm to avoid deadlocks
Algorithm to determine the safe state
Banker algorithm
Algorithm to determine the safe state
int NumResources; // number of resources
int NumProcs; // number of processes in the system
int Available[NumResources]; // vector of the number of free instances of each resource
int Max[NumProcs, NumResources];
//Max[p,r]= Maximum demand of process p on resource r
int Allocation[NumProcs, NumResources];
//Allocation[p,r] = The number of resource r allocated for the process p
int Need[NumProcs, NumResources];
//Need[p,r] = Max[p,r] - Allocation[p,r]= the number of resource r which p process still needs to use
int Finish[NumProcs] = false;
//Finish[p]=true; the process p has completed execution;
Algorithm to determine the safe state
ST1.
If there exists a process i such that:
Finish[i] == false (process i has not finished executing), and
Need[i,j] <= Available[j] for every j (all the resource needs of process i can be met),
then do ST2; otherwise go to ST3.
ST2. Allocate to process i all the resources it still needs:
Available[j] = Available[j] - Need[i,j], for every j
Allocation[i,j] = Allocation[i,j] + Need[i,j], for every j
Need[i,j] = 0, for every j
Assume that process i then runs to completion; mark it finished and
reclaim its resources:
Finish[i] = true;
Available[j] = Available[j] + Allocation[i,j], for every j
goto ST1.
ST3. If Finish[i] == true for every i, "the system is in a safe state"; otherwise "not safe".
Banker Algorithm
Process Pi requests kr instances of resource r.
ST1. If kr <= Need[i,r] for every r, go to ST2;
otherwise report an error (the process exceeds its declared maximum).
ST2. If kr <= Available[r] for every r, go to ST3;
otherwise Pi must wait.
ST3. Try the allocation, for every r:
Available[r] = Available[r] - kr;
Allocation[i,r] = Allocation[i,r] + kr;
Need[i,r] = Need[i,r] - kr;
ST4. Check the safety of the resulting state (using the algorithm to determine the safe state).
Banker algorithm: when a process requires resources, the OS tries the
allocation, then determines whether the state of the system is SAFE.
If the system is safe, the resources the process requires are actually
allocated; otherwise the process must wait.
Example - resource allocation to avoid deadlocks
      Max            Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    3   2   2      1   0   0      4   1   2
P2    6   1   3      2   1   1
P3    3   1   4      2   1   1
P4    4   2   2      0   0   2

If process P2 requests 4 instances of R1 and 1 instance of R3,
can this request be granted without leading to deadlock?
Example - resource allocation to avoid deadlocks
ST0: Compute Need, the remaining need of each process i for each
resource j:
Need[i,j] = Max[i,j] - Allocation[i,j]

      Need           Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    2   2   2      1   0   0      4   1   2
P2    4   0   2      2   1   1
P3    1   0   3      2   1   1
P4    4   2   0      0   0   2
Example - resource allocation to avoid deadlocks
ST1+ST2: P2's resource request satisfies the conditions
of ST1 and ST2.
ST3: Try the allocation for P2 and update the system state:

      Need           Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    2   2   2      1   0   0      0   1   1
P2    0   0   1      6   1   2
P3    1   0   3      2   1   1
P4    4   2   0      0   0   2
Example - resource allocation to avoid deadlocks
ST4: Check the safety of the system state, using the algorithm
to determine the safe state.
In turn, choose a process to try to allocate to:
Choose P2; try the allocation; suppose P2 finishes, then reclaim
its resources: Available[j] = Available[j] + Allocation[2,j]

      Need           Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    2   2   2      1   0   0      6   2   3
P2    0   0   0      0   0   0
P3    1   0   3      2   1   1
P4    4   2   0      0   0   2
Example - resource allocation to avoid deadlocks
+ Choose P1:

      Need           Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    0   0   0      0   0   0      7   2   3
P2    0   0   0      0   0   0
P3    1   0   3      2   1   1
P4    4   2   0      0   0   2
Example - resource allocation to avoid deadlocks
+ Choose P3:

      Need           Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    0   0   0      0   0   0      9   3   4
P2    0   0   0      0   0   0
P3    0   0   0      0   0   0
P4    4   2   0      0   0   2
Example - resource allocation to avoid deadlocks
+ Choose P4:

      Need           Allocation     Available
      R1  R2  R3     R1  R2  R3     R1  R2  R3
P1    0   0   0      0   0   0      9   3   6
P2    0   0   0      0   0   0
P3    0   0   0      0   0   0
P4    0   0   0      0   0   0
All processes can be allocated up to their maximum requirements,
so the state of the system is safe; therefore the resources
requested by P2 can be allocated without deadlock.
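The whole worked example can be replayed in code (a sketch of the Banker's algorithm; the function names are mine, and the rollback of an unsafe trial allocation is omitted for brevity):

```python
def is_safe(available, allocation, need):
    # the "algorithm to determine the safe state": repeatedly find a process
    # whose remaining need fits in the free resources, let it finish, reclaim
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, f in enumerate(finish):
            if not f and all(need[i][j] <= work[j] for j in range(len(work))):
                for j in range(len(work)):
                    work[j] += allocation[i][j]   # process i finishes, resources return
                finish[i] = True
                progress = True
    return all(finish)

def banker_request(available, allocation, need, i, request):
    # try the allocation, then keep it only if the resulting state is safe
    if any(request[j] > need[i][j] for j in range(len(request))):
        raise ValueError("request exceeds declared maximum")
    if any(request[j] > available[j] for j in range(len(request))):
        return False                              # P_i must wait
    for j in range(len(request)):                 # trial allocation (ST3)
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    return is_safe(available, allocation, need)   # ST4

# the example's state: Allocation, Max, Available for P1..P4 and R1..R3
allocation = [[1, 0, 0], [2, 1, 1], [2, 1, 1], [0, 0, 2]]
maximum    = [[3, 2, 2], [6, 1, 3], [3, 1, 4], [4, 2, 2]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(4)]
available = [4, 1, 2]

granted = banker_request(available, allocation, need, 1, [4, 0, 1])  # P2 asks 4 R1, 1 R3
print(granted)  # True: the resulting state is safe
```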
Deadlocks detection algorithm
For Resources with only one instance
For Resources with many instances
For Resources with only one instance
Use the wait-for graph, built from the resource-allocation
graph by removing the vertices that represent resource
types.
An edge from Pi to Pj means that Pi is waiting for Pj to
release a resource that Pi needs.
The system is deadlocked if and only if the wait-for graph
contains a cycle.
Example
Process   Requests   Holds
P1        R1         R2
P2        R3, R4     R1
P3        R5         R4
P4        R2         R5
P5        -          R3

[Figure: the resource-allocation graph, the graph after the
test allocation, and the wait-for graph derived from it. The
wait-for graph contains the cycle P1 -> P2 -> P3 -> P4 -> P1,
so the system is deadlocked.]
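Deadlock detection on a wait-for graph is plain cycle detection; a sketch with the table's edges (the graph encoding and function name are mine):

```python
def has_cycle(graph):
    # depth-first search with colors: an edge back to a "gray" vertex
    # (still on the DFS stack) means the graph contains a cycle
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color[w] == GRAY:
                return True
            if color[w] == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)

# wait-for graph from the table: Pi -> Pj if Pi waits for a resource Pj holds
wait_for = {
    "P1": ["P2"],          # P1 requests R1, held by P2
    "P2": ["P5", "P3"],    # P2 requests R3 (held by P5) and R4 (held by P3)
    "P3": ["P4"],          # P3 requests R5, held by P4
    "P4": ["P1"],          # P4 requests R2, held by P1
    "P5": [],              # P5 requests nothing
}
print(has_cycle(wait_for))  # True: P1 -> P2 -> P3 -> P4 -> P1 is a cycle
```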
For Resources with many instances
Step 1: Select the first Pi whose resource requirement
can be met.
If there is none, the system is deadlocked.
Step 2: Try allocating resources to Pi and check the
system state.
If the system is safe, go to Step 3;
otherwise return to Step 1 with the next Pi.
Step 3: Allocate resources to Pi. If all Pi are satisfied, the
system is not deadlocked; otherwise go back to Step 1.
Deadlock correction
Cancel processes in the deadlock state
Cancel processes until there is no longer a deadlock-causing cycle
Base the choice on factors such as priority, processing time, number of
resources held, number of resources requested ...
Reclaim resources
Select a victim: which process will have its resources reclaimed, and
which resources?
Roll back to the state before the deadlock: when reclaiming a
process's resources, the process must be restored to its previous,
deadlock-free state.
"Resource starvation": how can we ensure that the same process does
not always have its resources reclaimed?