
MSc-CS Chapter 2

Processor Management System


Process:
A program does nothing unless its instructions are executed by a CPU. A program in execution
is called a process. In order to accomplish its task, a process needs computer resources.

More than one process may exist in the system, and several processes may require the same
resource at the same time. Therefore, the operating system has to manage all the processes and
the resources in a convenient and efficient way.

Some resources may need to be used by only one process at a time to maintain consistency;
otherwise the system can become inconsistent and deadlock may occur.

The operating system is therefore responsible for activities such as creating and deleting
processes, scheduling them, and providing mechanisms for their synchronization and
communication.

Process Attributes:

The attributes of a process are used by the operating system to create the process control
block (PCB) for each process. The PCB is also called the context of the process. The attributes
stored in the PCB are described below.

1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.

2. Program counter
The program counter stores the address of the next instruction to be executed, recorded at the
point where the process was suspended. The CPU uses this address when execution of the
process is resumed.

3. Process State
The Process, from its creation to the completion, goes through various states which are new,
ready, running and waiting.

4. Priority
Every process has its own priority. The process with the highest priority among the processes
gets the CPU first. This is also stored on the process control block.
5. General Purpose Registers

Every process has its own set of registers which are used to hold the data which is generated
during the execution of the process.

6. List of open files

During execution, every process uses some files, which need to be present in main memory.
The OS maintains a list of open files in the PCB.

7. List of open devices

The OS also maintains a list of all open devices used during the execution of the process.
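As a rough illustration of the attributes above, a simplified PCB might be declared as follows.
The field names and sizes are hypothetical and not taken from any real kernel.

/* Simplified, illustrative process control block (PCB). */
#include <stdint.h>
#include <stdio.h>

#define MAX_OPEN_FILES   16
#define MAX_OPEN_DEVICES  8

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int          pid;                            /* 1. unique process ID              */
    uintptr_t    program_counter;                /* 2. where to resume execution      */
    proc_state_t state;                          /* 3. new/ready/running/waiting      */
    int          priority;                       /* 4. scheduling priority            */
    uintptr_t    registers[16];                  /* 5. saved general purpose registers*/
    int          open_files[MAX_OPEN_FILES];     /* 6. descriptors of open files      */
    int          open_devices[MAX_OPEN_DEVICES]; /* 7. identifiers of open devices    */
    struct pcb  *next;                           /* link for ready/waiting queues     */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = NEW, .priority = 5 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}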

Life Cycle of Process:


The process, from its creation to completion, passes through various states. The minimum
number of states is five.

The names of the states are not standardized although the process may be in one of the
following states during execution.

1. New
A program which is going to be picked up by the OS into the main memory is called a new
process.

2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the
CPU to be assigned. The OS picks new processes from secondary memory and puts them in
main memory.

The processes which are ready for the execution and reside in the main memory are called
ready state processes. There can be many processes present in the ready state.

3. Running
One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running
processes for a particular time will always be one. If we have n processors in the system then
we can have n processes running simultaneously.

4. Block or wait
From the Running state, a process can make the transition to the block or wait state depending
upon the scheduling algorithm or the intrinsic behavior of the process.

When a process waits for a certain resource to be assigned or for input from the user, the
OS moves the process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination
When a process finishes its execution, it enters the termination state. The entire context of the
process (its PCB) is deleted and the process is terminated by the operating system.

Operations on the Process

1. Creation
Once the process is created, it enters the ready queue (main memory) and is ready for
execution.

2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses one
process and starts executing it. Selecting the process to be executed next is known as
scheduling.

3. Execution

Once the process is scheduled for execution, the processor starts executing it. The process may
move to the blocked or wait state during execution, in which case the processor starts executing
other processes.

4. Deletion/killing

Once the purpose of the process is served, the OS kills the process. The context of the process
(PCB) is deleted and the process is terminated by the operating system.

Process Scheduling:
After a job has been placed on the READY queue by the Job Scheduler, the Process Scheduler
takes over.
• The Process Scheduler is the low-level scheduler that assigns the CPU to execute the
processes of those jobs placed on the READY queue by the Job Scheduler.
• It determines which jobs will get the CPU, when, and for how long. It also decides when
processing should be interrupted, determines which queues the job should be moved to
during its execution, and recognizes when a job has concluded and should be terminated.
• To schedule the CPU, the Process Scheduler alternates between CPU cycles and I/O cycles.
• Some systems also have a middle-level scheduler. When the system is over-loaded, the
middle-level scheduler finds it advantageous to remove active jobs from memory to reduce
the degree of multiprogramming, which allows jobs to be completed faster.
• The jobs that are swapped out and eventually swapped back in are managed by the
middle-level scheduler.
• In a single-user environment, there is no distinction made between job and process
scheduling because only one job is active in the system at any given time.
PROCESS SCHEDULING POLICIES
In a multiprogramming environment, there are usually more jobs to be executed than could
possibly be run at one time. Before the operating system can schedule them, it needs to resolve
three limitations of the system:
1. There are a finite number of resources (such as disk drives, printers, and tape drives).
2. Some resources, once they're allocated, can't be shared with another job (e.g., printers).
3. Some resources require operator intervention, i.e., they can't be reassigned automatically
from job to job (such as tape drives).
Several criteria to be considered while scheduling processes are:
1. Maximize throughput: run as many jobs as possible in a given amount of time.
2. Minimize response time: quickly turn around interactive requests. This could be done by
running only interactive jobs and letting the batch jobs wait until the interactive load ceases.
3. Minimize turnaround time: move entire jobs in and out of the system quickly.
4. Minimize waiting time: move jobs out of the READY queue as quickly as possible. This
could only be done by reducing the number of users allowed on the system so the CPU would
be available immediately whenever a job entered the READY queue.
5. Maximize CPU efficiency: keep the CPU busy 100 percent of the time. This could be done
by running only CPU-bound jobs (and not I/O-bound jobs).
6. Ensure fairness for all jobs: give everyone an equal amount of CPU and I/O time.
VARIOUS TIMINGS RELATED WITH PROCESS
Arrival Time: the point of time at which a process enters the ready queue.
Waiting Time: the amount of time spent by a process waiting in the ready queue for getting
the CPU.
Waiting time = Turn Around time – Burst time
Response Time: the amount of time after which a process gets the CPU for the first time after
entering the ready queue.
Response time = Time at which process first gets the CPU – Arrival time
Burst Time
• Burst time is the amount of time required by a process for executing on the CPU.
• It is also called execution time or running time.
• Burst time of a process cannot be known in advance, before executing the process.
• It can be known only after the process has executed.
Completion Time
• Completion time is the point of time at which a process completes its execution on the CPU
and exits from the system.
• It is also called exit time.
Turn Around Time
• Turn Around time is the total amount of time spent by a process in the system.
• While present in the system, a process is either waiting in the ready queue for the CPU or
executing on the CPU.
Turn Around time = Burst time + Waiting time, or
Turn Around time = Completion time – Arrival time
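The relationships above can be checked with a small program; the arrival, first-CPU, burst,
and completion times below are invented purely for illustration.

/* Worked example of the timing formulas above (illustrative values only). */
#include <stdio.h>

int main(void) {
    /* One process: arrives at t=2, first gets the CPU at t=5,
       needs 4 time units of CPU, and finishes at t=9. */
    int arrival = 2, first_cpu = 5, burst = 4, completion = 9;

    int turnaround = completion - arrival;   /* 9 - 2 = 7 */
    int waiting    = turnaround - burst;     /* 7 - 4 = 3 */
    int response   = first_cpu - arrival;    /* 5 - 2 = 3 */

    printf("turnaround=%d waiting=%d response=%d\n", turnaround, waiting, response);
    return 0;
}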
PROCESS SCHEDULING ALGORITHMS
The Process Scheduler relies on a process scheduling algorithm, based on a specific policy, to
allocate the CPU and move jobs through the system. Generally, scheduling algorithms are
classified as preemptive and non-preemptive algorithms.
• A scheduling strategy that interrupts the processing of a job and transfers the CPU to another
job is called a preemptive scheduling policy. It is widely used in time-sharing environments.
• In a non-preemptive scheduling policy, once a job captures the processor and begins
execution, it remains in the RUNNING state uninterrupted until it issues an I/O request
(natural wait) or until it is finished. Early operating systems used non-preemptive policies
designed to move batch jobs through the system as efficiently as possible, whereas most
current systems emphasize interactive use and response time.
FIRST-COME, FIRST-SERVED
• First-come, first-served (FCFS) is a non-preemptive scheduling algorithm.
• It handles jobs according to their arrival time: the earlier they arrive, the sooner they're served.
• It's a very simple algorithm to implement because it uses a FIFO queue.
• This algorithm is fine for most batch systems, but it is unacceptable for interactive systems
because interactive users expect quick response times.
• With FCFS, as a new job enters the system its PCB is linked to the end of the READY queue,
and it is removed from the front of the queue when the processor becomes available.
Consider processes P1, P2, P3, P4 arriving for execution in that order, all with Arrival Time 0
and with given Burst Times; let's find the average waiting time using the FCFS scheduling
algorithm.
Calculating Average Waiting Time
AWT or Average Waiting Time is the average of the waiting times of the processes in the
queue, waiting for the scheduler to pick them for execution. The lower the Average Waiting
Time, the better the scheduling algorithm.
Problems with FCFS Scheduling:
1. It is a non-preemptive algorithm, which means the process priority doesn't matter.
2. The Average Waiting Time is not optimal.
3. Resource utilization in parallel is not possible, which leads to the Convoy Effect and hence
poor resource (CPU, I/O, etc.) utilization.
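Since the burst-time table referred to above did not survive in this copy, the sketch below
assumes burst times of 21, 3, 6, and 2 for P1 to P4 (all arriving at time 0) and computes the
FCFS average waiting time.

/* FCFS average waiting time. Burst times are assumed; all arrivals at t=0. */
#include <stdio.h>

int main(void) {
    int burst[] = {21, 3, 6, 2};          /* P1..P4, hypothetical values */
    int n = 4;
    int waiting = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        total_waiting += waiting;          /* time this process spent waiting */
        waiting += burst[i];               /* the next process also waits for this one */
    }
    printf("FCFS average waiting time = %.2f\n", (double)total_waiting / n);
    return 0;
}

With these assumed values the individual waiting times are 0, 21, 24, and 30, giving an average
of 18.75.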
SHORTEST JOB FIRST (SJF) SCHEDULING
Shortest Job First scheduling runs the process with the shortest burst time or duration first.
• This is the best approach to minimize waiting time.
• It is used in batch systems.
• It is of two types:
1. Non-preemptive
2. Preemptive
• To successfully implement it, the burst time/duration of the processes should be known to the
processor in advance, which is practically not feasible all the time.
• This scheduling algorithm is optimal if all the jobs/processes are available at the same time
(either the arrival time is 0 for all, or the arrival time is the same for all).
Problem with Non Pre-emptive SJF
If the arrival times of processes are different, it leads to the problem of starvation, where a
shorter process has to wait for a long time until the current longer process finishes executing;
this can be solved using the concept of aging.
Consider processes P1, P2, P3, P4 arriving for execution in that order, all with Arrival Time 0
and with given Burst Times; let's find the average waiting time using the SJF scheduling
algorithm.
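With the same assumed burst times (21, 3, 6, 2; all arrivals at time 0), non-preemptive SJF
simply runs the jobs in increasing burst-time order before applying the same waiting-time
calculation:

/* Non-preemptive SJF average waiting time (all arrivals at t=0).
   Burst times are assumed for illustration. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {21, 3, 6, 2};
    int n = 4;

    qsort(burst, n, sizeof burst[0], cmp);   /* shortest job runs first */

    int waiting = 0, total_waiting = 0;
    for (int i = 0; i < n; i++) {
        total_waiting += waiting;
        waiting += burst[i];
    }
    printf("SJF average waiting time = %.2f\n", (double)total_waiting / n);
    return 0;
}

With these values SJF runs the jobs in the order P4, P2, P3, P1 and the average waiting time
drops to 4.5, well below the FCFS result.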

PRIORITY SCHEDULING
• Priority scheduling is a non-preemptive algorithm.
• It is one of the most common scheduling algorithms in batch systems, even though it may give
slower turnaround to some users.
• It allows the programs with the highest priority to be processed first, and they aren't
interrupted until their CPU cycles (run times) are completed or a natural wait occurs.
• If two or more jobs with equal priority are present in the READY queue, the processor is
allocated to the one that arrived first.
• With a priority algorithm, jobs are usually linked to one of several READY queues by the Job
Scheduler based on their priority, so the Process Scheduler manages multiple READY queues
instead of just one.
• Priorities can also be determined by the Processor Manager based on characteristics intrinsic
to the jobs, such as memory requirements, number and type of peripheral devices, total CPU
time, and amount of time already spent in the system.
Problem with Priority Scheduling Algorithm
The chances of indefinite blocking or starvation exist. If new higher-priority processes keep
coming into the ready queue, then processes waiting in the ready queue with lower priority may
have to wait for long durations before getting the CPU for execution.
ROUND ROBIN SCHEDULING:
• Round robin is a preemptive process scheduling algorithm.
• It is used extensively in interactive systems.
• It's easy to implement.
• It is not based on job characteristics but on a predetermined slice of time that's given to each
job, to ensure that the CPU is equally shared among all active processes.
• A fixed time, called the quantum, is allotted to each process for execution.
• Once a process has executed for the given time period, that process is preempted and another
process executes for the given time period.
• Context switching is used to save the states of preempted processes.

Way of Execution of Processes:

1. Jobs are placed in the READY queue using a first-come, first-served scheme.
2. The Process Scheduler selects the first job from the front of the queue, sets the timer to the
time quantum, and allocates the CPU to this job.
3. If processing isn't finished when the time expires, the job is preempted, put at the end of the
READY queue, and its information is saved in its PCB.
4. If the job's CPU cycle is shorter than the time quantum, the job either finishes and releases
its resources (if this was its last CPU cycle) or, if it was interrupted by an I/O request, is moved
to the end of the appropriate I/O queue.
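A minimal simulation of these steps, with an assumed quantum of 2 and assumed burst times,
might look like this:

/* Round robin simulation. Burst times and quantum are illustrative only. */
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};       /* remaining CPU time of P1..P3 */
    int n = 3, quantum = 2, clock = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;                    /* already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            clock += slice;                                 /* job runs for one slice */
            burst[i] -= slice;
            if (burst[i] == 0) {
                remaining--;
                printf("P%d completes at time %d\n", i + 1, clock);
            }
            /* otherwise the job is preempted and re-joins the end of the queue */
        }
    }
    return 0;
}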

InterProcess Communication:
"Inter-process communication is used for exchanging useful information between numerous threads in
one or more processes (or programs)."

Role of Synchronization in Inter Process Communication


Synchronization is one of the essential parts of inter-process communication. Typically, it is
provided by the inter-process communication control mechanisms, but sometimes it can also be
controlled by the communicating processes themselves.

The following methods are used to provide synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-
It is generally required that only one process or thread can enter the critical section at a time.
This helps in synchronization and creates a stable state, avoiding race conditions.

Semaphore:-

Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore
2. Counting Semaphore

Barrier:-

A barrier does not allow an individual process to proceed until all participating processes have
reached it. Barriers are used by many parallel languages, and collective routines impose them.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in
a loop, repeatedly checking whether the lock is available. This is known as busy waiting
because, even though the process is active, it does not perform any useful work while waiting.

Pipe:-

A pipe is a type of data channel that is unidirectional in nature, which means data in this
channel can move in only a single direction at a time. Still, one can use two channels of this
type, so that data can be both sent and received between two processes. Typically, a pipe uses
the standard methods for input and output. Pipes are used in all types of POSIX systems and in
different versions of Windows operating systems as well.
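A minimal POSIX pipe between a parent and a child process might look like the sketch below
(error handling omitted for brevity):

/* Unidirectional POSIX pipe: parent writes, child reads. Minimal sketch. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                         /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                /* child: reader */
        char buf[64];
        close(fd[1]);                 /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }

    close(fd[0]);                     /* parent: writer */
    write(fd[1], "hello", strlen("hello"));
    close(fd[1]);
    wait(NULL);
    return 0;
}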

Shared Memory:-

Shared memory is a region of memory that can be accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Shared memory is supported by almost all POSIX and Windows operating systems.
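One possible sketch of shared memory between related processes uses the POSIX shm_open
and mmap calls; the object name /demo_shm is made up for the example, and some systems
need -lrt when linking.

/* POSIX shared memory between a parent and child (illustrative sketch). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";                        /* hypothetical object name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                                   /* size the shared region */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                                     /* child writes into the region */
        strcpy(mem, "written by child");
        return 0;
    }
    wait(NULL);                                            /* parent reads after the child exits */
    printf("parent read: %s\n", mem);
    shm_unlink(name);                                      /* remove the shared object */
    return 0;
}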

Message Queue:-

In general, several different processes can read and write messages to a message queue. The
messages are stored in the queue until their recipients retrieve them. In short, the message
queue is very helpful in inter-process communication and is used by all operating systems.

Direct Communication:-

In this type of communication, a link is created or established between a pair of communicating
processes. Between every pair of communicating processes, only one link can exist.

Indirect Communication

Indirect communication can only be established when processes share a common mailbox;
each pair of such processes may share multiple communication links. These shared links can be
unidirectional or bi-directional.

FIFO:-

A FIFO (named pipe) is used for general communication between two unrelated processes. It
can also be considered full-duplex, which means that one process can communicate with
another process and vice versa.

MultiThreading Model:

Multithreading allows an application to divide its task into individual threads. In a
multithreaded program, the same process or task can be performed by a number of threads, or
we can say that there is more than one thread to perform the task. With the use of
multithreading, multitasking can be achieved.

The main drawback of single-threaded systems is that only one task can be performed at a time;
multithreading overcomes this drawback by allowing multiple tasks to be performed.

In an operating system, threads are divided into user-level threads and kernel-level threads.
User-level threads are handled independently, above the kernel, and are thereby managed
without any kernel support. On the other hand, the operating system directly manages
kernel-level threads. Nevertheless, there must be a form of relationship between user-level and
kernel-level threads, and three multithreading models describe it:

o Many to one multithreading model


o One to one multithreading model
o Many to Many multithreading models

Many to one multithreading model:


The many-to-one model maps many user-level threads to one kernel thread. This type of
relationship facilitates an effective context-switching environment and is easily implemented
even on a simple kernel with no thread support.
The disadvantage of this model is that, since only one kernel-level thread is scheduled at any
given time, it cannot take advantage of the hardware acceleration offered by multithreaded
processors or multiprocessor systems. In this model, all thread management is done in user
space; if one thread makes a blocking call, the whole process blocks.

In the above figure, the many-to-one model associates all user-level threads with a single
kernel-level thread.

One to one multithreading model


The one-to-one model maps a single user-level thread to a single kernel-level thread. This type
of relationship facilitates running multiple threads in parallel. However, this benefit comes with
a drawback: every new user thread requires creating a corresponding kernel thread, causing
overhead which can hinder the performance of the parent process. The Windows series and
Linux operating systems try to tackle this problem by limiting the growth of the thread count.

In the above figure, the one-to-one model associates each user-level thread with a single
kernel-level thread.

Many to Many Model multithreading model


In this type of model, there are several user-level threads and several kernel-level threads. The
number of kernel threads created depends upon the particular application. The developer can
create threads at both levels, but the numbers need not be the same. The many-to-many model
is a compromise between the other two models. In this model, if any thread makes a blocking
system call, the kernel can schedule another thread for execution. Also, the complexity
introduced in the previous models is not present here. Though this model allows the creation of
multiple kernel threads, true concurrency cannot be achieved on a single processor, because the
kernel can schedule only one thread on it at a time.

In the above figure, the many-to-many model associates several user-level threads with the
same or a smaller number of kernel-level threads.

Threading issues:

The fork() and exec() system calls:

fork() is used to create a duplicate process. The meaning of the fork() and exec() system calls
changes in a multithreaded program.

If one thread in a program calls fork(), does the new process duplicate all threads, or is the new
process single-threaded? Some UNIX systems have chosen to have two versions of fork(): one
that duplicates all threads and another that duplicates only the thread that invoked the fork()
system call.

If a thread calls the exec() system call, the program specified in the parameter to exec() will
replace the entire process, including all threads.
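The classic single-process fork()/exec() pattern is sketched below; in a multithreaded program
the same two calls raise exactly the questions discussed above.

/* Classic fork()/exec() pattern: the child replaces itself with a new program. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* duplicate the calling process */

    if (pid == 0) {
        /* child: exec() replaces the entire process image */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp failed");      /* reached only if exec fails */
        return 1;
    }
    waitpid(pid, NULL, 0);            /* parent waits for the child to finish */
    printf("child %d finished\n", (int)pid);
    return 0;
}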

Signal Handling

Generally, a signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously, based on the
source of and the reason for the event being signalled.

Cancellation

Thread cancellation is the task of terminating a thread before it has completed.


For example, if multiple database threads are concurrently searching through a database and
one thread returns the result, the remaining threads might be cancelled.

Thread pools

With multithreading in a web server, whenever the server receives a request it creates a
separate thread to service that request. A thread pool instead creates a number of threads at
process start-up and places them into a pool, where they sit and wait for work.

Thread Scheduling:

A component of Java that decides which thread to run or execute and which thread to wait is
called a thread scheduler in Java. In Java, a thread is only chosen by a thread scheduler if it
is in the runnable state. However, if there is more than one thread in the runnable state, it is up
to the thread scheduler to pick one of the threads and ignore the other ones. There are some
criteria that decide which thread will execute first. There are two factors for scheduling a thread
i.e. Priority and Time of arrival.

Priority: the priority of each thread lies between 1 and 10. If a thread has a higher priority, it
has a better chance of being picked by the thread scheduler.

Time of Arrival: Suppose two threads of the same priority enter the runnable state, then
priority cannot be the factor to pick a thread from these two threads. In such a case, arrival
time of thread is considered by the thread scheduler. A thread that arrived first gets the
preference over the other threads.

Thread Scheduler Algorithms


First Come First Serve Scheduling:
In this scheduling algorithm, the scheduler picks the threads that arrive first in the runnable
queue. Observe the following table:

Thread    Time of Arrival
t1        0
t2        1
t3        2
t4        3

In the above table, we can see that thread t1 arrived first, then thread t2, then t3, and t4 last,
and the threads are processed in order of their time of arrival.

Hence, thread t1 will be processed first, and thread t4 will be processed last.

Time-slicing scheduling:
Usually, the First Come First Serve algorithm is non-preemptive, which is bad as it may lead to
indefinite blocking (also known as starvation). To avoid that, time slices are provided to the
threads, so that after some time the running thread has to give up the CPU and the other
waiting threads also get time to run their job.

In the above diagram, each thread is given a time slice of 2 seconds. Thus, after 2 seconds, the
first thread leaves the CPU, and the CPU is then captured by Thread2. The same process
repeats for the other threads too.

Preemptive-Priority Scheduling:
The name of the scheduling algorithm denotes that the algorithm is related to the priority of the
threads.
Suppose there are multiple threads available in the runnable state. The thread scheduler picks
the thread that has the highest priority. Since the algorithm is also preemptive, time slices are
provided to the threads to avoid starvation. Thus, after some time, even if the highest-priority
thread has not completed its job, it has to release the CPU because of preemption.

Working of the Java Thread Scheduler

Let's understand the working of the Java thread scheduler. Suppose, there are five threads that
have different arrival times and different priorities. Now, it is the responsibility of the thread
scheduler to decide which thread will get the CPU first.

The thread scheduler selects the thread that has the highest priority, and that thread begins
executing its job. If a thread is already in the runnable state and another thread of higher
priority reaches the runnable state, then the current thread is pre-empted from the processor,
and the newly arrived thread with higher priority gets the CPU time.

When two threads (Thread 2 and Thread 3) have the same priority and arrival time, scheduling
is decided on the basis of the FCFS algorithm. Thus, the thread that arrives first gets the
opportunity to execute first.

Process Co-ordination:

Synchronization:
When two or more processes cooperate with each other, their order of execution must be
preserved; otherwise there can be conflicts in their execution and inappropriate outputs can be
produced.

A cooperative process is one which can affect the execution of other processes or can be
affected by the execution of other processes. Such processes need to be synchronized so that
their order of execution can be guaranteed.

The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization. There are various synchronization mechanisms
that are used to synchronize processes.

Race Condition

A race condition typically occurs when two or more threads read, write, and possibly make
decisions based on memory that they are accessing concurrently.

Critical Section

The regions of a program that access shared resources and may cause race conditions are called
the critical section. To avoid race conditions among the processes, we need to ensure that only
one process at a time can execute within the critical section.
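The classic demonstration of a race condition: two threads increment a shared counter with no
synchronization, so the final value is usually less than expected (a sketch; compile with
-pthread).

/* Race condition demo: two threads increment a shared counter with no locking.
   The final count is usually less than 2,000,000. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write: the critical section */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}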

1. Critical Section Problem:

The critical section is the part of a program which accesses shared resources. The resource may
be any resource in a computer, such as a memory location, a data structure, the CPU, or any I/O
device.

Because the critical section must not be executed by more than one process at the same time,
the operating system faces difficulty in allowing and disallowing processes from entering the
critical section.

The critical section problem is used to design a set of protocols which can ensure that a race
condition among the processes will never arise.

In order to synchronize the cooperative processes, our main task is to solve the critical section
problem. We need to provide a solution in such a way that the following conditions are
satisfied.

Requirements of Synchronization mechanisms

Primary:
1. Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process
is executing inside the critical section, then no other process may enter the critical section.

2. Progress

Progress means that if one process doesn't need to execute in the critical section, then it should
not stop other processes from getting into the critical section.

Secondary:

1. Bounded Waiting

We should be able to bound the waiting time of every process to get into the critical section;
a process must not wait endlessly to enter the critical section.

2. Architectural Neutrality

Our mechanism must be architecture neutral. It means that if our solution works fine on one
architecture, then it should also run on other architectures as well.
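One way to satisfy mutual exclusion in the counter example above is to wrap the critical
section in a pthread mutex. This is only a sketch of one possible mechanism, not the only
solution to the critical section problem.

/* Mutual exclusion with a pthread mutex: only one thread at a time executes
   the critical section, so the final count is exact. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* entry section */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}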
Semaphores:
Semaphores are integer variables that are used to solve the critical section problem by means of
two atomic operations, wait and signal, which are used for process synchronization.

The definitions of wait and signal are as follows:

Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or
negative, no operation is performed and the caller keeps waiting.

wait(S) {
    while (S <= 0)
        ;        // busy wait
    S--;
}

Signal
The signal operation increments the value of its argument S.

signal(S) {
    S++;
}

Types of Semaphores
There are two main types of semaphores: counting semaphores and binary semaphores.

Counting Semaphores

These are integer-valued semaphores with an unrestricted value domain. They are used to
coordinate resource access, where the semaphore count is the number of available resources. If
resources are added, the semaphore count is automatically incremented, and if resources are
removed, the count is decremented.

Binary Semaphores

Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The
wait operation only works when the semaphore is 1, and the signal operation succeeds when
the semaphore is 0. It is sometimes easier to implement binary semaphores than counting
semaphores.
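A counting semaphore initialized to the number of available resources can be sketched with
POSIX semaphores, where sem_wait plays the role of wait and sem_post the role of signal;
the three "resource slots" and five threads are arbitrary. Compile with -pthread.

/* Counting semaphore sketch: 3 resource slots, 5 threads each acquire one slot,
   use it briefly, and release it. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t slots;

static void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);                  /* wait(): decrement, block if zero */
    printf("thread %ld acquired a resource\n", id);
    sleep(1);                          /* use the resource */
    printf("thread %ld released a resource\n", id);
    sem_post(&slots);                  /* signal(): increment, wake a waiter */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);            /* 3 resources available */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}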

Advantages of Semaphores

Some of the advantages of semaphores are as follows:

• Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.

• There is no resource wastage because of busy waiting in semaphores, as processor time is not
wasted unnecessarily checking whether a condition is fulfilled before allowing a process to
access the critical section.

• Semaphores are implemented in the machine-independent code of the microkernel, so they
are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows:

• Semaphores are complicated, so the wait and signal operations must be implemented in the
correct order to prevent deadlocks.

• Semaphores are impractical for large-scale use, as their use leads to loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout for
the system.

• Semaphores may lead to priority inversion, where low-priority processes access the critical
section first and high-priority processes access it later.

Monitors:
A monitor is a synchronization construct that provides threads with mutual exclusion and the
ability to wait() for a given condition to become true. It is an abstract data type containing
shared variables and a collection of procedures that operate on those shared variables. A
process may not directly access the shared data variables; it must use the monitor's procedures,
which control how processes access the shared data.

At any particular time, only one process may be active in a monitor. Other processes that
require access to the shared variables must queue and are only granted access after the previous
process releases the shared variables.

Syntax:
The syntax of a monitor may be written as:

monitor {

    // shared variable declarations
    data variables;

    Procedure P1() { ... }
    Procedure P2() { ... }
    .
    .
    .
    Procedure Pn() { ... }

    Initialization Code() { ... }
}
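C has no monitor construct, but the idea is commonly approximated with a mutex plus a
condition variable. The bounded-counter sketch below is illustrative only; the procedure names
and the limit are made up. Compile with -pthread.

/* Monitor-style bounded counter approximated with a mutex and a condition
   variable: only one thread is "inside" at a time, and deposit() waits for
   the condition count < LIMIT to become true. */
#include <pthread.h>
#include <stdio.h>

#define LIMIT 10

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ok = PTHREAD_COND_INITIALIZER;
static int count = 0;                      /* shared variable of the "monitor" */

void deposit(void) {                       /* monitor procedure */
    pthread_mutex_lock(&m);                /* enter the monitor */
    while (count >= LIMIT)
        pthread_cond_wait(&ok, &m);        /* wait() releases the monitor while blocked */
    count++;
    pthread_mutex_unlock(&m);              /* leave the monitor */
}

void withdraw(void) {                      /* monitor procedure */
    pthread_mutex_lock(&m);
    count--;
    pthread_cond_signal(&ok);              /* wake one waiting depositor */
    pthread_mutex_unlock(&m);
}

int main(void) {
    deposit();
    withdraw();
    printf("count = %d\n", count);
    return 0;
}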

Advantages

1. Mutual exclusion is automatic in monitors.


2. Monitors are less difficult to implement than semaphores.
3. Monitors may overcome the timing errors that occur when semaphores are used.
4. Monitors are a collection of procedures and condition variables that are combined in a
special type of module.

Disadvantages

1. Monitors must be built into the programming language.


2. The compiler must generate code for them.
3. It gives the compiler the additional burden of knowing what operating system features are
available for controlling access to critical sections in concurrent processes.

DeadLock Characterization:
DeadLock:

Every process needs some resources to complete its execution. However, resources are granted
in a sequential order:

1. The process requests some resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process in a set waits for a resource which is assigned to
some other process. In this situation, none of the processes gets executed, since the resource
each needs is held by some other process which is itself waiting for another resource to be
released.
Conditions for Deadlock (Deadlock Characterization):
1. Mutual Exclusion

A resource can only be shared in a mutually exclusive manner; two processes cannot use the
same resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No Preemption

A resource, once allocated to a process, cannot be forcibly taken away from it; the process
releases the resource only voluntarily, after it has finished using it.

4. Circular Wait

All the processes must be waiting for resources in a cyclic manner, so that the last process is
waiting for a resource which is being held by the first process.
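All four conditions can be observed in a small two-thread sketch in which each thread holds
one lock and then waits for the other; the program usually hangs, which is exactly the point.
Compile with -pthread.

/* Deadlock demonstration: thread A holds lock1 and waits for lock2, thread B
   holds lock2 and waits for lock1. Mutual exclusion, hold-and-wait, no
   preemption, and circular wait are all present, so the program usually hangs. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock1);        /* hold lock1 ... */
    sleep(1);
    pthread_mutex_lock(&lock2);        /* ... and wait for lock2 */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *thread_b(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock2);        /* hold lock2 ... */
    sleep(1);
    pthread_mutex_lock(&lock1);        /* ... and wait for lock1: circular wait */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);             /* never returns once deadlocked */
    pthread_join(b, NULL);
    printf("no deadlock this time\n");
    return 0;
}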
Methods For Handling Deadlocks:
1. Deadlock Prevention
2. Deadlock Avoidance (Banker's Algorithm)
3. Deadlock Detection & Recovery
4. Deadlock Ignorance (Ostrich Method)
These are explained below.
1. Deadlock Prevention: The strategy of deadlock prevention is to design the system in such a
way that the possibility of deadlock is excluded. The indirect methods prevent the occurrence of
one of three necessary conditions of deadlock, i.e., mutual exclusion, no pre-emption, and hold
and wait. The direct method prevents the occurrence of circular wait.
Prevention techniques: mutual exclusion is supported by the OS. The hold-and-wait condition
can be prevented by requiring that a process request all of its required resources at one time,
blocking the process until all of its requests can be granted simultaneously. But this prevention
does not yield good results because:
• a long waiting time is required
• allocated resources are used inefficiently
• a process may not know all its required resources in advance
Techniques for no pre-emption are:
• If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, all resources currently being held are released and, if necessary,
requested again together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS may
pre-empt the second process and require it to release its resources. This works only if both
processes do not have the same priority.
2. Deadlock Avoidance: The deadlock avoidance algorithm works by proactively looking for
potential deadlock situations before they occur. It does this by tracking the resource usage of
each process and identifying conflicts that could potentially lead to a deadlock. If a potential
deadlock is identified, the algorithm takes steps to resolve the conflict, such as rolling back one
of the processes or pre-emptively allocating resources to other processes. The deadlock
avoidance algorithm is designed to minimize the chances of a deadlock occurring, although it
cannot guarantee that a deadlock will never occur. This approach allows the three necessary
conditions of deadlock but makes judicious choices to ensure that the deadlock point is never
reached; it allows more concurrency than prevention. A decision is made dynamically whether
the current resource allocation request will, if granted, potentially lead to deadlock. It requires
knowledge of future process requests. Two techniques to avoid deadlock are:
1. Process initiation denial
2. Resource allocation denial
Advantages of deadlock avoidance techniques:
• It is not necessary to pre-empt and roll back processes
• It is less restrictive than deadlock prevention
Disadvantages:
• Future resource requirements must be known in advance
• Processes can be blocked for long periods
• There must exist a fixed number of resources for allocation
Banker's Algorithm:
The Banker's Algorithm is based on the concept of resource allocation graphs. A resource
allocation graph is a directed graph where each node represents a process, and each edge
represents a resource. The state of the system is represented by the current allocation of
resources between processes. For example, if the system has three processes, each of which is
using two resources, processes A, B, and C would be the nodes, and the resources they are
using would be the edges connecting them.
The Banker's Algorithm works by analyzing the state of the system and determining whether it
is in a safe state or at risk of entering a deadlock. To determine if a system is in a safe state, the
Banker's Algorithm uses two matrices: the available matrix and the need matrix. The available
matrix contains the amount of each resource currently available. The need matrix contains the
amount of each resource required by each process.
The Banker's Algorithm then checks to see if a process can be completed without overloading
the system. It does this by subtracting the amount of each resource used by the process from
the available matrix and adding it to the need matrix. If the result is in a safe state, the process
is allowed to proceed; otherwise, it is blocked until more resources become available.
The Banker's Algorithm is an effective way to prevent deadlocks in multiprogramming systems.
It is used in many operating systems, including Windows and Linux. In addition, it is used in
many other types of systems, such as manufacturing systems and banking systems.
The Banker's Algorithm is a powerful tool for resource allocation problems, but it is not
foolproof. It can be fooled by processes that consume more resources than they need, by
processes that produce more resources than they need, or by processes that consume resources
in an unpredictable manner. To prevent these types of problems, it is important to carefully
monitor the system to ensure that it is in a safe state.
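A sketch of the safety check at the heart of the Banker's Algorithm is given below; the
Allocation, Max, and Available values are assumed purely for illustration.

/* Banker's Algorithm safety check (sketch). Allocation, Max, and Available
   values are assumed for illustration. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* number of processes      */
#define R 3   /* number of resource types */

int main(void) {
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    int max[P][R]   = {{2, 2, 2}, {3, 2, 2}, {5, 1, 3}};
    int avail[R]    = {3, 3, 2};
    int need[P][R];
    bool finished[P] = {false};

    for (int i = 0; i < P; i++)                /* Need = Max - Allocation */
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    int done = 0;
    while (done < P) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {                     /* pretend P_i runs to completion */
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];   /* and releases its resources */
                finished[i] = true;
                printf("P%d can finish\n", i);
                done++;
                progress = true;
            }
        }
        if (!progress) { printf("UNSAFE state\n"); return 1; }
    }
    printf("SAFE state\n");
    return 0;
}

With these assumed values the check finds the safe sequence P0, P1, P2 and reports a safe
state.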
3. Deadlock Detection: Deadlock detection employs an algorithm that tracks the circular
waiting and kills one or more processes so that the deadlock is removed. The system state is
examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved
by aborting and restarting a process, relinquishing all the resources that the process held.
• This technique does not limit resource access or restrict process action.
• Requested resources are granted to processes whenever possible.
• It never delays process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
4. Deadlock Ignorance: In the deadlock ignorance method, the OS acts as if deadlock never
occurs and completely ignores it even when it does occur. This method is applicable only if
deadlock occurs very rarely. The algorithm is very simple: "if a deadlock occurs, simply reboot
the system and act as if the deadlock never occurred." That is why it is called the Ostrich
Algorithm.
Advantages:
• The Ostrich Algorithm is relatively easy to implement and is effective in most cases.
• It avoids the overhead of handling deadlocks by ignoring their presence.
Disadvantages:
• The Ostrich Algorithm does not provide any information about the deadlock situation.
• It can lead to reduced performance, as the system may be blocked for a long time.
• It can lead to resource leaks, as resources are not released while the system is blocked due to
deadlock.

Recovery From Deadlock:

"Recovery from deadlock" refers to the set of techniques and algorithms designed to detect,
resolve, or mitigate deadlock situations. These methods ensure that the system can continue
processing tasks efficiently without being trapped in a permanent standstill.

Approaches To Breaking a Deadlock

Process Termination
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two
methods:
1. Abort all the deadlocked processes: aborting all the processes will certainly break the
deadlock, but at great expense. The deadlocked processes may have been computing for a long
time, and the results of those partial computations must be discarded and will probably have to
be recomputed later.
2. Abort one process at a time until the deadlock is eliminated: abort one deadlocked process at
a time until the deadlock cycle is eliminated from the system. With this method there may be
considerable overhead, because after aborting each process we have to run a deadlock detection
algorithm to check whether any processes are still deadlocked.
Advantages of Process Termination
• It is a simple method for breaking a deadlock.
• It ensures that the deadlock will be resolved quickly, as all processes involved in the deadlock
are terminated simultaneously.
• It frees up resources that were being used by the deadlocked processes, making those
resources available for other processes.
Disadvantages of Process Termination
• It can result in the loss of data and other resources that were being used by the terminated
processes.
• It may cause further problems in the system if the terminated processes were critical to the
system's operation.
• It may result in a waste of resources, as the terminated processes may have already completed
a significant amount of work before being terminated.
