
Artificial Intelligence and Data Science
Operating System
by
Prof. A. A. Salunke, M.Tech. ACDS

Course Code: 217521
SEM III
Credit: 03
Unit II
Process Management
06 Hrs.
In this chapter, you will learn what processes are and how they work.
• Process concept
• Process Control Block (PCB)
• Process operations
• Process scheduling:
– Types of process schedulers
– Types of scheduling: preemptive, non-preemptive
– Scheduling algorithms:
• FCFS
• SJF
• RR
• Priority
• Inter-process communication (IPC)
• Threads:
• Multithreaded models
• Implicit threading
• Threading issues
Process concept
• A process is a program in execution.

• A process is more than the program code, which is sometimes known as the text section.

• It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.

• A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables.

• A process may also include a heap, which is memory that is dynamically allocated during process run time.
(Figure: the structure of a process in memory. The stack holds temporary data such as function parameters, return addresses, and local variables; the heap is memory dynamically allocated during process run time; the data section contains global variables; the text section holds the program code.)


We emphasize that a program by itself is not a process.

A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file).

In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

A program becomes a process when an executable file is loaded into memory.

Process State
As a process executes, it changes state.
The state of a process is defined in part by the current activity of that process.
A process may be in one of the following states:

• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
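
As a rough illustration (not from the slides), these five states could be modeled in C as a simple enumeration; real kernels track additional states, such as stopped or zombie:

/* Minimal sketch of the process states listed above. */
enum proc_state {
    PROC_NEW,        /* being created */
    PROC_READY,      /* waiting to be assigned to a processor */
    PROC_RUNNING,    /* instructions are being executed */
    PROC_WAITING,    /* waiting for an event (I/O completion, a signal) */
    PROC_TERMINATED  /* finished execution */
};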
Process Control Block (PCB)
Information associated with each process
Each process is represented in the operating system by a process control
block (PCB)—also called a task control block.
The process control block is a data structure used by the operating system to store all information related to a process.
A process control block contains many pieces of information associated with a specific process, including these:

Process state. The state may be new, ready, running, waiting, halted, and so on.

Program counter. The counter indicates the address of the next instruction to be executed for this process.

CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.

CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory-management information. This information may include such items as the values of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.

PCB simply serves as the repository for any information that may vary from process to
process.
Process Control Block (PCB)

Information associated with each process


(also called task control block)
• Process state – running, waiting, etc.
• Program counter – location of the next instruction to execute
• CPU registers – contents of all process-centric registers
• CPU-scheduling information – priorities, scheduling queue pointers
• Memory-management information – memory allocated to the process
• Accounting information – CPU used, clock time elapsed since start, time limits
• I/O status information – I/O devices allocated to the process, list of open files
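
Putting these fields together, a PCB can be sketched as a C struct. The field names below are illustrative assumptions, not from any real kernel (Linux's equivalent, task_struct, is far larger):

#include <stdint.h>

/* Illustrative sketch of a Process Control Block; all field names
   are hypothetical. */
struct pcb {
    int        pid;               /* process identifier */
    int        state;             /* new, ready, running, waiting, terminated */
    uint64_t   program_counter;   /* address of the next instruction */
    uint64_t   registers[32];     /* saved CPU registers */
    int        priority;          /* CPU-scheduling information */
    struct pcb *next;             /* link for scheduling queues */
    uint64_t   base, limit;       /* memory-management information */
    uint64_t   cpu_time_used;     /* accounting information */
    int        open_files[16];    /* I/O status: open file descriptors */
};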
Context switching
• When you run multiple software applications on your
operating system, it’s important to ensure that all processes run
smoothly without blocking each other. Therefore, you need to
allocate CPU time to each process. This is where context
switching helps.
• Context switching is the technique the operating system uses to switch the CPU from one process to another. When a switch occurs, the system saves the state of the old running process (its program counter and registers) into its PCB and assigns the CPU to a new process to complete its tasks.
Context switching
• Initially, Process P1 is executing on the CPU, while
Process P2 remains idle. When an interrupt or system
call occurs, the CPU saves the current state of P1,
including the program counter and register values, into
PCB1.
• Once P1’s context is saved, the CPU reloads the state
of P2 from PCB2, transitioning P2 to the executing
state. Meanwhile, P1 moves to the idle state. This
process repeats when another interrupt or system call
happens, ensuring smooth switching between the two
processes.
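
The sequence above can be sketched in C; save_cpu_state() and load_cpu_state() are stand-ins for the architecture-specific assembly a real kernel would use:

/* Sketch of a context switch between two processes. */
struct pcb {
    unsigned long program_counter;
    unsigned long registers[16];
    enum { READY, RUNNING } state;
};

static void save_cpu_state(struct pcb *p) { /* store PC and registers into p */ }
static void load_cpu_state(struct pcb *p) { /* reload PC and registers from p */ }

void context_switch(struct pcb *old, struct pcb *new) {
    save_cpu_state(old);      /* save P1's state into PCB1 */
    old->state = READY;       /* P1 leaves the CPU */

    load_cpu_state(new);      /* restore P2's state from PCB2 */
    new->state = RUNNING;     /* P2 now executes */
}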
Types of Schedulers
• Short-term scheduler (or CPU scheduler) – selects which process
should be executed next and allocates CPU
– Sometimes the only scheduler in a system
– Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
• Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue
– Long-term scheduler is invoked infrequently (seconds, minutes) ⇒
(may be slow)
– The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
– CPU-bound process – spends more time doing computations; few
very long CPU bursts
• Long-term scheduler strives for good process mix
Types of scheduling: Preemptive, Non-preemptive
Preemptive Scheduling

In preemptive scheduling, the CPU executes a process for a limited period of time, after which the process has to wait for its next turn. That is, the state of a process may be changed: the process may go from the running state to the ready state, or from the waiting state to the ready state.
Resources are allocated to the process for a limited amount of time; after that they are taken back, and the process returns to the ready queue if it still has some CPU burst (execution) time remaining.
Round-robin and preemptive SJF are examples of preemptive scheduling algorithms.
Non-preemptive Scheduling

In non-preemptive scheduling, if a resource is allocated to a process, that resource is not taken back until the process completes.
Other processes in the ready queue have to wait their turn and cannot forcefully seize the CPU.
Once the CPU is allocated to a process, it is held by that process until the process completes its execution or enters the waiting state for an I/O operation.
Preemptive:
The operating system may interrupt a process in the middle of its CPU execution and take the CPU away to give to another process (for example, when its time slice expires).

Non-preemptive:
A process relinquishes control of the CPU only when it finishes its current CPU burst.
Threads:
• Multithreaded models
• Implicit threading
• Threading issues
Threads
A thread is a basic unit of CPU utilization, consisting of a thread ID, a program counter, a set of registers, and a stack.

A thread is sometimes called a lightweight process.

A thread shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

Every thread belongs to exactly one process; without a process, a thread cannot exist.

Each thread represents a distinct flow of control.

Threads are widely used in implementing web servers and network servers.

Threads also make parallel execution of an application possible on shared-memory multiprocessors.

Traditional (heavyweight) processes have a single thread of control: there is one program counter, and one sequence of instructions that can be carried out at any given time.
Motivation
Most software applications that run on modern computers are multithreaded. An application typically is implemented as a separate process with several threads of control.

A web browser might have one thread display images or text while another thread retrieves data from the network, for example.

A word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
Process vs. thread:

• A process is a program in execution; a thread is a flow of execution through the process code.
• Process switching requires interaction with the operating system; thread switching does not.
• A process is heavyweight; a thread is lightweight.
• If a process is blocked, its remaining work cannot proceed until it is unblocked; if one thread is blocked, another thread of the same process can carry on the task.
• A process consumes more resources; a thread consumes fewer.
• Context switching between processes requires more time than between threads.
• A process needs more time for termination; a thread needs less.
• A process takes more time to create; a thread takes less.
• In terms of communication, processes are less efficient; threads are more efficient.
• Process switching uses an interface in the operating system; thread switching needs no operating-system call.
• Processes are isolated from one another; threads share memory.
In certain situations, a single application may be required to perform several similar tasks.

For example,
⮚ a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several (perhaps thousands of) clients concurrently accessing it.

If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and a client might have to wait a very long time for its request to be serviced.

One solution is to have the server run as a single process that accepts requests. When the server receives a request, it creates a separate process to service that request.

Process creation is time consuming and resource intensive, however. If the new process will perform the same tasks as the existing process, why incur all that overhead?

It is generally more efficient to use one process that contains multiple threads.

If the web-server process is multithreaded, the server creates a separate thread that listens for client requests. When a request is made, rather than creating another process, the server creates a new thread to service the request and resumes listening for additional requests, as in the sketch following the figure.

Multithreaded server architecture
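
A rough Pthreads sketch of this thread-per-request pattern; accept_request() and service_request() are dummy stand-ins for real networking code, and error handling is omitted:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helpers standing in for real request handling. */
static int accept_request(void) {
    sleep(1);                         /* pretend to block on a client */
    static int next_id = 0;
    return next_id++;
}

static void service_request(int req) {
    printf("servicing request %d\n", req);
}

static void *worker(void *arg) {
    service_request((int)(long)arg);  /* handle one client */
    return NULL;
}

int main(void) {
    for (;;) {                        /* resume listening after each request */
        int req = accept_request();
        pthread_t tid;
        pthread_create(&tid, NULL, worker, (void *)(long)req);
        pthread_detach(tid);          /* no join; thread cleans up itself */
    }
}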


⮚ Most operating-system kernels are now multithreaded.

⮚ Several threads operate in the kernel, and each thread performs a specific task, such

as managing devices, managing memory, or interrupt handling.

⮚ For example, Solaris has a set of threads in the kernel specifically for interrupt

handling;

⮚ Linux uses a kernel thread for managing the amount of free memory in the system.
Benefits
The benefits of multithreaded programming can be broken down into four major categories:

1. Responsiveness:
One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.

2. Resource sharing
Threads share the memory and the resources of the process to which they belong by default.
The benefit of sharing code and data is that it allows an application to have several different threads of
activity within the same address space.

3. Economy
Creating and managing threads ( and context switches between them ) is much faster than performing
the same tasks for processes.

4. Scalability.
The benefits of multithreading can be even greater in a multiprocessor architecture, where threads
may be running in parallel on different processing cores.

A single-threaded process can run on only one processor, regardless of how many are available.
User Threads and Kernel Threads
● User threads - management done by user-level threads library
● Three primary thread libraries:
● POSIX Pthreads
● Windows threads
● Java threads

● Kernel threads - Supported by the Kernel


● Examples – virtually all general-purpose operating systems, including:
● Windows
● Solaris
● Linux
● Tru64 UNIX
● Mac OS X
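
As a minimal taste of one of these libraries, the following Pthreads program creates a single thread and waits for it to finish (POSIX systems; compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>

static void *say_hello(void *arg) {
    printf("hello from a thread\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, say_hello, NULL);  /* start the thread */
    pthread_join(tid, NULL);                      /* wait for it to finish */
    return 0;
}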
Multithreading Models
● Many-to-One

● One-to-One

● Many-to-Many
Many-to-One
● The many-to-one model maps many user-level threads to one kernel thread.
● Thread management is done by the thread library in user space, so it is efficient.
● One thread blocking causes all to block.
● Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time.
● Few systems currently use this model.
● Examples:
● Solaris Green Threads
● GNU Portable Threads
One-to-One
● The one-to-one model maps each user thread to a kernel thread.
● It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.
● It also allows multiple threads to run in parallel on multiprocessors.
● The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
● Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system.
● Linux, along with the family of Windows operating systems, implements the one-to-one model.
Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads.

The number of kernel threads may be specific to either a particular application or a particular machine.

With the many-to-many model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Two-level Model
● Similar to M:M, except that it allows a user thread
to be bound to kernel thread
● Examples
● IRIX
● HP-UX
● Tru64 UNIX
● Solaris 8 and earlier
Implicit threading
⮚ With the continued growth of multicore processing, applications containing hundreds, or even thousands, of threads are looming on the horizon.

⮚ One way to address the difficulties and better support the design of multithreaded applications is to transfer the creation and management of threading from application developers to compilers and run-time libraries.

⮚ This strategy, termed implicit threading, is a popular trend today.


Implicit threading
Three alternative approaches for designing multithreaded programs that can take advantage of multicore processors through implicit threading are:

⮚ Thread Pools
⮚ OpenMP
⮚ Grand Central Dispatch
Thread Pools
1. The general idea behind a thread pool is to create a number of threads at process startup and place them into a pool, where they sit and wait for work.

2. When a server receives a request, it awakens a thread from this pool, if one is available, and passes it the request for service. Once the thread completes its service, it returns to the pool and awaits more work.

3. If the pool contains no available thread, the server waits until one becomes free.
Thread pools offer these benefits:
1. Servicing a request with an existing thread is faster than waiting to create a thread.

2. A thread pool limits the number of threads that exist at any one point. This is particularly important on systems that cannot support a large number of concurrent threads.
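
A compact Pthreads sketch of the idea: a fixed pool of workers pulls jobs from a mutex-protected bounded queue. This is a simplification (jobs are just ints, and there is no shutdown path):

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_CAP 64

static int queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)                        /* sit and wait for work */
            pthread_cond_wait(&not_empty, &lock);
        int job = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        printf("worker handling job %d\n", job);  /* service the request */
    }
    return NULL;
}

void submit(int job) {                            /* called by the server */
    pthread_mutex_lock(&lock);
    while (count == QUEUE_CAP)                    /* queue full: wait */
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = job;
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

int main(void) {
    pthread_t tids[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)           /* create pool at startup */
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int j = 0; j < 10; j++)
        submit(j);
    pthread_exit(NULL);                           /* let workers keep running */
}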
OpenMP
OpenMP is a set of compiler directives as well as an API for programs written in C, C++, or FORTRAN that provides support for parallel programming in shared-memory environments.

OpenMP identifies parallel regions as blocks of code that may run in parallel.

Application developers insert compiler directives into their code at parallel regions, and these directives instruct the OpenMP run-time library to execute the region in parallel.

The following C program illustrates a compiler directive above a parallel region containing a printf() statement:
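
A minimal sketch of the program described (the classic form of this example; compile with gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* The directive marks the following block as a parallel region. */
    #pragma omp parallel
    {
        printf("I am a parallel region.\n");
    }
    return 0;
}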
When OpenMP encounters this directive, it creates as many threads as there are processing cores in the system. Thus, for a dual-core system, two threads are created; for a quad-core system, four are created; and so forth.
All the threads then simultaneously execute the parallel region. As each thread exits the parallel region, it is terminated.
Grand Central Dispatch
Grand Central Dispatch (GCD), a technology for Apple's Mac OS X and iOS operating systems, is a combination of extensions to the C language, an API, and a run-time library that allows application developers to identify sections of code to run in parallel.

Like OpenMP, GCD manages most of the details of threading.

GCD identifies extensions to the C and C++ languages known as blocks.

A block is simply a self-contained unit of work. It is specified by a caret (^) inserted in front of a pair of braces { }.

A simple example of a block is shown below:
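
For instance, a block containing a single statement (a fragment, assuming a compiler with the blocks extension, such as Apple's clang):

^{ printf("I am a block.\n"); }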


GCD schedules blocks for run-time execution by placing them on a dispatch queue. When it removes a block from a queue, it assigns the block to an available thread from the thread pool it manages.

GCD identifies two types of dispatch queues: serial and concurrent.
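
A sketch of submitting a block to one of the system's global concurrent queues (real GCD calls from <dispatch/dispatch.h>; the sleep() is only a crude way to let the block run before main exits):

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Obtain a system-managed concurrent dispatch queue. */
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* Submit the block asynchronously; GCD runs it on a pooled thread. */
    dispatch_async(queue, ^{
        printf("I am a block.\n");
    });

    sleep(1);    /* crude wait so the block can execute */
    return 0;
}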


Threading Issues
The fork() and exec() System Calls

The fork() system call is used to create a separate, duplicate process.

If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded?

Some UNIX systems have chosen to have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
The exec() system call

⮚ If a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all threads.

⮚ Which of the two versions of fork() to use depends on the application.

⮚ If exec() is called immediately after forking, then duplicating all threads is unnecessary, as the program specified in the parameters to exec() will replace the process. In this instance, duplicating only the calling thread is appropriate.

⮚ If, however, the separate process does not call exec() after forking, the separate process should duplicate all threads, as in the sketch below.
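
A minimal POSIX sketch of the fork-then-exec pattern (running ls -l is an arbitrary example):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a duplicate process */
    if (pid == 0) {
        /* Child: exec() immediately, so duplicating every thread of
           the parent would have been wasted work anyway. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec() fails */
        return 1;
    }
    waitpid(pid, NULL, 0);            /* parent waits for the child */
    return 0;
}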
2. Thread Cancellation

Terminating a thread in the middle of its execution is termed thread cancellation. The thread that is to be cancelled is termed the target thread.

Thread cancellation can be performed in two ways:

Asynchronous cancellation: one thread immediately terminates the target thread.

Deferred cancellation: the target thread is scheduled to check itself at regular intervals to determine whether it can terminate safely.
The issues related to the target thread are listed below:

⮚ What if resources have been allocated to the target thread being cancelled?

⮚ What if the target thread is terminated while it is updating data it shares with other threads?

Asynchronous cancellation, in which a thread immediately cancels the target thread without checking whether it is holding any resources, is therefore troublesome.
With deferred cancellation, in contrast, one thread indicates that the target thread is to be cancelled, and the target thread checks a flag to confirm whether it can be cancelled safely. The points at which a thread can be cancelled safely are termed cancellation points by Pthreads.
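
A sketch of deferred cancellation using Pthreads (deferred is the default cancelability type; pthread_testcancel() marks an explicit cancellation point):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *target(void *arg) {
    for (;;) {
        /* ... perform one safe unit of work ... */
        pthread_testcancel();     /* explicit cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);
    sleep(1);
    pthread_cancel(tid);          /* request cancellation of the target */
    pthread_join(tid, NULL);      /* wait until it has actually terminated */
    printf("target thread cancelled\n");
    return 0;
}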


Signal Handling

A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously. All signals, whether synchronous or asynchronous, follow the same pattern:

1. A signal is generated by the occurrence of a particular event.

2. The signal is delivered to a process.

3. Once delivered, the signal must be handled.

Examples of synchronous signals include illegal memory access and division by 0. If a running program performs either of these actions, a signal is generated.

Synchronous signals are delivered to the same process that performed the operation that caused the signal (that is the reason they are considered synchronous).
When a signal is generated by an event external to a running process, that process receives the signal asynchronously. Examples of such signals include terminating a process with specific keystrokes and having a timer expire. Typically, an asynchronous signal is sent to another process.

Every signal may be handled by one of two possible handlers:

1. A default signal handler

2. A user-defined signal handler
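
For example, a user-defined handler for SIGINT (Ctrl-C) can be installed as in this minimal POSIX sketch (printf() is not strictly async-signal-safe; it is used only to keep the example short):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void handler(int sig) {
    printf("caught signal %d\n", sig);   /* user-defined handling */
}

int main(void) {
    signal(SIGINT, handler);    /* replace the default handler */
    for (;;)
        pause();                /* wait for signals to arrive */
}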


Single-threaded programs:

Signal handling is straightforward in a single-threaded program: signals are always delivered directly to the process.

Multithreaded programs:

Delivering signals is more complicated in a multithreaded program, where a process may have several threads. To which thread should a signal be delivered? In general, the following options exist:

⮚ Deliver the signal to the thread to which the signal applies.

⮚ Deliver the signal to every thread in the process.

⮚ Deliver the signal to certain threads in the process.

⮚ Assign a specific thread to receive all signals for the process.

The standard UNIX function for delivering a signal is

kill(pid_t pid, int signal)


Thread-Local Storage

Threads belonging to the same process share the data of that process. The issue here is: what if each thread of the process needs its own copy of certain data? We call such data thread-local storage (or TLS). Data associated with a specific thread is referred to as thread-specific data.
Consider a transaction-processing system, in which each transaction can be processed in a different thread. To identify each transaction uniquely, we associate a unique identifier with it.

Most thread libraries, including Win32 and Pthreads, provide some form of support for thread-specific data, as in the sketch below. Java provides support as well.
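
A sketch of the transaction example using the Pthreads thread-specific-data API (txn_key and worker are illustrative names):

#include <pthread.h>
#include <stdio.h>

static pthread_key_t txn_key;    /* one key; each thread has its own value */

static void *worker(void *arg) {
    pthread_setspecific(txn_key, arg);    /* store this thread's copy */
    long id = (long)pthread_getspecific(txn_key);
    printf("thread handling transaction %ld\n", id);
    return NULL;
}

int main(void) {
    pthread_key_create(&txn_key, NULL);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}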
Scheduler Activations

⮚ A final issue to be considered with multithreaded programs concerns communication between the kernel and the thread library, which may be required by the many-to-many and two-level models.

⮚ Such coordination allows the number of kernel threads to be dynamically adjusted to help ensure the best performance.

One scheme for communication between the user-thread library and the kernel is known as scheduler activation. It works as follows:

The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor. To the user-thread library, the LWP appears to be a virtual processor on which the application can schedule a user thread to run. Each LWP is attached to a kernel thread, and it is kernel threads that the operating system schedules to run on physical processors.