This document discusses the concept of processes in operating systems, defining a process as a program in execution and detailing its attributes, states, and control mechanisms. It explains process scheduling, including preemptive and non-preemptive scheduling, and describes the role of process control blocks (PCBs) and context switching. Additionally, it outlines the types of schedulers and the importance of managing CPU and I/O bound processes for efficient system performance.

Unit 3

3.1 Process: Process States, Process Control Block


Program vs Process
A process is a program in execution. For example, when we write a program in
C or C++ and compile it, the compiler creates binary code. The original code
and binary code are both programs. When we actually run the binary code, it
becomes a process.
A process is an ‘active’ entity, instead of a program, which is considered a
‘passive’ entity. A single program can create many processes when run
multiple times; for example, when we open a .exe or binary file multiple times,
multiple instances begin (multiple processes are created).
What does a process look like in memory?

Text Section: Contains the compiled program code. The current activity is
represented by the value of the Program Counter.
Stack: The stack contains temporary data, such as function parameters, return
addresses, and local variables.
Data Section: Contains global and static variables.
Heap Section: Memory dynamically allocated to the process during its run time.
Attributes or Characteristics of a Process
A process has the following attributes:
1. Process ID: A unique identifier assigned by the operating system.
2. Process State: Can be ready, running, etc.
3. CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU).
4. Accounting information: The amount of CPU time used, time limits, etc.
5. I/O status information: For example, devices allocated to the process,
open files, etc.
6. CPU scheduling information: For example, priority (different processes
may have different priorities; for example, a shorter process may be
assigned high priority in shortest job first scheduling).
All of the above attributes of a process are also known as the context of the
process.
Every process has its own process control block (PCB); i.e., each process has
a unique PCB. All of the above attributes are part of the PCB.

States of a Process in Operating Systems


States of a process are as follows:
1. New (Create) – In this step, the process is about to be created but not yet
created; the program is present in secondary memory and will be picked
up by the OS to create the process.
2. Ready – After creation, the process enters the ready state, i.e., the
process is loaded into main memory. The process here is ready to run
and is waiting to get CPU time for its execution. Processes that are
ready for execution by the CPU are maintained in a queue of ready
processes.
3. Run – The process is chosen by the CPU for execution, and the instructions
within the process are executed by one of the available CPU cores.
4. Blocked or wait – Whenever the process requests access to I/O, needs
input from the user, or needs access to a critical region (the lock for
which is already acquired), it enters the blocked or wait state. The
process continues to wait in main memory and does not require the CPU.
Once the I/O operation is completed, the process goes to the ready state.
5. Terminated or completed – The process is killed and its PCB is deleted.

CPU and IO Bound Processes:


If a process is intensive in terms of CPU operations, it is called a CPU-bound
process. Similarly, if a process is intensive in terms of I/O operations, it is
called an IO-bound process.
Multiprogramming – We have many processes ready to run. There are two
types of multiprogramming:

Pre-emption – The process is forcefully removed from the CPU. Pre-emption is
also called time sharing or multitasking.
Non-pre-emption – Processes are not removed from the CPU until they complete
their execution.
Degree of multiprogramming –
The maximum number of processes that can reside in the ready state decides
the degree of multiprogramming; e.g., if the degree of multiprogramming =
100, at most 100 processes can reside in the ready state.

Process Table and Process Control Block (PCB)


While creating a process the operating system performs several operations. To
identify the processes, it assigns a process identification number (PID) to each
process. As the operating system supports multi-programming, it needs to
keep track of all the processes. For this task, the process control block (PCB) is
used to track the process’s execution status. Each block of memory contains
information about the process state, program counter, stack pointer, status of
opened files, scheduling algorithms, etc. All this information must be saved
when the process is switched from one state to another. When the process
makes a transition from one state to another, the operating system must
update the information in the process's PCB.
A process control block (PCB) contains information about the process, i.e.
registers, quantum, priority, etc. The process table is an array of PCBs;
logically, it contains a PCB for each of the current processes in the system.
 Pointer – It is a stack pointer which is required to be saved when the
process is switched from one state to another to retain the current
position of the process.
 Process state – It stores the respective state of the process.
 Process number – Every process is assigned with a unique id known as
process ID or PID which stores the process identifier.
 Program counter – It stores the counter which contains the address of
the next instruction that is to be executed for the process.
 Register – These are the CPU registers, which include the accumulator,
base and index registers, and general-purpose registers.
 Memory limits – This field contains information about the memory-
management system used by the operating system. This may include the
page tables, segment tables, etc.
 Open files list – This information includes the list of files opened for a
process.

 Miscellaneous accounting and status data – This field includes
information about the amount of CPU used, time constraints, job or
process number, etc.
The process control block also stores the register contents, known as the
execution context of the processor, from when the process was last blocked.
This execution context enables the operating system to restore a process's
execution context when the process returns to the running state. When the
process makes a transition from one state to another, the operating system
updates the information in the process's PCB. The operating system maintains
pointers to each process's PCB in a process table so that it can access the
PCB quickly.

Interrupts
An interrupt is a signal emitted by hardware or software when a process or an
event needs immediate attention. It alerts the processor to a high-priority
condition requiring interruption of the currently executing code. For I/O
devices, one of the bus control lines is dedicated to this purpose and is
called the interrupt-request line; the routine the processor executes in
response to an interrupt is called the Interrupt Service Routine (ISR).

When a device raises an interrupt at, say, instruction i, the processor
first completes the execution of instruction i. Then it loads the Program
Counter (PC) with the address of the first instruction of the ISR. Before
loading the Program Counter with this address, the address of the
interrupted instruction is moved to a temporary location. Therefore,
after handling the interrupt, the processor can continue from instruction i+1.

While the processor is handling the interrupts, it must inform the device
that its request has been recognized so that it stops sending the
interrupt request signal. Also, saving the registers so that the
interrupted process can be restored in the future, increases the delay
between the time an interrupt is received and the start of the execution
of the ISR. This is called Interrupt Latency.
Hardware Interrupts:
In a hardware interrupt, all the devices are connected to the Interrupt
Request Line. A single request line is used for all the n devices. To
request an interrupt, a device closes its associated switch. When a
device requests an interrupt, the value of INTR is the logical OR of the
requests from individual devices.

The sequence of events involved in handling an IRQ:


a. Devices raise an IRQ.
b. The processor interrupts the program currently being executed.
c. The device is informed that its request has been recognized and
the device deactivates the request signal.
d. The requested action is performed.
e. An interrupt is enabled and the interrupted program is resumed.
The processor's priority is encoded in a few bits of the PS (Processor
Status) register. It can be changed by program instructions that write into
the PS. The processor is in supervisor mode only while executing OS
routines. It switches to user mode before executing application
programs.

3.2 Process Scheduling: Scheduling Queues, Schedulers, Context Switch
Categories of Scheduling in OS
There are two categories of scheduling:
1. Non-preemptive: In non-preemptive scheduling, a resource can't be taken
away from a process until the process completes execution. The switching of
resources occurs when the running process terminates or moves to a waiting
state.
2. Preemptive: In preemptive scheduling, the OS allocates the resources to a
process for a fixed amount of time. While a process is running, it may be
switched from the running state to the ready state, or from the waiting state
to the ready state. This switching occurs because the CPU may give priority
to other processes and replace the running process with one of higher
priority.

Process Scheduling Queues


Process scheduling queues maintain a distinct queue for each process state;
the PCBs of all processes in the same execution state are placed in the same
queue. Therefore, whenever the state of a process is modified, its PCB is
unlinked from its existing queue and linked into the queue for the new state.
Three types of operating system queues are:

Job queue – It stores all the processes in the system.
Ready queue – It holds every process residing in main memory that is ready
and waiting to execute.
Device queues – Each holds the processes that are blocked waiting for a
particular I/O device.
In the above-given Diagram,
 Rectangle represents a queue.
 Circle denotes the resource
 Arrow indicates the flow of the process.
1. Every new process is first put in the ready queue. It waits in the ready
queue until it is selected for execution, i.e., dispatched.
2. One of the processes is allocated the CPU and executes.
3. The process may issue an I/O request,
4. and is then placed in the appropriate I/O (device) queue.
5. The process may create a new subprocess
6. and wait for that subprocess's termination.
7. The process may be forcefully removed from the CPU as a result of an
interrupt; once the interrupt is handled, it is sent back to the ready queue.
Two State Process Model
Two-state process models are:
Running State
Not Running State
Running
In the operating system, whenever a new process is built, it is entered into
the system and eventually placed in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a specific process.
Scheduling Objectives
Here are the important objectives of process scheduling:
Maximize the number of interactive users within acceptable response times.
Achieve a balance between response and utilization.
Avoid indefinite postponement and enforce priorities.
It should also give preference to the processes holding key resources.
Process Schedulers in Operating System
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded
into executable memory at a time, and the loaded processes share the CPU
using time multiplexing.
There are three types of process scheduler.
Long term or job scheduler:
It brings new processes to the ready state. It controls the degree of
multiprogramming, i.e., the number of processes present in the ready state at
any point in time. It is important that the long-term scheduler make a
careful selection of both IO-bound and CPU-bound processes. IO-bound tasks
spend most of their time in input and output operations, while CPU-bound
processes spend most of their time on the CPU. The job scheduler increases
efficiency by maintaining a balance between the two.
Short term or CPU scheduler:
It is responsible for selecting one process from the ready state and
scheduling it to the running state. Note: the short-term scheduler only
selects the process to schedule; it does not load the process onto the CPU.
This is where the scheduling algorithms are used. The CPU scheduler is
responsible for ensuring there is no starvation owing to processes with high
burst times.
The dispatcher is responsible for loading the process selected by the
short-term scheduler onto the CPU (ready to running state); context switching
is done by the dispatcher only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Medium-term scheduler :
It is responsible for suspending and resuming the process. It mainly does
swapping (moving processes from main memory to disk and vice versa).
Swapping may be necessary to improve the process mix or because a change in
memory requirements has overcommitted available memory, requiring
memory to be freed up. It is helpful in maintaining a perfect balance between
the I/O bound and the CPU bound. It reduces the degree of multiprogramming.

Context Switch in OS
A context switch is an important feature of multitasking OS that can be used to
store and restore the state or context of a CPU in a PCB, so that the execution
of a process can be resumed from that very point at a later time. A context
switch allows multiple processes to share a single CPU. Some hardware
systems even employ two or more sets of processor registers, in order to avoid
context switching time.
When the scheduler switches the CPU from one process to another, the state
of the current running process is stored into the PCB and the state for the next
process is loaded from its own PCB. This is then used to set the PC, registers,
etc. Since the register and memory state is saved and restored in context
switches, they are computationally intensive. Following information is stored
during switching:

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
Context switch occurs when:
 A process with a higher priority than the running process enters the
ready state.
 An Interrupt happens
 The user mode and kernel-mode switch. Though context switching
doesn’t usually happen in this situation.
 We use preemptive CPU scheduling.

Mode Switch in OS
Mode switching happens usually when a system call is made or a fault occurs
i.e., it happens when the processor privilege level is changed. A mode switch is
necessary if a user process needs to access things exclusively accessible to the
kernel. The executing process does not change during a mode switch. We can
say that a mode switch occurs so that a process context switch can take place
as only the kernel can cause a context switch.
Summary
Process scheduling schedules a process into ready, waiting, and running states.
There are two categories of process scheduling: preemptive and non-
preemptive scheduling. Job queue, ready queue, and device queue are the
three queues of process scheduling.
There are two states in the two-state model, namely, running and not running.
A scheduler handles the task of process scheduling and has three types, short-
term, long-term, and middle-term. A context switch is used to store and
restore the context in a PCB.

Steps for Context Switching


There are several steps involved in context switching between processes. The
following diagram represents the context switching of two processes, P1 to
P2, when an interrupt occurs, I/O is needed, or a higher-priority process
arrives in the ready queue.



As we can see in the diagram, initially, the P1 process is running on the CPU to
execute its task, and at the same time, another process, P2, is in the ready
state. If an error or interruption has occurred or the process requires
input/output, the P1 process switches its state from running to the waiting
state. Before changing the state of process P1, the context switch saves the
context of process P1, in the form of its registers and program counter, to
PCB1. After that, it loads the saved context of process P2 from PCB2 and
moves P2 to the running state.

The following steps are taken when switching Process P1 to Process 2:


1. First, the context switch saves the state of process P1, in the form of
the program counter and the registers, to its process control block,
PCB1, while P1 is still in the running state.
2. Then it updates PCB1 and moves process P1 to the appropriate queue,
such as the ready queue, an I/O queue, or the waiting queue.
3. After that, another process is selected from the ready state to move
into the running state, for example a process with a higher priority.
4. Now, we have to update the PCB (Process Control Block) of the selected
process P2. This includes switching its state from ready to running,
or from another state like blocked, exit, or suspend.
5. If the CPU has executed process P2 before, we need to restore the saved
context of process P2 so that it resumes execution at the exact point
where it was interrupted.
Similarly, process P2 is switched off from the CPU so that the process P1 can
resume execution. P1 process is reloaded from PCB1 to the running state to
resume its task at the same point. Otherwise, the information is lost, and when
the process is executed again, it starts execution at the initial level.

3.3 Operations on Processes: Creation and Termination


Process Creation
A process may be created in the system for different operations. Some of the
events that lead to process creation are as follows −

 User request for process creation


 System Initialization
 Batch job initialization
 Execution of a process creation system call by a running process
A process may be created by another process using fork(). The creating process
is called the parent process and the created process is the child process. A child
process can have only one parent but a parent process may have many
children. Both the parent and child processes have the same memory image,
open files and environment strings. However, they have distinct address
spaces.



Process Termination
Process termination occurs when a process finishes its execution or is
killed. The exit() system call is used by most operating systems for process
termination.
Some of the causes of process termination are as follows −
 A process may be terminated after its execution is naturally completed.
This process leaves the processor and releases all its resources.
 A child process may be terminated if its parent process requests for its
termination.
 A process can be terminated if it tries to use a resource that it is not
allowed to. For example - A process can be terminated for trying to write
into a read only file.
 If an I/O failure occurs for a process, it can be terminated. For example -
If a process requires the printer and it is not working, then the process
will be terminated.
 In most cases, if a parent process is terminated then its child processes
are also terminated. This is done because the child process cannot exist
without the parent process.
 If a process requires more memory than is currently available in the
system, then it is terminated because of memory scarcity.
3.4 Inter-Process Communication (IPC)
A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other
processes, while a co-operating process can be affected by other
executing processes. Although one might think that processes running
independently execute very efficiently, in reality there are many
situations where their co-operative nature can be utilized to increase
computational speed, convenience, and modularity. Inter-process
communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The
communication between these processes can be seen as a method of
co-operation between them. Processes can communicate with each
other through both:
1. Shared Memory
2. Message passing

Shared memory is a fundamental model of inter-process
communication. In a shared-memory system, cooperating processes
communicate with each other by establishing a shared region in their
address spaces.
Shared memory is the fastest form of inter-process communication.
If a process wants to initiate communication and has some data to
share, it establishes the shared memory region in its address space.
After that, any other process that wants to communicate and read the
shared data must attach itself to the initiating process's shared
address space.

Message Passing provides a mechanism to allow processes to


communicate and to synchronize their actions without sharing the same
address space.
For example − Chat program on the World Wide Web.

Message passing provides two operations which are as follows −

 Send message
 Receive message

Messages sent by a process can be of either fixed or variable size.
For fixed-size messages, the system-level implementation is
straightforward, but it makes the task of programming more difficult.
Variable-sized messages require a more complex system-level
implementation, but the programming task becomes simpler.
If processes P1 and P2 want to communicate, they need to send messages
to and receive messages from each other, which means a communication
link exists between them.

Differences
The major differences between the shared memory and message passing models:
 Shared memory is generally faster, since after setup communication
happens through ordinary memory accesses; message passing typically
requires kernel intervention for each message.
 With shared memory, the processes must synchronize access to the
region themselves; with message passing, synchronization is built into
the send and receive operations.
 Message passing is useful for exchanging smaller amounts of data and
between processes on different machines; shared memory is suited to
large amounts of data within one machine.
3.5 Multithreading Models
3.6 Thread Libraries, Threading Issues

Thread in Operating System


What is a Thread?
A thread is a path of execution within a process. A process can contain multiple
threads.

Why Multithreading?
A thread is also known as a lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below
Multithreading Models
Some operating systems provide a combined user-level and kernel-level
thread facility. In such a combined system, multiple threads within the
same application can run in parallel on multiple processors, and a blocking
system call need not block the whole process. There are three types of
multithreading models:
1. Many-to-many relationship.
2. Many-to-one relationship.
3. One-to-one relationship.

Many to Many Relationship


In the many-to-many relationship model, many user-level threads are
multiplexed onto an equal or smaller number of kernel-level threads. The
number of kernel threads can be specific to a particular application or a
particular machine. In this model, developers can create as many user
threads as needed, and the corresponding kernel threads can run in parallel
on a multiprocessor.

Many to One Relationship


In the many-to-one relationship model, many user-level threads are
multiplexed onto a single kernel-level thread. Thread management is done in
user space. Whenever a thread makes a blocking system call, the whole
process is blocked. Only one thread can access the kernel at a time, so
multiple threads are unable to run in parallel on a multiprocessor.
User-level thread libraries implemented on operating systems that do not
support kernel threads use the many-to-one relationship model.

One to One Relationship


In this case, user-level threads and kernel-level threads are in a one-to-one
relationship, and this model provides more concurrency than the many-to-one
relationship model. The one-to-one relationship supports running multiple
threads in parallel on multiprocessors. The drawback of this model is that
creating a user thread requires creating the corresponding kernel thread.
OS/2, Windows 2000, and Windows NT use the one-to-one relationship model.
Process vs Thread?
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
Threads are not independent of one another like processes are; as a result,
threads share their code section, data section, and OS resources (like open
files and signals) with other threads. But, like a process, a thread has its
own program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads, then when
one thread completes its execution, its output can be returned immediately.
2. Faster context switch: Context switch time between threads is lower
compared to process context switch. Process context switching requires more
overhead from the CPU.
3. Effective utilization of multiprocessor systems: If we have multiple
threads in a single process, then we can schedule multiple threads on
multiple processors. This makes process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among
all threads within a process.
Note: stack and registers can’t be shared among the threads. Each thread has
its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the
threads share a common address space, while between processes we have to
follow a specific communication technique.
6. Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread function is considered as one job, then the number of
jobs completed per unit of time is increased, thus increasing the throughput of
the system.

Threads and its types in Operating System


A thread is a single sequence stream within a process. Threads have some of
the same properties as processes, so they are called lightweight processes.
On a single core, threads are executed one after another but give the
illusion of executing in parallel. Each thread has its own state. Each
thread has

A program counter
A register set
A stack space
Threads are not independent of each other as they share the code, data, OS
resources etc.

Similarity between Threads and Processes –

Only one thread or process is active at a time (on a single CPU).
Within a process, both execute sequentially.
Both can create children.

Differences between Threads and Processes –

Threads are not independent; processes are.
Threads are designed to assist each other; processes may or may not do so.

Types of Threads:

User Level Thread (ULT) – Implemented in a user-level library; ULTs are not
created using system calls. Thread switching does not need to call the OS
or cause an interrupt to the kernel. The kernel doesn't know about
user-level threads and manages the processes containing them as if they
were single-threaded.
Advantages of ULT –
 Can be implemented on an OS that doesn't support multithreading.
 Simple representation, since a thread has only a program counter,
register set, and stack space.
 Simple to create, since no intervention of the kernel is needed.
 Thread switching is fast, since no OS calls need to be made.
Limitations of ULT –
 No or little co-ordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.
Kernel Level Thread (KLT) – Kernel knows and manages the threads. Instead of
thread table in each process, the kernel itself has thread table (a master one)
that keeps track of all the threads in the system. In addition kernel also
maintains the traditional process table to keep track of the processes. OS
kernel provides system call to create and manage threads.
Advantages of KLT –
 Since the kernel has full knowledge of all threads in the system, the
scheduler may decide to give more time to processes having a large
number of threads.
 Good for applications that frequently block.
Limitations of KLT –
 Slow and inefficient compared to user-level threads.
 Each thread requires a thread control block, which is an overhead.
Summary:
 For ULTs, the process keeps track of its threads using a thread table.
 For KLTs, the kernel maintains a thread table (of TCBs) as well as the
process table (of PCBs).
Difference between Process and Thread:
Thread Library
A thread library provides the programmer with an Application program
interface for creating and managing thread.
Ways of implementing thread library
There are two primary ways of implementing thread library, which are as
follows −
 The first approach is to provide a library entirely in user space with
no kernel support. All code and data structures for the library exist in
user space, and invoking a function in the library results in a local
function call in user space, not a system call.
 The second approach is to implement a kernel-level library supported
directly by the operating system. In this case, the code and data
structures for the library exist in kernel space, and invoking a function
in the application program interface for the library typically results in
a system call to the kernel.
The main thread libraries which are used are given below −
 POSIX threads − Pthreads, the threads extension of the POSIX standard,
may be provided as either a user-level or a kernel-level library.
 Win32 threads − The Windows thread library is a kernel-level library
available on Windows systems.
 Java threads − The Java thread API allows threads to be created and
managed directly in Java programs.

Threading Issues in OS

There are several threading issues that arise in a multithreaded
environment. In this section, we will discuss the threading issues with
system calls, thread cancellation, signal handling, thread pools, and
thread-specific data.
Along with the threading issues, we will also discuss how these issues
can be dealt with or resolved to retain the benefits of the multithreaded
programming environment.
Threading Issues in OS
1. System Calls
2. Thread Cancellation
3. Signal Handling
4. Thread Pool
5. Thread Specific Data
1. The fork() and exec() System Calls
The fork() and exec() are system calls. The fork() call creates a duplicate
of the process that invokes it. The new duplicate process is called the
child process and the process invoking fork() is called the parent
process. Both the parent process and the child process continue their
execution from the instruction just after the fork().
Let us now discuss the issue with the fork() system call. Suppose that a
thread of a multithreaded program has invoked fork(), so that fork()
creates a new duplicate process. The issue here is whether the new
process created by fork() will duplicate all the threads of the parent
process, or whether the new process will be single-threaded.

Some UNIX systems therefore provide two versions of fork(): one that
duplicates all the threads of the parent process in the child process, and
one that duplicates only the thread that invoked fork(). Which version of
fork() should be used depends entirely upon the application.
The exec() system call, when invoked, replaces the program along with
all its threads with the program specified in the parameter to exec().
Typically, the exec() system call is lined up right after the fork() system
call.

Here the issue is this: if exec() is invoked just after fork(), then
duplicating all the threads of the parent process in the child is useless,
as exec() will replace the entire process with the program passed to it
as a parameter.

In such a case, the version of fork() that duplicates only the thread that
invoked fork() is appropriate.
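The single-thread-duplicating behaviour can be observed directly. The
sketch below uses Python on a POSIX system purely as an illustration
(CPython's os.fork() behaves like the version of fork() that duplicates
only the calling thread): the parent starts an extra thread, forks, and the
child reports back over a pipe how many threads it contains.

```python
import os
import threading
import time

def child_thread_count():
    """Fork and report, via a pipe, how many threads exist in the child."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # child: only the thread that called fork() survives here
        os.write(w, str(threading.active_count()).encode())
        os._exit(0)
    os.waitpid(pid, 0)          # parent: wait for the child, then read
    return int(os.read(r, 16))

# make the parent genuinely multithreaded before forking
extra = threading.Thread(target=time.sleep, args=(1,), daemon=True)
extra.start()

print("threads in parent:", threading.active_count())  # at least 2
print("threads in child :", child_thread_count())      # only 1
```

The parent holds two threads at the moment of the fork, yet the child
contains only the forking thread, exactly the variant that suits a fork()
followed immediately by exec().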
2. Thread Cancellation
Terminating a thread in the middle of its execution is termed ‘thread
cancellation’. Let us understand this with the help of an example.
Consider a multithreaded program that lets multiple threads search
through a database for some information. If one of the threads returns
with the desired result, the remaining threads can be cancelled.
The thread which we want to cancel is termed the target thread.

Thread cancellation can be performed in two ways:
 Asynchronous Cancellation: In asynchronous cancellation, one thread
terminates the target thread instantly.
 Deferred Cancellation: In deferred cancellation, the target thread
checks at regular intervals whether it should terminate itself.
The issues related to the target thread are listed below:
 What if resources had been allotted to the target thread being
cancelled?
 What if the target thread is terminated while it was updating data it
was sharing with some other thread?

Asynchronous cancellation, where a thread immediately cancels the
target thread without checking whether it is holding any resources,
causes trouble in both of these situations.

In deferred cancellation, by contrast, one thread indicates to the target
thread that it should cancel, and the target thread checks its
cancellation flag to decide whether it can terminate safely. The points at
which a thread can be cancelled safely are termed cancellation points
by Pthreads.
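Deferred cancellation amounts to a shared flag that the target thread
polls at its own safe points. The sketch below illustrates the idea in
Python (threading.Event plays the role of the cancellation flag; the
check inside the loop is the cancellation point), rather than using
Pthreads' pthread_cancel directly:

```python
import threading
import time

cancel_requested = threading.Event()   # the target thread's cancellation flag

def worker(results):
    for step in range(1000):
        # cancellation point: the target checks the flag itself, so it is
        # never killed while holding resources or mid-update
        if cancel_requested.is_set():
            results.append("cancelled at step %d" % step)
            return
        time.sleep(0.001)              # simulate one unit of work

    results.append("finished")

results = []
target = threading.Thread(target=worker, args=(results,))
target.start()
time.sleep(0.02)
cancel_requested.set()                 # another thread requests cancellation
target.join()
print(results[0])
```

Because the target thread stops only at a point of its own choosing, it
can release resources and finish any shared-data update before exiting.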
3. Signal Handling
Signal handling is simpler in a single-threaded program, as the signal is
delivered directly to the process. But in a multithreaded program, the
issue arises as to which thread of the program the signal should be
delivered.
The signal could be delivered to:
 All the threads of the process.
 Some specific threads of the process.
 The thread to which the signal applies.
 Or one designated thread assigned to receive all the signals.

How the signal is delivered to a thread depends upon the type of signal
generated. Generated signals can be classified into two types:
synchronous signals and asynchronous signals.
Synchronous signals are delivered to the same process that caused the
generation of the signal. Asynchronous signals are generated by an
event external to the running process, so the running process receives
them asynchronously.

So if the signal is synchronous, it is delivered to the specific thread that
caused the generation of the signal. If the signal is asynchronous, it
cannot in general be said which thread of the multithreaded program it
should be delivered to. If an asynchronous signal is notifying the
process to terminate, the signal is delivered to all the threads of the
process.

The issue of asynchronous signals is resolved to some extent in most
multithreaded UNIX systems: a thread is allowed to specify which
signals it will accept and which it will block. The Windows operating
system, by contrast, does not support the concept of signals; instead it
uses the asynchronous procedure call (APC), which is similar to the
asynchronous signal of UNIX systems.

Whereas UNIX allows a thread to specify which signals it will accept
and which it will not, an APC is delivered to a specific thread.
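The "one designated thread receives all the signals" policy can be
observed in CPython, which always delivers signals to the main thread
regardless of which thread caused them. This sketch (POSIX-only, using
SIGUSR1 purely as an example) sends an asynchronous signal from a
worker thread and records which thread actually runs the handler:

```python
import os
import signal
import threading
import time

received_in = []

def handler(signum, frame):
    # record which thread actually ran the signal handler
    received_in.append(threading.current_thread().name)

signal.signal(signal.SIGUSR1, handler)

def sender():
    # an asynchronous signal: addressed to the process, not to a thread
    os.kill(os.getpid(), signal.SIGUSR1)

t = threading.Thread(target=sender, name="sender")
t.start()
t.join()
time.sleep(0.1)     # give the interpreter a moment to run the handler
print(received_in)  # handler ran in the main thread, not in "sender"
```

Even though the "sender" thread raised the signal, the handler executes
in the main thread, one concrete resolution of the delivery question.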
4. Thread Pool
When a user requests a webpage from a server, the server creates a
separate thread to service the request. This approach has some
potential issues, however. If there were no bound on the number of
active threads in the system and a new thread were created for every
new request, the system resources would eventually be exhausted.
We are also concerned about the time it takes to create a new thread. It
must not be the case that the time required to create a new thread is
more than the time the thread spends servicing the request before
being discarded, as that would be a waste of CPU time.
The solution to this issue is the thread pool. The idea is to create a finite
number of threads when the process starts. This collection of threads is
referred to as the thread pool. The threads stay in the pool and wait
until they are assigned a request to service.
Whenever a request arrives at the server, it takes a thread from the
pool and assigns it the request to be serviced. The thread completes its
service, returns to the pool and waits for the next request.
If the server receives a request and does not find any free thread in the
pool, it waits for some thread to become free and return to the pool.
This is much better than creating a new thread for each arriving
request, and convenient for systems that cannot handle a large number
of concurrent threads.
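Most thread libraries ship a ready-made pool. As an illustration (not the
web server itself), Python's concurrent.futures.ThreadPoolExecutor
creates a fixed number of threads up front and reuses them across
requests; handle_request below is a hypothetical stand-in for servicing
one request:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # stand-in for the work of servicing one web request
    return "served request %d" % request_id

# a fixed-size pool created once; its 4 threads are reused for all
# 8 requests instead of creating one thread per request
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(len(results), results[0])
```

With max_workers=4, at most four requests are serviced concurrently;
the rest wait in the executor's queue until a pool thread becomes free,
which is exactly the bounded-resource behaviour described above.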
5. Thread-Specific Data
We are all aware that threads belonging to the same process share the
data of that process. The issue here is what happens if each particular
thread of the process needs its own copy of some data. Data associated
with a specific thread is referred to as thread-specific data.
Consider a transaction-processing system where we process each
transaction in a different thread. To identify each transaction uniquely,
we associate a unique identifier with it, which helps the system identify
each transaction.
As we are servicing each transaction in a separate thread, we can use
thread-specific data to associate each thread with a specific transaction
and its unique id. Thread libraries such as Win32, Pthreads and Java
provide support for thread-specific data.
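The transaction example can be sketched with Python's
threading.local, the Python counterpart of Pthreads' thread-specific
data (pthread_key_create / pthread_setspecific): every thread that
assigns to the same local object sees only its own copy. The txn.id
attribute below is a hypothetical per-thread transaction identifier:

```python
import threading

txn = threading.local()    # thread-specific storage: one copy per thread

seen = {}                  # what each thread observed, keyed by thread name

def process_transaction(txn_id):
    txn.id = txn_id                              # private to this thread
    seen[threading.current_thread().name] = txn.id

threads = [threading.Thread(target=process_transaction, args=(i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(seen.values()))   # every thread kept its own transaction id
```

Although all three threads share the single txn object, each one reads
back the id it stored itself, so no thread's transaction identifier is
clobbered by another.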
These, then, are the threading issues that occur in a multithreaded
programming environment, and the ways in which they can be
resolved.