OS Unit 3
Text Section: Contains the program code; the process's current activity is
represented by the value of the Program Counter.
Stack: The stack contains temporary data, such as function parameters, return
addresses, and local variables.
Data Section: Contains the global variables.
Heap Section: Memory that is dynamically allocated to the process during its run time.
Attributes or Characteristics of a Process
A process has the following attributes.
1. Process Id: A unique identifier assigned by the operating system.
2. Process State: Can be ready, running, etc.
3. CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU).
4. Accounting information: For example, the amount of CPU time used and time limits.
5. I/O status information: For example, devices allocated to the process,
open files, etc.
6. CPU scheduling information: For example, priority (different processes
may have different priorities; for example, a shorter process is assigned
high priority in shortest job first scheduling).
All of the above attributes of a process are also known as the context of the
process.
Every process has its own process control block (PCB), i.e., each process has
a unique PCB. All of the above attributes are part of the PCB.
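As a rough illustration, the attributes above can be modeled as a small record (a Python sketch standing in for the C struct a real kernel uses; the field names are illustrative only):

```python
from dataclasses import dataclass, field

# Toy model of a Process Control Block (PCB). The fields mirror the
# attribute list above; a real kernel keeps this in a C struct.
@dataclass
class PCB:
    pid: int                                        # Process Id
    state: str = "ready"                            # Process State
    program_counter: int = 0                        # CPU registers (PC shown)
    cpu_time_used: float = 0.0                      # Accounting information
    open_files: list = field(default_factory=list)  # I/O status information
    priority: int = 0                               # CPU scheduling information

# The context of a process is exactly this record: saving and restoring
# it is what lets the OS swap the process in and out of the CPU.
pcb = PCB(pid=42, priority=1)
pcb.state = "running"
```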
Interrupts
An interrupt is a signal emitted by hardware or software when a
process or an event needs immediate attention. It alerts the processor
to a high-priority event requiring interruption of the currently running
process. For I/O devices, one of the bus control lines is dedicated to this
purpose and is called the Interrupt Request line; the routine executed in
response to an interrupt is called the Interrupt Service Routine (ISR).
While the processor is handling an interrupt, it must inform the device
that its request has been recognized so that the device stops sending the
interrupt request signal. Also, saving the registers so that the
interrupted process can be restored later increases the delay
between the time an interrupt is received and the start of execution
of the ISR. This delay is called interrupt latency.
Hardware Interrupts:
In a hardware interrupt, all the devices are connected to a common Interrupt
Request line (INTR): a single request line serves all n devices. To
request an interrupt, a device closes its associated switch, so the value
of INTR is the logical OR of the requests from the individual devices.
Job queue – Stores all the processes in the system.
Ready queue – Holds every process residing in main memory that is ready and
waiting to execute.
Device queues – Hold the processes that are blocked waiting for an I/O
device.
In the process scheduling queueing diagram:
A rectangle represents a queue.
A circle denotes a resource.
An arrow indicates the flow of a process.
1. Every new process is first put in the ready queue, where it waits until
it is selected for execution (dispatched).
2. One of the processes is allocated the CPU and executes.
3. The running process may issue an I/O request and then be placed in an
I/O queue.
4. The process may create a new subprocess and wait for its termination.
5. The process may be removed forcibly from the CPU as a result of an
interrupt; once the interrupt is handled, it is put back in the ready queue.
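The flow through the queues described above can be sketched as a toy simulation (a Python sketch; `admit`, `dispatch`, `request_io`, and `io_complete` are made-up helper names, not OS calls):

```python
from collections import deque

ready_queue = deque()      # processes waiting for the CPU
io_queue = deque()         # processes blocked on an I/O device

def admit(pid):            # step 1: new process enters the ready queue
    ready_queue.append(pid)

def dispatch():            # step 2: the scheduler picks one process to run
    return ready_queue.popleft()

def request_io(pid):       # step 3: running process blocks in an I/O queue
    io_queue.append(pid)

def io_complete():         # step 5: after the interrupt, back to ready queue
    ready_queue.append(io_queue.popleft())

admit(1); admit(2)
running = dispatch()       # process 1 gets the CPU
request_io(running)        # it issues an I/O request and blocks
io_complete()              # I/O finishes; process 1 rejoins the ready queue
```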
Two State Process Model
Two-state process models are:
Running State
Not Running State
Running
In the operating system, whenever a new process is created, it enters the
system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their
turn to execute. Each entry in the queue is a pointer to a specific process.
Scheduling Objectives
Here, are important objectives of Process scheduling
Maximize the number of interactive users within acceptable response times.
Achieve a balance between response and utilization.
Avoid indefinite postponement and enforce priorities.
It also should give preference to the processes holding the key resources.
Process Schedulers in Operating System
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded
into executable memory at a time, and the loaded processes share the CPU
using time multiplexing.
There are three types of process schedulers.
Long Term or job scheduler :
It brings new processes to the 'Ready State'. It controls the degree of
multiprogramming, i.e., the number of processes present in the ready state
at any point of time. It is important that the long-term scheduler make a
careful selection of both I/O-bound and CPU-bound processes: I/O-bound tasks
spend much of their time in input and output operations, while CPU-bound
processes spend their time on the CPU. The job scheduler increases efficiency
by maintaining a balance between the two.
Short term or CPU scheduler :
It is responsible for selecting one process from the ready state and
scheduling it onto the running state. Note: the short-term scheduler only
selects the process to schedule; it doesn't load the process onto the CPU.
This is where all the scheduling algorithms are used. The CPU scheduler is
responsible for ensuring there is no starvation owing to processes with
high burst times.
The dispatcher is responsible for loading the process selected by the
short-term scheduler onto the CPU (ready to running state). Context switching
is done by the dispatcher only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Medium-term scheduler :
It is responsible for suspending and resuming the process. It mainly does
swapping (moving processes from main memory to disk and vice versa).
Swapping may be necessary to improve the process mix or because a change in
memory requirements has overcommitted available memory, requiring
memory to be freed up. It is helpful in maintaining a balance between
I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.
Context Switch in OS
A context switch is an important feature of multitasking OS that can be used to
store and restore the state or context of a CPU in a PCB, so that the execution
of a process can be resumed from that very point at a later time. A context
switch allows multiple processes to share a single CPU. Some hardware
systems even employ two or more sets of processor registers, in order to avoid
context switching time.
When the scheduler switches the CPU from one process to another, the state
of the current running process is stored into the PCB and the state for the next
process is loaded from its own PCB. This is then used to set the PC, registers,
etc. Since the register and memory state is saved and restored in context
switches, they are computationally intensive. Following information is stored
during switching:
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
Context switch occurs when:
A process with a higher priority than the running process enters the
ready state.
An Interrupt happens
A switch between user mode and kernel mode occurs (though a context
switch doesn't usually happen in this situation).
We use preemptive CPU scheduling.
Mode Switch in OS
Mode switching happens usually when a system call is made or a fault occurs
i.e., it happens when the processor privilege level is changed. A mode switch is
necessary if a user process needs to access things exclusively accessible to the
kernel. The executing process does not change during a mode switch. We can
say that a mode switch occurs so that a process context switch can take place
as only the kernel can cause a context switch.
Summary
Process scheduling moves a process among the ready, waiting, and running states.
There are two categories of process scheduling: preemptive and non-
preemptive scheduling. Job queue, ready queue, and device queue are the
three queues of process scheduling.
There are two states in the two-state model, namely, running and not running.
A scheduler handles the task of process scheduling and has three types:
short-term, long-term, and medium-term. A context switch is used to store and
restore the context in a PCB.
The message passing model provides two operations:
Send message
Receive message
Differences
The major difference between the shared memory and message passing models is
that shared memory lets processes exchange data by reading and writing a
common region directly (the kernel is involved only in setting the region up),
while message passing routes every exchange through send/receive operations,
which is easier to use, especially between different machines, but involves
the kernel on each exchange.
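As a minimal sketch of message passing between two processes (POSIX-only Python; a pipe stands in for a message queue, so `os.write`/`os.read` play the roles of send and receive):

```python
import os

# Parent and child are separate processes that share no memory; the only
# way to exchange data is through the pipe's send/receive operations.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                      # child: send a message and exit
    os.close(r)
    os.write(w, b"hello parent")  # send(message)
    os.close(w)
    os._exit(0)
else:                             # parent: receive the message
    os.close(w)
    msg = os.read(r, 1024)        # receive(message)
    os.close(r)
    os.waitpid(pid, 0)
```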
3.5 Multithreading model
3.6 Thread libraries, threading issues
Why Multithreading?
A thread is also known as lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level
thread facility. In such a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system
call need not block the entire process. There are three types of
multithreading models:
1. Many-to-many relationship.
2. Many-to-one relationship.
3. One-to-one relationship.
Process vs Thread?
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
Threads are not independent of one another the way processes are: threads
share their code section, data section, and OS resources (like open files and
signals) with other threads. But, like a process, a thread has its own
program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads and one
thread completes its execution, its output can be returned immediately.
2. Faster context switch: Context switch time between threads is lower
compared to process context switch. Process context switching requires more
overhead from the CPU.
3. Effective utilization of multiprocessor systems: If we have multiple
threads in a single process, we can schedule them on multiple processors,
making process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among
all threads within a process.
Note: stack and registers can’t be shared among the threads. Each thread has
its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the
threads share a common address space, while processes must use a specific
inter-process communication technique to communicate.
6. Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread function is considered as one job, then the number of
jobs completed per unit of time is increased, thus increasing the throughput of
the system.
Like a process, each thread has its own:
A program counter
A register set
A stack space
However, threads are not independent of each other, as they share the code,
data, OS resources, etc.
Types of Threads:
User-Level Thread (ULT) – Implemented in a user-level library; these threads
are not created using system calls. Thread switching does not need to call
the OS or cause an interrupt to the kernel. The kernel doesn't know about
user-level threads and manages the process as if it were single-threaded.
Advantages of ULT –
Can be implemented on an OS that doesn’t support multithreading.
Simple representation, since a thread has only a program counter, register
set, and stack space.
Simple to create, since no kernel intervention is required.
Thread switching is fast, since no OS calls need to be made.
Limitations of ULT –
Little or no coordination between the threads and the kernel.
If one thread causes a page fault, the entire process blocks.
Kernel-Level Thread (KLT) – The kernel knows about and manages the threads.
Instead of a thread table in each process, the kernel itself has a thread
table (a master one) that keeps track of all the threads in the system. In
addition, the kernel also maintains the traditional process table to keep
track of processes. The OS kernel provides system calls to create and manage
threads.
Advantages of KLT –
Since the kernel has full knowledge of all threads in the system, the
scheduler may decide to give more time to processes with a large
number of threads.
Good for applications that frequently block.
Limitations of KLT –
Slow and inefficient.
It requires a thread control block for each thread, so there is overhead.
Summary:
In ULT, each process keeps track of its own threads using a thread
table.
In KLT, the kernel maintains a thread table (of TCBs) as well as the
process table (of PCBs).
Thread Library
A thread library provides the programmer with an Application program
interface for creating and managing thread.
Ways of implementing thread library
There are two primary ways of implementing a thread library, which are as
follows −
The first approach is to provide a library entirely in user space with
no kernel support. All code and data structures for the library exist in
user space; invoking a function in the library results in a local
function call in user space, not a system call.
The second approach is to implement a kernel-level library supported
directly by the operating system. In this case the code and data
structures for the library exist in kernel space, and invoking a function
in the application program interface for the library typically results in
a system call to the kernel.
The main thread libraries which are used are given below −
POSIX threads − Pthreads, the threads extension of the POSIX standard,
may be provided as either a user-level or a kernel-level library.
Win32 threads − The Windows thread library is a kernel-level library
available on Windows systems.
Java threads − The Java thread API allows threads to be created and
managed directly in Java programs.
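As a small illustration of a thread library API in action (Python's threading module standing in for Pthreads; `worker` is just an example function):

```python
import threading

results = []
lock = threading.Lock()

def worker(n):
    with lock:                    # protect the shared list
        results.append(n * n)

# create and manage threads through the library's API
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                     # comparable to pthread_create
for t in threads:
    t.join()                      # comparable to pthread_join
```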
Threading Issues in OS
1. The fork() and exec() system calls
Let us now discuss the issue with the fork() system call. Consider that a
thread of a multithreaded program has invoked fork(), which creates a new
duplicate process. The issue is whether the new process created by fork()
will duplicate all the threads of the parent process or will be
single-threaded.
There are two versions of fork() in some UNIX systems: either fork()
duplicates all the threads of the parent process in the child process, or it
duplicates only the thread that invoked it.
Which version of fork() should be used depends entirely upon the
application.
The exec() system call, when invoked, replaces the program along with all its
threads with the program specified in its parameter. Typically the exec()
system call is lined up after the fork() system call.
The issue here is that if exec() is invoked just after fork(), duplicating
all the threads of the parent process in the child by fork() is useless,
since exec() will replace the entire process with the program passed as its
parameter.
In such cases, the version of fork() that duplicates only the thread that
invoked it is appropriate.
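The fork()-then-exec() pattern above can be sketched as follows (POSIX-only Python; `/bin/echo` is assumed to exist, as on typical Unix systems):

```python
import os

# The child replaces its entire image with a new program, so duplicating
# every parent thread in the child would have been wasted work.
pid = os.fork()
if pid == 0:
    try:
        os.execv("/bin/echo", ["echo", "hello from exec"])
    finally:
        os._exit(1)               # only reached if exec itself failed
else:
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
```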
2. Thread cancellation
Terminating a thread in the middle of its execution is termed 'thread
cancellation'. Let us understand this with the help of an example: consider
a multithreaded program that lets multiple threads search through a database
for some information. If one of the threads returns with the desired result,
the remaining threads are cancelled.
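A sketch of the database-search example (Python threads cannot be cancelled asynchronously, so a shared flag gives the cooperative, "deferred" form of cancellation; the data and chunking are invented for illustration):

```python
import threading

found = threading.Event()
result = []

def search(values, target):
    for v in values:
        if found.is_set():        # another thread already found it: stop
            return
        if v == target:
            result.append(v)
            found.set()           # cancel the remaining searchers

# three threads each search one chunk of the "database" for the value 7
threads = [threading.Thread(target=search, args=(chunk, 7))
           for chunk in ([1, 2, 3], [4, 5, 6], [7, 8, 9])]
for t in threads:
    t.start()
for t in threads:
    t.join()
```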
3. Signal Handling
Signal handling is more convenient in a single-threaded program, as the
signal is simply delivered to the process. But in a multithreaded program,
the issue arises of which thread of the program the signal should be
delivered to.
How the signal is delivered depends upon the type of signal generated.
Generated signals can be classified into two types: synchronous signals and
asynchronous signals.
Synchronous signals are delivered to the same process that performed the
operation causing the signal. Asynchronous signals are generated by an event
external to the running process, so the running process receives them
asynchronously.
If the signal is synchronous, it is delivered to the specific thread that
caused its generation. If the signal is asynchronous, it cannot always be
determined which thread of the multithreaded program it should be delivered
to; if the asynchronous signal notifies the process to terminate, it is
delivered to all the threads of the process.
The issue with asynchronous signals is resolved to some extent in most
multithreaded UNIX systems: a thread is allowed to specify which signals it
will accept and which it will block. The Windows operating system does not
support the concept of signals; instead it uses the asynchronous procedure
call (APC), which is similar to the asynchronous signal of UNIX.
Whereas UNIX lets any accepting thread receive the signal, an APC is
delivered to a specific thread.
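The per-thread signal masking described above can be demonstrated as follows (POSIX-only Python sketch; `signal.pthread_sigmask` wraps the same mechanism a Pthreads program would use to state which signals it will not accept):

```python
import os
import signal

caught = []
signal.signal(signal.SIGUSR1, lambda signum, frame: caught.append(signum))

# Block SIGUSR1, send it to ourselves, then unblock: while the signal is
# blocked it stays pending and the handler does not run.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)
assert caught == []               # blocked, so not yet delivered
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
# on unblocking, the pending signal is delivered and the handler runs
```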
4. Thread Pool
When a user requests a webpage from a server, the server creates a separate
thread to service the request. However, this approach has a potential issue:
if there is no bound on the number of active threads in the system and a new
thread is created for every new request, the system's resources will
eventually be exhausted.
We are also concerned about the time it takes to create a new thread. It
must not be the case that creating a new thread takes longer than the thread
spends servicing the request before being discarded, as that would waste
CPU time.
The solution to this issue is the thread pool. The idea is to create a fixed
number of threads when the process starts. This collection of threads is
referred to as the thread pool. The threads stay in the pool and wait until
they are assigned a request to service.
Whenever a request arrives at the server, a thread is taken from the pool
and assigned the request. The thread completes its service and returns to
the pool to wait for the next request.
If the server receives a request and finds no available thread in the pool,
it waits for a thread to become free and return to the pool. This is much
better than creating a new thread each time a request arrives, and it is
convenient for systems that cannot handle a large number of concurrent
threads.
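A minimal thread-pool sketch along the lines above (Python; doubling each "request" stands in for servicing it, and the pool size of 3 is arbitrary):

```python
import queue
import threading

tasks = queue.Queue()             # requests waiting to be serviced
results = queue.Queue()           # completed work
POOL_SIZE = 3                     # finite bound on active threads

def worker():
    while True:
        item = tasks.get()        # wait in the pool for the next request
        if item is None:          # sentinel: shut the worker down
            break
        results.put(item * 2)     # "service the request"
        tasks.task_done()

# create a fixed number of threads up front: the thread pool
pool = [threading.Thread(target=worker, daemon=True) for _ in range(POOL_SIZE)]
for t in pool:
    t.start()

for request in range(5):          # requests arriving at the "server"
    tasks.put(request)
tasks.join()                      # wait until every request is serviced
```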