
Unit IV: Operating System

Evolution of operating systems

The evolution of operating systems (OS) is marked by four generations, beginning with manual computer operation and progressing to today's sophisticated systems like Windows, macOS, and Linux, which offer multitasking, networking, and user-friendly graphical interfaces. This journey was driven by advances in computer hardware, such as integrated circuits, and the growing need for more efficient and interactive ways for users to control and utilize computer resources.
Early Days (No OS or Manual Operation)

1940s-1950s: The earliest computers had no operating systems. Operators had to manually load programs and instructions, a process that was slow and prone to errors, according to Scaler Topics.
First Generation (Manual Systems)

1950s: The first operating systems emerged, such as GM-NAA I/O in 1956. These
systems were designed to handle simple tasks.
Focus: The goal was to automate processes and control hardware, often by
managing groups of jobs in a sequence called a batch.

Second Generation (Batch Processing)

 Mid-1950s-Mid-1960s: This era saw the rise of batch systems that could
process jobs in groups, reducing the need for constant human intervention.
 Innovations: Simple Job Control Languages were introduced to automate
task execution within these batches.
Third Generation (Multiprogramming and Time-Sharing)

Mid-1960s-1980s: The third generation was characterized by major advancements, including multiprogramming and time-sharing.
Multiprogramming: Allowed multiple programs to run concurrently on a single computer, significantly improving resource utilization.
Time-Sharing: Enabled multiple users to interact with the system simultaneously, paving the way for modern multi-user operating systems.
Fourth Generation (Personal Computers and GUIs)

1980s-Present: The invention of the personal computer led to the fourth generation.
Key Features: This era brought graphical user interfaces (GUIs), making
computers much more user-friendly and accessible.
Examples: Operating systems like MS-DOS, Windows, macOS, and Linux became
popular, and new capabilities such as multitasking and advanced networking were
introduced.

Operating system services:

An operating system is software that acts as an intermediary between the user and the computer hardware. It is the program with whose help we are able to run various applications, and it is the one program that is running at all times. Every computer must have an operating system to smoothly execute other programs. The OS coordinates the use of the hardware among application programs and users, and provides the platform on which application programs work. In short, the operating system is a set of special programs that allows a computer system to work properly: it controls input-output devices, the execution of programs, the management of files, and so on.
Services of Operating System
 Program execution
 Input Output Operations
 Communication between Process
 File Management
 Memory Management
 Process Management
 Security and Privacy
 Resource Management
 User Interface
 Networking
 Error handling
 Time Management
1. Program Execution: The Operating System manages how a program is executed. It loads the program into memory, after which it is executed. The order in which programs run is determined by CPU scheduling algorithms, such as FCFS and SJF. While programs are in execution, the Operating System also handles deadlock, a situation in which processes wait indefinitely for resources held by one another. The Operating System is responsible for the smooth execution of both user and system programs, and it utilizes the available resources to run all types of functionality efficiently.
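As a sketch of the scheduling idea mentioned above, the following illustrates FCFS (First-Come, First-Served): each process waits exactly as long as the total burst time of the processes ahead of it. The burst times are the classic textbook values and are purely illustrative.

```python
def fcfs_waiting_times(burst_times):
    """Return the waiting time of each process under FCFS.

    Processes run strictly in arrival order; a process's waiting
    time is the sum of the burst times of everything ahead of it.
    """
    waiting = []
    elapsed = 0
    for burst in burst_times:
        waiting.append(elapsed)   # time spent queued before this process runs
        elapsed += burst
    return waiting

bursts = [24, 3, 3]                  # illustrative bursts for P1, P2, P3
waits = fcfs_waiting_times(bursts)
print(waits)                         # [0, 24, 27]
print(sum(waits) / len(waits))       # average waiting time: 17.0
```

Note how a long first burst inflates everyone else's wait — the "convoy effect" that motivates algorithms like SJF.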
2. Input Output Operations: The Operating System manages input-output operations and establishes communication between user programs and device drivers. Device drivers are the software associated with the hardware being managed by the OS; the OS works through them so that the devices stay properly synchronized. It also grants programs access to input-output devices when needed.
3. Communication Between Processes: The Operating System manages communication between processes, which includes transferring data among them. Even if the processes are not on the same computer but are connected through a computer network, their communication is still managed by the Operating System.
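The simplest OS-mediated communication channel on Unix-like systems is a pipe, which the kernel creates and manages. The sketch below (Unix-only; the message text is invented) shows a parent and child process exchanging data entirely through system calls:

```python
import os

# A kernel-managed, unidirectional communication channel.
r, w = os.pipe()
pid = os.fork()            # create a second process

if pid == 0:               # child: write a message through the pipe and exit
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                      # parent: read the child's message
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)     # reap the terminated child
    print(data.decode())
```

Every step — creating the pipe, forking, reading, writing — is a request to the OS, which is what "the OS manages inter-process communication" means in practice.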
4. File Management: The operating system also helps in managing files. If a program needs access to a file, it is the operating system that grants access, according to permissions such as read-only or read-write. It also provides the means for users to create and delete files. The Operating System is responsible for deciding where and how all types of data or files are stored, i.e., on a floppy disk, hard disk, pen drive, etc., and for deciding how that data may be manipulated.
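The permission-granting role described above can be sketched through Python's `os` module, which simply forwards these requests to the operating system. The file name and directory here are hypothetical, created fresh for the example:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()                 # OS allocates a scratch directory
path = os.path.join(tmpdir, "notes.txt")    # hypothetical file name

with open(path, "w") as f:                  # OS creates the file on our behalf
    f.write("saved by the OS file service")

os.chmod(path, 0o400)                       # ask the OS to make it read-only

readable = os.access(path, os.R_OK)         # may we read it?
writable = os.access(path, os.W_OK)         # may we write it? (False for
print(readable, writable)                   #   non-root users after chmod)

os.chmod(path, 0o600)                       # restore write permission
os.remove(path)                             # OS deletes the file
os.rmdir(tmpdir)
```

Every call here — create, chmod, access, remove — is the program asking the OS's file-management service for a decision, exactly as the text describes.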
5. Memory Management: Let's understand memory management by the OS in a simple way. Imagine a cricket team with a limited number of players. The team manager (OS) decides whether an upcoming player will be in the playing 11, the playing 15, or not included in the team at all, based on his performance. In the same way, the OS first checks whether an upcoming program fulfils the requirements to get memory space; if so, it determines how much memory will be sufficient for the program and then loads it into memory at a certain location. It thus prevents programs from using unnecessary memory.
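The "check, size, place" admission step in the analogy can be sketched as a toy first-fit allocator: the OS scans the free memory holes and places the program in the first hole large enough for it. Real allocators are far more sophisticated; the hole sizes below are invented for illustration.

```python
def first_fit(holes, request):
    """Return the index of the first hole that fits `request`, or None.

    `holes` is a list of free-block sizes; the chosen hole shrinks
    by the amount allocated, mimicking the OS carving out space.
    """
    for i, size in enumerate(holes):
        if size >= request:
            holes[i] -= request    # place the program, shrink the hole
            return i
    return None                    # not enough contiguous memory anywhere

holes = [100, 500, 200, 300]       # illustrative free blocks (KB)
print(first_fit(holes, 212))       # 1: placed in the 500 KB hole
print(holes)                       # [100, 288, 200, 300]
print(first_fit(holes, 426))       # None: no single hole is big enough
```

The `None` case corresponds to the player left off the team: the program is denied memory until space frees up.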
6. Process Management: Let's understand process management in a similar way. Imagine the kitchen stove as the CPU, where all cooking (execution) really happens, and the chef as the OS, who uses the stove (CPU) to cook different dishes (programs). The chef (OS) has many dishes (programs) to cook, so he ensures that no particular dish (program) takes an unnecessarily long time and that every dish (program) gets its chance to be cooked (executed). The chef (OS) schedules time for all the dishes (programs) so that the kitchen (system) runs smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.
7. Security and Privacy
 Security: The OS keeps the computer safe from unauthorized users by adding a security layer to it. Security is essentially a layer of protection that defends the computer against threats such as viruses and hackers. The OS provides defenses like firewalls and anti-virus software, ensuring the safety of the computer and of personal information.
 Privacy: The OS also lets us keep essential information hidden, like having a lock on a door that only you can open. It respects our secrets and provides the means to keep them safe.
8. Resource Management: System resources are shared between various processes, and it is the Operating System that manages this sharing. It divides CPU time among processes using CPU scheduling algorithms, helps manage the system's memory, and controls input-output devices. The OS ensures the proper use of all available resources by deciding which resource is to be used by whom.
9. User Interface: A user interface is essential, and all operating systems provide one. Users interact with the operating system either through a command-line interface (CLI), where a command interpreter executes each user-specified command, or through a graphical user interface (GUI), which offers a mouse-driven window and menu system.
10. Networking: This service enables communication between devices on a
network, such as connecting to the internet, sending and receiving data packets
and managing network connections.
11. Error Handling: The Operating System handles errors occurring in the CPU, in input-output devices, and elsewhere. It ensures that errors do not occur frequently, fixes them where possible, and prevents processes from reaching deadlock. It also watches for any errors or bugs that can occur while a task is running. A well-secured OS additionally acts as a countermeasure against breaches of the computer system from external sources, and handles them when they occur.
12. Time Management: Imagine a traffic light as the OS, which tells the cars (programs) whether to stop (red => waiting queue), get ready (yellow => ready queue), or move (green => under execution). The light (control) changes after a certain interval of time on each side of the road (computer system), so that the cars (programs) from every side keep moving smoothly without congestion.
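The traffic-light analogy is essentially round-robin time slicing: each program runs for a fixed quantum, then rejoins the back of the ready queue until its work is done. A toy sketch, with invented process names and burst times:

```python
from collections import deque

def round_robin(remaining, quantum):
    """Return the order in which processes finish under round-robin.

    `remaining` maps process name -> remaining burst time.
    """
    queue = deque(remaining.items())      # the ready queue
    finished = []
    while queue:
        name, left = queue.popleft()      # green light: process runs
        if left <= quantum:               # completes within its slice
            finished.append(name)
        else:                             # red light: preempted, re-queue
            queue.append((name, left - quantum))
    return finished

order = round_robin({"P1": 5, "P2": 2, "P3": 8}, quantum=3)
print(order)                              # ['P2', 'P1', 'P3']
```

Short jobs (P2) finish quickly even when queued behind longer ones — the fairness property the rotating light provides.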
Function of operating system:

An operating system's primary function is to act as an intermediary between hardware and software, managing computer resources efficiently and providing a user-friendly platform for running applications. Key functions include process management (handling program execution), memory management (allocating and deallocating memory), file management (organizing and storing files), device management (controlling hardware devices), and providing a user interface. It also ensures system security, monitors performance, and handles input/output (I/O) operations.

Fig: Functions of an operating system


Core Functions of an Operating System

1. Process Management: The OS manages the execution of programs (processes), allocating CPU time, synchronizing processes, and preventing conflicts to ensure orderly operation.
2. Memory Management: It allocates and deallocates memory to running
programs, ensuring each process has the necessary memory space and that
memory is used efficiently.
3. File Management: The OS organizes and stores files in directories and
folders on storage devices, handling tasks like creating, opening, closing,
and deleting files.

4. Device Management: It manages all hardware devices, such as printers, keyboards, and network cards, using device drivers to facilitate communication between the hardware and software.

5. User Interface: The OS provides a user interface, such as a graphical user interface (GUI) or a command-line interface (CLI), allowing users to interact with the computer.

6. Security: It protects the system and user data from unauthorized access
and ensures data integrity through measures like access controls and
passwords.

7. Input/Output (I/O) Management: The OS handles the flow of data between the computer and its peripheral devices, ensuring efficient input and output operations.

8. System Performance Control: The OS monitors overall system performance, tracking response times and resource usage to optimize performance and identify issues.

9. Error Detection: It includes features to detect faults and errors within the
system, helping to prevent system failures.

10. Coordination: The OS coordinates between various software components and users, assigning resources and ensuring all components work together harmoniously.

Process Management

What is Process?

A process is a program under execution. It consists of a number of elements, including the program code and a set of data. To execute a program, a process has to be created for it. At any moment the process may or may not be running, but while it exists its state must be maintained by the OS so that the process can make appropriate progress.

The 5-state process model consists of the New, Ready, Running, Blocked (or Waiting), and Exit states, representing a process's lifecycle in memory and execution. The 7-state model extends this by splitting the Blocked and Ready states into two states each when a process is moved to secondary storage (swapped out), creating the Blocked/Suspend and Ready/Suspend states and thereby providing more granular control for systems using virtual memory.

5-State Process Model:

This model describes the essential stages of a process:

 New: The process is being created by the operating system.
 Ready: The process has all its resources (like memory) and is waiting to be assigned to a CPU for execution.
 Running: The CPU is currently executing the process.
 Blocked/Wait: The process is waiting for some event to occur, such as an I/O operation to complete or a resource to become available, and is still in main memory.
 Exit/Terminate: The process has finished its execution and is being terminated by the OS, with its resources being freed.
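The legal moves between these states can be sketched as a transition table. This is a minimal illustration (real kernels encode states quite differently); note that Blocked never jumps straight to Running — it must pass through Ready first.

```python
# Allowed transitions in the 5-state model described above.
ALLOWED = {
    "New":     {"Ready"},
    "Ready":   {"Running"},
    "Running": {"Ready", "Blocked", "Exit"},  # preempted, waiting, terminated
    "Blocked": {"Ready"},                     # event occurred: back to Ready
    "Exit":    set(),                         # terminal state
}

def transition(state, new_state):
    """Move a process to `new_state`, rejecting illegal transitions."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical lifecycle: created, scheduled, blocks on I/O, resumes, exits.
s = "New"
for nxt in ["Ready", "Running", "Blocked", "Ready", "Running", "Exit"]:
    s = transition(s, nxt)
print(s)  # Exit
```

Attempting `transition("Blocked", "Running")` raises an error, capturing the rule that a blocked process must first re-enter the ready queue.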

7-State Process Model

This model introduces two additional states to better manage processes that are swapped out of main memory to secondary storage (like a hard disk).
 New
 Ready
 Running
 Blocked (or Wait)
 Exit (or Terminate)
 Ready/Suspend: The process is in secondary memory but is ready to run as
soon as it is brought back into main memory.
 Blocked/Suspend: The process is in secondary memory and is waiting for
an event, but it is also swapped out of main memory.

Fig: 7-State Process Model

Process management in a single-tasking or batch-processing system is easy, as only one process is active at a time. With multiple active processes (multiprogramming or multitasking), process management becomes complex, because the CPU must be efficiently utilized by multiple processes. Multiple active processes may share resources like memory and may communicate with each other, which complicates things further, as the Operating System then has to perform process synchronization.
The main advantages of multiprogramming are system responsiveness and better CPU utilization. We can run multiple processes in an interleaved manner on a single CPU: for example, when the current process is busy with I/O, we assign the CPU to some other process.

Differentiate between 5 state and 7 state process models.

A process is a program in execution; it is more than the program code, which is known as the text section. This concept applies across all operating systems, because every task the operating system performs needs a process to carry it out.

As a process executes, it changes state. The state of a process is defined by the current activity of that process. It is important to know that only one process can be running on any processor at any instant, while many processes may be ready and waiting.

Five state process Model

The states present in the 5-state model are as follows −

 New − When a new process is created, it enters the new state. It then waits to be loaded into RAM.
 Ready − Processes that are loaded into RAM and waiting for the CPU are in the ready state.
 Running − Processes that are executing on the CPU are in the running state. If the running process is in its critical section, other processes that need that section must wait in the ready state.
 Blocked − Processes that leave the CPU to wait for an event are in the blocked state. When the awaited event occurs, a process moves from the blocked state back to the ready state, and from ready to running when the CPU becomes free.
 Exit / Terminated − A process that has been removed from the CPU and RAM is in the terminated state.
Fig: Five state process Model

Seven state process model

The states present in seven state models are as follows −

 New − Contains processes that are newly arriving for execution.
 Ready − Contains processes that are present in main memory and available for execution.
 Running − Contains the process that is currently executing.
 Exit − Contains processes that have completed their execution.
 Blocked − Contains processes that are present in main memory and awaiting an event.
 Blocked/Suspend − Contains processes that are present in secondary memory and awaiting an event.
 Ready/Suspend − Contains processes that are present in secondary memory but will be available for execution as soon as they are loaded into main memory.
Fig: Seven state process model

Process Control Block (PCB):

A Process Control Block (PCB) is a data structure used by the operating system to store and manage all information about a specific process, acting as an "ID card" for it. The PCB holds key details such as the process state, program counter, CPU registers, memory-management information, and I/O status, and the OS uses it to keep track of process information and to manage execution.

 Each process is given a unique Process ID (PID) for identification.

 The PCB stores details such as process state, program counter, stack
pointer, open files, and scheduling info.

 During a state transition, the OS updates the PCB with the latest execution
data.
 It also includes register values, CPU quantum, and process priority.

 The Process Table is an array of PCBs that maintains information for all
active processes.

Structure of the Process Control Block:

A Process Control Block (PCB) is a data structure used by the operating system to manage information about a process. The PCB keeps track of many important pieces of information needed to manage processes efficiently; the diagram explains some of these key data items.

Fig: Process Control Block

 Pointer: A stack pointer that must be saved when the process is switched from one state to another, to retain the current position of the process.
 Process state: It stores the respective state of the process.

 Process number: Every process is assigned a unique ID, known as the process ID or PID; this field stores the process identifier.

 Program counter: Program Counter stores the counter, which contains the
address of the next instruction that is to be executed for the process.

 Register: When a process is running and its time slice expires, the current values of the process-specific registers are stored in the PCB and the process is swapped out. When the process is next scheduled to run, the register values are read from the PCB and written back to the CPU registers. This is the main purpose of the register fields in the PCB.

 Memory limits: This field contains information about the memory-management system used by the operating system. This may include page tables, segment tables, etc.

 List of Open files: This information includes the list of files opened for a
process.
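The fields listed above can be sketched as a Python data structure. Real PCBs are kernel-internal C structs, so this is only an illustrative model whose field names mirror the list in the text:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A simplified Process Control Block."""
    pid: int                              # process number / identifier
    state: str = "New"                    # process state
    program_counter: int = 0              # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    memory_limits: tuple = (0, 0)         # e.g. (base, limit) of the process
    open_files: list = field(default_factory=list)  # list of open files

# The process table is essentially a collection of PCBs, keyed by PID.
process_table = {p.pid: p for p in (PCB(pid=1), PCB(pid=2))}
process_table[1].state = "Ready"
process_table[1].open_files.append("notes.txt")   # hypothetical file name
print(process_table[1].state, process_table[1].open_files)
```

Each process gets its own independent PCB (note the `default_factory` so PCBs never share a registers dict or file list), matching the one-PCB-per-process rule stated above.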

Advantages of Process Control Block (PCB)

1. Stores Process Details

 Holds vital data like process ID, process state, program counter, CPU
registers, memory limits, etc.

 Acts as the identity card of the process within the OS.

2. Helps Resume Processes

 During context switching, the PCB stores the exact execution point and
environment.

 Allows the process to resume seamlessly without restarting.

3. Ensures Smooth Execution


 Keeps track of everything a process needs (CPU state, memory, files,
devices).

 Enables the OS to manage processes systematically and without disruption.

4. Facilitates Context Switching

 PCB is used to save and restore the state of processes when switching
between them.

 Ensures efficiency in multitasking and responsiveness in real-time systems.

5. Aids in Scheduling

 Stores priority, scheduling policy and time slice information.

 Helps the scheduler decide which process to run next.

6. Manages Resource Allocation

 Keeps records of open files, I/O devices, allocated memory per process.

 Enables effective resource tracking and avoidance of conflicts.
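The context-switching role described in point 4 above can be sketched as a toy save/restore cycle: the outgoing process's "CPU registers" are copied into its PCB, and the incoming process's saved registers are loaded back. All structures and values here are invented for illustration.

```python
cpu = {"pc": 0, "acc": 0}                 # pretend CPU register file

def context_switch(cpu, old_pcb, new_pcb):
    """Save the outgoing process's state and restore the incoming one's."""
    old_pcb["registers"] = dict(cpu)      # save outgoing state into its PCB
    old_pcb["state"] = "Ready"            # it can be resumed later
    cpu.update(new_pcb["registers"])      # restore incoming state to the CPU
    new_pcb["state"] = "Running"

p1 = {"pid": 1, "state": "Running", "registers": {}}
p2 = {"pid": 2, "state": "Ready", "registers": {"pc": 40, "acc": 7}}

cpu.update({"pc": 12, "acc": 99})         # P1 has been computing for a while
context_switch(cpu, p1, p2)

print(cpu)                 # {'pc': 40, 'acc': 7}: P2's saved state restored
print(p1["registers"])     # {'pc': 12, 'acc': 99}: P1 can resume seamlessly
```

This is exactly why the PCB "helps resume processes": P1's execution point survives in its PCB while P2 occupies the CPU.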

Disadvantages of Process Control Block (PCB)

 Uses More Memory: Each process needs its own PCB, so having many processes can consume a lot of memory.

 Slows Context Switching: During a context switch, the system has to update the PCB of the old process and load the PCB of the new one, which takes time and affects performance.

 Security Risks: If the PCB is not well protected, someone could access or modify it, causing security problems for processes.

Thread

In an operating system, a thread is a lightweight unit of execution within a process that allows for concurrency and multitasking within a single application. Unlike separate processes, threads within the same process share the same memory space, code, and resources, but each thread has its own program counter, registers, and stack. This shared-resource model makes threads more efficient than processes, as they can be created and switched between more rapidly, improving application responsiveness and system resource utilization.

Fig: Thread Types

Key Characteristics

Lightweight Process: Threads are often called "lightweight processes" because they have their own execution context but are less resource-intensive to create and manage than full processes.
Shared Resources: Threads within a process share the same memory space (code,
data, heap) and other resources like open files.
Independent Execution Context:
Each thread maintains its own program counter, register set, and stack space,
allowing for independent execution flow.

How Threads Work

Concurrency: Threads enable multiple activities to run "concurrently", or appear to run at the same time on a single-core CPU through rapid switching. On multi-core systems, true parallel execution is possible.
Responsiveness:
Threads improve application responsiveness by allowing background tasks, such
as I/O operations or network communication, to run without blocking the main
user interface thread.
Efficiency:
Creating and switching between threads requires fewer operating system
resources than doing so for multiple processes.
Example: A Web Browser
Imagine a web browser application, which is a process.

Main Thread: Handles user interactions, like clicking links.
Other Threads: One thread might download images in the background; another could spell-check your typing or autosave your work.
All these threads run within the same browser process, sharing the browser's memory and resources, which allows the browser to remain responsive while performing multiple tasks simultaneously.
Thread lifecycle:
1. Creation: The first stage in the lifecycle of a thread is its creation. In most
programming languages and environments, threads are created by
instantiating a thread object or invoking a thread creation function. During
creation, you specify the code or function that the thread will execute.
2. Ready/Runnable: After a thread is created, it enters the "ready" or
"runnable" state. In this state, the thread is ready to run, but the operating
system scheduler has not yet selected it to execute on the CPU. Threads in
the ready state are typically waiting for the scheduler to allocate CPU time
to them.
3. Running: When the scheduler selects a thread from the pool of ready
threads and allocates CPU time to it, the thread enters the "running" state.
In this state, the thread's code is being executed on the CPU. A running
thread will continue to execute until it either voluntarily yields the CPU
(e.g., through sleep or wait operations) or is preempted by a higher-priority
thread.
4. Blocked/Waiting: Threads can enter the "blocked" or "waiting" state when
they are waiting for some event to occur, such as I/O operations,
synchronization primitives (e.g., locks or semaphores), or signals from other
threads. When a thread is blocked, it is not eligible to run until the event it
is waiting for occurs.
5. Termination: Threads can terminate either voluntarily or involuntarily.
Voluntary termination occurs when a thread completes its execution or
explicitly calls a termination function. Involuntary termination can happen
due to errors (e.g., segmentation faults) or signals received from the
operating system.
6. Dead: Once a thread has terminated, it enters the "dead" state. In this
state, the thread's resources (such as memory and handles) are
deallocated, and it no longer exists as an active entity in the system. Dead
threads cannot be restarted or resumed.
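The lifecycle stages above can be observed with Python's `threading` module. This minimal sketch creates a thread, starts it (Ready/Running), lets it block on an event (Blocked/Waiting), and then joins it after termination (Dead):

```python
import threading

done = threading.Event()

def worker():
    done.wait()                          # 4. Blocked/Waiting until signalled

t = threading.Thread(target=worker)      # 1. Creation
alive_before = t.is_alive()              # False: created but not yet started

t.start()                                # 2-3. Ready -> Running
alive_running = t.is_alive()             # True: running or blocked on the event

done.set()                               # the awaited event occurs
t.join()                                 # 5. Termination: wait for it to finish
alive_after = t.is_alive()               # 6. Dead: False, cannot be restarted

print(alive_before, alive_running, alive_after)
```

Note that `is_alive()` only distinguishes "started and not yet terminated" from the rest; the Ready/Running/Blocked distinction is managed internally by the scheduler, just as the text describes.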
The three main multithreading models in an operating system are Many-to-One, One-to-One, and Many-to-Many, which differ in how user-level threads are mapped to kernel-level threads. The Many-to-One model maps multiple user threads to a single kernel thread, which is efficient but limits parallelism and can block all threads if one blocks. The One-to-One model maps each user thread to a separate kernel thread, enabling true parallelism but incurring higher overhead and restrictions on the number of threads. The Many-to-Many model offers a balance by mapping multiple user threads to multiple kernel threads, providing flexibility and better resource utilization.

1. Many-to-One Model

 Mapping: Multiple user-level threads are mapped to a single kernel-level thread.
 Pros: Efficient thread management, as it is done in user space without kernel intervention.
 Cons:
o Does not allow true concurrency, because only one user thread can execute on the kernel thread at a time.
o If a single user thread makes a blocking system call or crashes, it blocks the entire kernel thread, halting all other threads in the process.
o Cannot effectively use multi-core processors.
2. One-to-One Model

Mapping: Each user-level thread has its own corresponding kernel-level thread.
Pros: Allows for true parallelism, enabling multiple threads to run concurrently
on different processors or cores.
Cons: Creating a user thread requires creating a new kernel thread, which adds
overhead. There can be a significant burden on the system from too many kernel
threads, which can restrict the number of user threads that can be created.

3. Many-to-Many Model

Mapping: Multiple user threads are mapped to a smaller or equal number of kernel threads.
Pros: Combines the benefits of the other two models, offering flexibility and
efficient resource management. Allows for true parallelism by mapping multiple
user threads to multiple kernel threads. Blocking system calls on one user thread
do not block other threads in the process.
Multithreading is the execution of multiple threads at the same time within a single process.

There are two main threading models in process management: user-level threads
and kernel-level threads.

User-level threads: In this model, the operating system does not directly support threads. Instead, threads are managed by a user-level thread library that is part of the application; the library manages the threads and schedules them onto available processors. The advantages of user-level threads include greater flexibility and portability, as the application has more control over thread management. The disadvantage is that the kernel is unaware of user-level threads, so a blocking system call by one thread can block the entire process, and true parallelism across CPUs is not possible.

Kernel-level threads: In this model, the operating system directly supports threads
as part of the kernel. Each thread is a separate entity that can be scheduled and
executed independently by the operating system. The advantages of kernel-level
threads include better performance and scalability, as the operating system can
schedule threads more efficiently. However, the disadvantage is that kernel-level
threads are less flexible and portable than user-level threads, as they are
managed by the operating system.
There are also hybrid models that combine elements of both user-level and
kernel-level threads. For example, some operating systems use a hybrid model
called the "two-level model", where each process has one or more user-level
threads, which are mapped to kernel-level threads by the operating system.

Overall, the choice of threading model depends on the requirements of the application and the capabilities of the underlying operating system.

Here are some advantages and disadvantages of each threading model:

User-level threads:

Advantages:

Greater flexibility and control: User-level threads provide more control over
thread management, as the thread library is part of the application. This allows
for more customization and control over thread scheduling.

Portability: User-level threads can be more easily ported to different operating systems, as the thread library is part of the application.

Disadvantages:

Context switching between user-level threads is actually fast, because switching between ULTs happens entirely in user space, without invoking the kernel. The inefficiency appears when:

1. A ULT makes a blocking system call, which blocks the whole process.

2. Parallelism is needed on multiple CPUs, since the kernel is unaware of ULTs.

Kernel-level threads:

Advantages:

 Greater parallelism: Kernel-level threads can be scheduled on multiple processors, allowing true parallelism, better scalability, and better use of available resources.

 Trade-off to note: per-operation overhead (creation, context switching) is higher than for user-level threads, due to system calls and mode switching. Kernel threads are therefore not always "faster": they are slower for lightweight operations than ULTs, but better for multi-core execution and scenarios involving blocking I/O.

Disadvantages:

Less flexibility and control: Kernel-level threads are managed by the operating
system, which provides less flexibility and control over thread management
compared to user-level threads.

Less portability: Kernel-level threads are more tightly coupled to the operating
system, which can make them less portable to different operating systems.

Hybrid models:

Advantages:

Combines the advantages of both models: Hybrid models combine the benefits of user-level and kernel-level threads, providing greater flexibility and control while also improving performance.

More scalable: Hybrid models can scale to larger numbers of threads and processors, allowing better use of available resources. They also reduce blocking issues and permit concurrency without excessive kernel overhead.

Disadvantages:

More complex: Hybrid models are more complex than either user-level or kernel-level threading, which can make them more difficult to implement and maintain.

Requires more resources: Hybrid models require more resources than either user-level or kernel-level threading, as they need both a thread library and kernel-level support.

Many operating systems, such as Solaris, support kernel threads and user threads in a combined way.

Process Control system calls:


Fig: Process Control system calls

Process control system calls are a category of operating system services that
manage the lifecycle and attributes of a process, allowing applications to create,
terminate, suspend, and resume processes, and to synchronize their execution.
Key examples include fork() and exec() (Unix/Linux) or CreateProcess() (Windows)
for process creation, and exit() (Unix/Linux) or ExitProcess() (Windows) for
termination.
What They Do
Process control system calls perform actions such as:

Creation: Using calls like fork() or CreateProcess() to generate a new process.


Execution: Employing exec() or CreateProcess() to load and run a new program.
Termination: Ending a process's execution using exit() or ExitProcess().
Suspension and Resumption: Pausing a process or bringing it back to a running
state.
Synchronization: Managing the timing and interaction between different
processes, often using calls like wait() or WaitForSingleObject().
Attribute Management: Getting and setting properties of a process, such as its
ID.

Examples of Common Process Control System Calls


fork(): (Unix/Linux) Duplicates the calling process, creating a new child
process with its own copy of the address space that inherits attributes such as
open file descriptors.
exec(): (Unix/Linux) Replaces the current process's image with a new one,
effectively launching a new program.
exit(): (Unix/Linux) Terminates the calling process's execution normally.
wait(): (Unix/Linux) Suspends the parent process until a child process finishes.
CreateProcess(): (Windows) Creates a new process and its primary thread.
ExitProcess(): (Windows) Terminates a process.
WaitForSingleObject(): (Windows) Waits for a process (or other handle) to change state.

CPU-Bound vs I/O-Bound Processes

A CPU-bound process requires more CPU time, i.e. it spends more time in the
running state. An I/O-bound process requires more I/O time and less CPU time,
so it spends more time in the waiting state. Process scheduling is an integral
part of process management in an operating system: it is the mechanism the OS
uses to decide which process runs next. The goal of process scheduling is to
improve overall system performance by maximizing CPU utilization, maximizing
throughput, and improving system response time.
Process Management Tasks
Process management is a key part in operating systems with multi-programming
or multitasking.
 Process Creation and Termination : Process creation involves creating a
Process ID, setting up Process Control Block, etc. A process can be
terminated either by the operating system or by the parent process.
Process termination involves clearing all resources allocated to it.
 CPU Scheduling : In a multiprogramming system, multiple processes need
to get the CPU. It is the job of Operating System to ensure smooth and
efficient execution of multiple processes.
 Deadlock Handling : Making sure that the system does not reach a state
where two or more processes cannot proceed due to cyclic dependency on
each other.
 Inter-Process Communication : Operating System provides facilities such as
shared memory and message passing for cooperating processes to
communicate.
 Process Synchronization : Process Synchronization is the coordination of
execution of multiple processes in a multiprogramming system to ensure
that they access shared resources (like memory) in a controlled and
predictable manner.

Process Operations

Please remember a process goes through different states before termination and
these state changes require different operations on processes by an operating
system. These operations include process creation, process scheduling, execution
and killing the process. Here are the key process operations:

Process management for a single-tasking or batch processing system is easy, as
only one process is active at a time. With multiple processes
(multiprogramming or multitasking) being active, process management becomes
complex, as the CPU needs to be shared efficiently among processes. Multiple
active processes may share resources like memory and may communicate with each
other. This further complicates things, as the operating system has to perform
process synchronization.

Please remember the main advantages of multiprogramming are system
responsiveness and better CPU utilization. We can run multiple processes in an
interleaved manner on a single CPU. For example, when the current process is
busy with I/O, we assign the CPU to some other process.

Process Creation
Process creation in an operating system (OS) is the act of generating a new
process. This new process is an instance of a program that can execute
independently.
Scheduling
Once a process is ready to run, it enters the "ready queue." The scheduler's job is
to pick a process from this queue and start its execution.
Execution
Execution means the CPU starts working on the process. During this time, the
process might:
 Move to a waiting queue if it needs to perform an I/O operation.
 Be preempted (returned to the ready queue) if a higher-priority process needs the CPU.
Killing the Process
After the process finishes its tasks, the operating system ends it and removes its
Process Control Block (PCB).
Context Switching of Process
The process of saving the context of one process and loading the context of
another is known as context switching. In simple terms, the running process is
moved back to the ready state and another ready process is loaded to run.
When Does Context Switching Happen?
Context Switching Happen:
 When a high-priority process comes to a ready state (i.e. with higher
priority than the running process).
 An Interrupt occurs.
 User and kernel-mode switch (It is not necessary though)
 Preemptive CPU scheduling is used.

Context Switch vs Mode Switch

A mode switch occurs when the CPU privilege level changes, for example when a
system call is made or a fault occurs. The kernel runs in a more privileged
mode than a standard user task. If a user process wants to access things that
are only accessible to the kernel, a mode switch must occur. The currently
executing process need not change during a mode switch. A mode switch is
typically required for a process context switch to occur, and only the kernel
can cause a context switch.

Process Scheduling
Uniprocessor scheduling
Uniprocessor scheduling is the task of deciding which process runs next on a
single CPU. The aim is to maximize resource utilization, minimize response
time, and ensure fairness by switching among multiprogrammed processes that are
waiting for I/O or other events. Scheduling happens at three levels
(long-term, medium-term, and short-term), with short-term scheduling using
algorithms such as First-Come-First-Served (FCFS), Round Robin, Shortest Job
Next (SJN), and priority-based scheduling to manage the execution sequence.

Fig: Uniprocessor scheduling


Goals of Uniprocessor Scheduling
The primary objectives of uniprocessor scheduling are:

 Maximizing CPU utilization: keeping the CPU busy as much as possible.

 High throughput: completing a large number of processes in a given amount of
time.

 Minimizing response time: providing users with quick responses to their
requests in interactive systems.

 Fairness: ensuring that each process gets a fair share of CPU time and
preventing starvation.

Types of Scheduling
Uniprocessor scheduling is often broken down into three levels:

1. Long-Term Scheduling: decides which new processes will be admitted into the
ready queue to be considered for execution.

2. Medium-Term Scheduling: involved in swapping, moving processes between main
memory and secondary storage to manage the degree of multiprogramming.

3. Short-Term Scheduling: the dispatcher, which is invoked very frequently
(e.g., on a clock interrupt), selects a process from the ready queue to be
executed on the CPU.

Short-Term Scheduling Algorithms


Common short-term scheduling algorithms include:

 First-Come-First-Served (FCFS): a non-preemptive algorithm where processes
are executed in the order they arrive in the ready queue.

 Round Robin: a preemptive algorithm where each process gets a fixed time
slice (quantum); if it doesn't complete within that time, it is moved to the
end of the ready queue.

 Shortest Job Next (SJN): selects the process with the smallest estimated
execution time, which can improve average turnaround time but risks starving
long processes.

 Priority-Based Scheduling: processes are assigned priorities, and the
scheduler executes the highest-priority process. Lower-priority processes may
starve unless a priority aging mechanism is used.

Process Scheduling Algorithms

The operating system can use different scheduling algorithms to schedule
processes. Here are some commonly used scheduling algorithms:

 First-Come, First-Served (FCFS): This is the simplest scheduling algorithm,
where processes are executed on a first-come, first-served basis. FCFS is
non-preemptive, which means that once a process starts executing, it
continues until it finishes or waits for I/O.

 Shortest Job First (SJF): SJF selects the process with the shortest burst
time, where the burst time is the time a process needs to complete its
execution. SJF minimizes the average waiting time of processes. It can be
non-preemptive or preemptive (Shortest Remaining Time First).

 Round Robin (RR): Round Robin is a preemptive scheduling algorithm that
gives each process a fixed amount of time per round. If a process does not
complete its execution within that time, it is preempted and added to the
end of the ready queue. RR ensures fair distribution of CPU time to all
processes and avoids starvation.

 Priority Scheduling: This scheduling algorithm assigns a priority to each
process, and the process with the highest priority is executed first.
Priority can be set based on process type, importance, or resource
requirements.

 Multilevel Queue: This scheduling algorithm divides the ready queue into
several separate queues, each queue having a different priority. Processes
are queued based on their priority, and each queue uses its own scheduling
algorithm. This scheduling algorithm is useful in scenarios where different
types of processes have different priorities.

Advantages of Process Management


 Running Multiple Programs: process management lets you run multiple
applications at the same time, for example, listening to music while
browsing the web.

 Process Isolation: it ensures that different programs don't interfere with
each other, so a problem in one program won't crash another.

 Fair Resource Use: it makes sure resources like CPU time and memory are
shared fairly among programs, so even lower-priority programs get a chance
to run.

 Smooth Switching: it efficiently handles switching between programs, saving
and loading their states quickly to keep the system responsive and minimize
delays.

Disadvantages of Process Management

 Overhead: process management consumes system resources, because the OS must
maintain various data structures and scheduling queues. This requires CPU
time and memory, which can affect system performance.

 Complexity: designing and maintaining an OS is complicated by the need for
sophisticated scheduling algorithms and resource allocation methods.

 Deadlocks: to keep processes running smoothly together, the OS uses
mechanisms like semaphores and mutex locks. However, these can lead to
deadlocks, where processes get stuck waiting for each other indefinitely.

 Increased Context Switching: in multitasking systems, the OS frequently
switches between processes. Storing and loading the state of each process
(context switching) takes time and computing power, which can slow down the
system.
