OS Notes Final
Process scheduling is the activity performed by the operating system to decide the order in which
processes in the ready queue are given access to the CPU. Since multiple processes may be ready
to run at the same time, the OS must schedule them efficiently to improve performance and
responsiveness.
Explanation:
Scheduling ensures efficient use of the CPU and fair treatment of processes.
Main Goals of Process Scheduling:
• Maximize CPU utilization and throughput.
• Minimize waiting time, turnaround time, and response time.
• Ensure fairness so that no process is starved of CPU time.
The Long-Term Scheduler, also known as the Job Scheduler, is the operating system component that
controls the admission of jobs into the system: it determines which programs are admitted for
processing and when. It manages the job queue, selecting processes from the pool of submitted jobs
and loading them into the ready queue in main memory. Its main goal is to maintain a balanced mix
of I/O-bound and CPU-bound processes so that system resources are used efficiently. Unlike the
short-term scheduler, which operates frequently, the long-term scheduler executes far less often,
typically when a job completes or when the system load changes. In systems with heavy load or high
multitasking demands, a well-tuned long-term scheduler is essential to avoid resource starvation
or CPU overload.
+------------------+
|  Job Pool (Disk) |
+------------------+
         |
         |  [Long-Term Scheduler]
         v
+------------------+
|   Ready Queue    | <----+
+------------------+      |
         |                |
         |  [Short-Term Scheduler]
         v                |
+------------------+      |
|       CPU        |------+
+------------------+
         |
         |  [Process Completion]
         v
+------------------+
| Terminated Jobs  |
+------------------+
The Short-Term Scheduler, also known as the CPU scheduler, is a key component of an operating
system's process management. It selects one of the ready-to-execute processes and allocates the
CPU to it. This scheduler runs frequently, making a decision whenever the CPU must pick a new
process: when the running process terminates, blocks for I/O, or is preempted. It must be fast and
efficient, since its decisions directly affect system responsiveness and CPU utilization. The
short-term scheduler works with the ready queue, which holds all the processes that are ready and
waiting to run on the CPU. Unlike the long-term and medium-term schedulers, which run less
frequently, the short-term scheduler may run many times per second.
• Example: Deciding which process in the ready queue will execute on the CPU next.
The Medium-Term Scheduler is an operating system component that handles process swapping
— moving processes between main memory (RAM) and secondary storage (disk). It is primarily
used to improve multiprogramming and manage system load. When the memory is full or the
system is overloaded, the medium-term scheduler suspends some processes (moves them from
RAM to disk), freeing up memory for others. Later, it may resume those suspended processes
when resources are available. This form of scheduling is essential in time-sharing systems where
maintaining a balance between active and suspended processes improves performance and
responsiveness.
Categories of Scheduling:
• Non-preemptive: once a process gets the CPU, it keeps it until it terminates or blocks.
• Preemptive: the OS can take the CPU away from a running process, for example when its time
slice expires or a higher-priority process arrives.
First-Come, First-Served (FCFS)
Definition:
FCFS is the simplest CPU scheduling algorithm. In FCFS, the process that arrives first in the
ready queue is scheduled first for execution, just like standing in a line at a shop: whoever comes
first gets served first.
Characteristics:
• Type: Non-preemptive
• Scheduling Order: Based on arrival time
• Fairness: All processes are treated equally, but it may not always be efficient.
How It Works:
Processes are handled in the exact order they arrive, without interruption. Once a process starts
execution, it runs until completion.
Example:
Gantt Chart:
Time → 0 4 7 12
| P1 | P2 | P3 |
Calculation Table:
Process  Arrival Time  Burst Time  Start Time  Completion Time  Turnaround Time (CT - AT)  Waiting Time (TAT - BT)
P1       0             4           0           4                4                          0
P2       1             3           4           7                6                          3
P3       2             5           7           12               10                         5
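To make the arithmetic concrete, here is a small C sketch (illustrative only; the arrival and burst times are hard-coded from the table above) that reproduces the FCFS calculation:

#include <stdio.h>

int main(void) {
    /* Data from the example above (processes already sorted by arrival). */
    int arrival[] = {0, 1, 2};
    int burst[]   = {4, 3, 5};
    int n = 3, clock = 0;

    printf("Proc  Start  Completion  Turnaround  Waiting\n");
    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i];   /* CPU idles until arrival */
        int start      = clock;
        int completion = start + burst[i];            /* runs to completion, no preemption */
        int turnaround = completion - arrival[i];     /* TAT = CT - AT */
        int waiting    = turnaround - burst[i];       /* WT = TAT - BT */
        printf("P%d    %5d  %10d  %10d  %7d\n",
               i + 1, start, completion, turnaround, waiting);
        clock = completion;
    }
    return 0;
}

Running it prints the same start, completion, turnaround, and waiting values as the table.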
Advantages of FCFS:
• Simple to understand and implement.
• Fair in the sense that processes run strictly in arrival order; no process starves.
Disadvantages:
• Long average waiting time; one long process delays everything behind it (the convoy effect).
• Non-preemptive, so it is poorly suited to time-sharing and interactive systems.
Shortest Job First (SJF)
Definition:
SJF is a CPU scheduling algorithm that selects the process with the smallest CPU burst time from
the ready queue.
Types of SJF:
1. Non-Preemptive SJF
o Once a process starts executing, it runs till completion.
o Suitable for batch systems.
2. Preemptive SJF (also called Shortest Remaining Time First - SRTF)
o If a new process arrives with shorter burst time than the current one, it preempts
(interrupts) the current process.
o Better for interactive systems.
Execution Order: P1 → P4 → P3 → P2 (P1 arrives first and runs to completion; among the processes
that arrive while it runs, the shortest burst is chosen next).
Gantt Chart:
Time → 0 8 9 11 15
| P1 | P4 | P3 | P2 |
Final Table:
Advantages of SJF:
• Minimizes average waiting time — best among all algorithms in this regard.
• Efficient for batch processing.
Disadvantages:
• Burst times must be known or estimated in advance, which is rarely possible exactly.
• Long processes can starve if short jobs keep arriving.
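A minimal non-preemptive SJF sketch in C: the burst times come from the Gantt chart above, while the arrival times (0, 1, 2, 3) are assumed for illustration, since the notes do not list them.

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};   /* assumed arrival times */
    int burst[]   = {8, 4, 2, 1};   /* burst times read off the Gantt chart */
    int done[4] = {0}, n = 4, clock = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        /* Pick the shortest unfinished job that has already arrived. */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { clock++; finished--; continue; }  /* CPU idle, wait */
        clock += burst[pick];                /* non-preemptive: runs to completion */
        done[pick] = 1;
        printf("P%d finishes at %d\n", pick + 1, clock);
    }
    return 0;
}

With these assumptions the output order is P1, P4, P3, P2 with finish times 8, 9, 11, 15, matching the Gantt chart.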
Priority Scheduling
Definition:
Priority Scheduling is a CPU scheduling algorithm where each process is assigned a priority,
and the CPU is allocated to the process with the highest priority (usually, lower number = higher
priority).
If two processes have the same priority, they are scheduled according to FCFS (First-Come, First-
Served).
Types of Priority Scheduling:
1. Preemptive:
o A newly arrived process with higher priority can interrupt the currently running
process.
2. Non-Preemptive:
o The CPU remains with the current process until it finishes, even if a higher
priority process arrives.
Example (Non-Preemptive):
Execution Order: P1 → P2 → P3 (P1 arrives alone at t = 0 and runs first; at t = 4, P2 has a higher
priority than P3).
Gantt Chart:
Time → 0 4 7 9
| P1 | P2 | P3 |
Final Table:
Process  Arrival Time  Burst Time  Priority  Completion Time  Turnaround Time (CT - AT)  Waiting Time (TAT - BT)
P1       0             4           –         4                4                          0
P2       1             3           1         7                6                          3
P3       2             2           3         9                7                          5
(P1 runs first because it is the only ready process at t = 0, regardless of its priority.)
Advantages:
• Important (high-priority) work is served first.
• Flexible: priorities can reflect deadlines, importance, or resource needs.
Disadvantages:
• Low-priority processes may starve; this is usually fixed by aging (gradually raising the
priority of processes that have waited a long time).
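The selection step has the same shape as SJF, except processes are compared by priority instead of burst length. A small C sketch (data from the example above; P1's priority value is illustrative, since P1 runs first simply by arriving alone):

#include <stdio.h>

int main(void) {
    int arrival[]  = {0, 1, 2};
    int burst[]    = {4, 3, 2};
    int priority[] = {2, 1, 3};   /* lower number = higher priority;
                                     P1's value is an assumption */
    int done[3] = {0}, n = 3, clock = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        /* Pick the highest-priority process that has arrived and not finished. */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick < 0 || priority[i] < priority[pick]))
                pick = i;
        clock += burst[pick];     /* non-preemptive: runs to completion */
        done[pick] = 1;
        printf("P%d completes at %d\n", pick + 1, clock);
    }
    return 0;
}

The output (P1 at 4, P2 at 7, P3 at 9) matches the Gantt chart above.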
Round Robin (RR) Scheduling
Definition:
Round Robin gives each process in the ready queue a fixed slice of CPU time (the time quantum),
cycling through the queue in FIFO order.
Key Features:
• Type: Preemptive
• Time Quantum: Small fixed CPU time slice (e.g., 2ms, 4ms, etc.)
• Goal: Fairness and responsiveness
• Used in: Time-sharing and interactive systems
How It Works:
Ready processes wait in a FIFO queue. The process at the head runs for at most one time quantum;
if it does not finish, it is preempted and placed at the tail of the queue, and the next process
runs.
Example:
Let’s take three processes with a time quantum of 2 ms:
P1 (Arrival = 0, Burst = 5), P2 (Arrival = 1, Burst = 3), P3 (Arrival = 2, Burst = 1)
Gantt Chart:
Time → 0    2    4    5    7    8    9
     | P1 | P2 | P3 | P1 | P2 | P1 |
Explanation:
P1 runs its first quantum (0-2) and re-queues behind P2 and P3, which arrived meanwhile. P2 runs
2-4 and re-queues, P3 runs 4-5 and finishes, P1 runs 5-7, P2 finishes its last 1 ms at 7-8, and
P1 finishes at 8-9.
Final Table:
Process  Arrival  Burst  Completion  Turnaround (CT - AT)  Waiting (TAT - BT)
P1       0        5      9           9                     4
P2       1        3      8           7                     4
P3       2        1      5           3                     2
Advantages:
• Fair to all processes — no starvation
• Good for time-sharing systems
• Responsive to interactive users
Disadvantages:
• Frequent context switches add overhead.
• Performance is sensitive to the quantum size: too small causes excessive switching, too large
degenerates into FCFS.
• Average turnaround time is often worse than SJF.
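A compact Round Robin simulation in C (a sketch matching the example above: quantum of 2 ms, with new arrivals admitted to the queue before a preempted process is re-queued):

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2}, rem[] = {5, 3, 1};
    int n = 3, quantum = 2, clock = 0;
    int queue[64], head = 0, tail = 0, enqueued[3] = {0};

    queue[tail++] = 0; enqueued[0] = 1;          /* P1 is ready at t = 0 */
    while (head < tail) {
        int p = queue[head++];
        int slice = rem[p] < quantum ? rem[p] : quantum;
        clock += slice;
        rem[p] -= slice;
        for (int i = 0; i < n; i++)              /* admit new arrivals first */
            if (!enqueued[i] && arrival[i] <= clock) {
                queue[tail++] = i; enqueued[i] = 1;
            }
        if (rem[p] > 0) queue[tail++] = p;       /* unfinished: back of the queue */
        else printf("P%d completes at %d\n", p + 1, clock);
    }
    return 0;
}

The printed completion times (P3 at 5, P2 at 8, P1 at 9) agree with the final table above.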
Multilevel Queue (MLQ) Scheduling
In multilevel queue scheduling, the ready queue is split into several separate queues (e.g.,
system, interactive, batch), each with its own priority and its own scheduling algorithm.
How It Works:
1. Each process is permanently assigned to one queue based on its characteristics (e.g.,
priority, memory usage, etc.).
2. Each queue may have its own scheduling policy:
o Example:
▪ Interactive queue → Round Robin
▪ Batch queue → FCFS
3. CPU scheduling happens between queues, usually using priority (higher-priority queue
always gets CPU first).
4. No movement of processes between queues (unlike multilevel feedback queue).
Example:
Queue Setup:
• Q1 – System processes (highest priority, FCFS)
• Q2 – Interactive processes (Round Robin, quantum = 2 ms)
• Q3 – Batch processes (lowest priority, FCFS)
Processes:
P1 (AT 0, BT 4, Q1), P2 (AT 1, BT 6, Q2), P3 (AT 2, BT 3, Q3), P4 (AT 3, BT 5, Q2),
P5 (AT 4, BT 4, Q3)
Scheduling Order:
CPU always checks Q1 first, then Q2, then Q3.
Explanation:
P1 (Q1) runs first, 0-4. With Q1 empty, P2 and P4 share the CPU round-robin in Q2 from 4 to 15
(P2 finishes at 14, P4 at 15). Only then does Q3 run FCFS: P3 finishes at 18 and P5 at 22.
Final Table:
Process  Arrival Time  Burst Time  Queue             Completion Time  Turnaround Time (CT - AT)  Waiting Time (TAT - BT)
P1       0             4           Q1 (System)       4                4                          0
P2       1             6           Q2 (Interactive)  14               13                         7
P4       3             5           Q2 (Interactive)  15               12                         7
P3       2             3           Q3 (Batch)        18               16                         13
P5       4             4           Q3 (Batch)        22               18                         14
Advantages:
• Simple to implement, with low scheduling overhead.
• Each class of process gets the policy that suits it best.
Disadvantages:
• Inflexible: a process can never change queues, even if its behavior changes.
• Lower-priority queues can starve while higher-priority queues stay busy.
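To show the between-queue rule in isolation, here is a tiny C sketch (illustrative: the queue contents are hard-coded to match the example, and the Round Robin interleaving inside Q2 is ignored; only the fixed-priority dispatch across queues is modeled):

#include <stdio.h>

#define NQ 3

/* One ready queue per level; level 0 (Q1) has the highest priority. */
int queues[NQ][8] = { {1}, {2, 4}, {3, 5} };   /* process ids per queue */
int counts[NQ]    = { 1, 2, 2 };
int heads[NQ]     = { 0 };

/* Always dispatch from the highest-priority non-empty queue. */
int pick_next(void) {
    for (int q = 0; q < NQ; q++)
        if (heads[q] < counts[q])
            return queues[q][heads[q]++];
    return -1;   /* all queues empty */
}

int main(void) {
    int p;
    while ((p = pick_next()) != -1)
        printf("dispatch P%d\n", p);
    return 0;
}

The dispatch order (P1, then P2 and P4, then P3 and P5) mirrors the example's scheduling order.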
Multilevel Feedback Queue (MLFQ) Scheduling
MLFQ extends the multilevel queue by letting processes move between queues based on their
observed CPU behavior.
Key Characteristics:
• Multiple Queues: There are several queues, each with its own priority and scheduling
algorithm (e.g., Round Robin or FCFS).
• Feedback Mechanism: Processes that use too much CPU time are moved to lower-
priority queues. Processes that use less CPU time or are interactive may be moved to
higher-priority queues.
• Aging: Processes that wait too long in lower-priority queues may be moved to higher-
priority queues to prevent starvation.
How It Works:
1. Multiple Queues:
o Each queue has a different priority and scheduling algorithm. For example:
▪ Queue 1: Highest priority (uses Round Robin or FCFS)
▪ Queue 2: Medium priority (uses Round Robin)
▪ Queue 3: Lowest priority (uses FCFS)
2. Process Movement:
o When a process first enters the system, it starts in the highest priority queue.
o If a process exceeds its time quantum in a given queue, it is moved to a lower-
priority queue (i.e., gets punished for being CPU-intensive).
o If a process waits too long in a low-priority queue, it can be promoted to a higher-
priority queue to avoid starvation (this is aging).
3. Scheduling Behavior:
o Higher-priority queues are served first, and processes in these queues are given
shorter time quanta.
o Lower-priority queues are served only when higher-priority queues are empty,
and processes in these queues receive longer time quanta.
Example:
Process Information:
Queues: Q1 (quantum = 2 ms, highest priority), Q2 (quantum = 4 ms), Q3 (FCFS, lowest priority).
P1 (AT 0, BT 5), P2 (AT 1, BT 3), P3 (AT 2, BT 6), P4 (AT 3, BT 8)
Scheduling Behavior:
1. Initial State:
o All processes are placed in Q1 (highest priority).
2. Execution in Q1 (2 ms time quantum):
o P1 runs 0-2 (3 ms remaining) → moves to Q2 (it used its full time quantum).
o P2 runs 2-4 (1 ms remaining) → moves to Q2.
o P3 runs 4-6 (4 ms remaining) → moves to Q2.
o P4 runs 6-8 (6 ms remaining) → moves to Q2.
3. Execution in Q2 (4 ms time quantum):
o P1 runs 8-11 and finishes (CT = 11).
o P2 runs 11-12 and finishes (CT = 12).
o P3 runs 12-16 and finishes within its quantum (CT = 16).
o P4 runs 16-20 → 2 ms remaining → moves to Q3.
4. Execution in Q3 (FCFS, no preemption):
o P4 finishes its remaining 2 ms (CT = 22).
Final Table:
Process  Arrival Time  Burst Time  Queue Path     Completion Time  Turnaround Time (CT - AT)  Waiting Time (TAT - BT)
P1       0             5           Q1 → Q2        11               11                         6
P2       1             3           Q1 → Q2        12               11                         8
P3       2             6           Q1 → Q2        16               14                         8
P4       3             8           Q1 → Q2 → Q3   22               19                         11
Advantages of MLFQ:
1. Adaptability: MLFQ dynamically adjusts based on the behavior of processes. CPU-bound
processes are penalized, and interactive processes are favored.
2. Prevents Starvation: Thanks to the aging mechanism, processes that are waiting too long
in low-priority queues are promoted, preventing starvation.
3. Fairness: Balances CPU time distribution among different types of processes.
Disadvantages of MLFQ:
1. Complexity: MLFQ is more complex than simpler algorithms like FCFS or Round Robin.
Managing multiple queues and deciding when to promote/demote processes requires
additional logic.
2. Difficulty in Setting Parameters: The parameters (like time quanta and aging) need to be
carefully tuned. If the quantum is too small, too many context switches happen; if too large,
it behaves like FCFS.
3. Overhead: The system needs to monitor process behavior and move processes between
queues, which introduces additional overhead.
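The demotion rule can be seen in a simplified C simulation (an illustrative sketch: all four processes are assumed to be in Q1 from the start, aging and mid-run arrivals are omitted, and the quanta match the example above, 2 ms, then 4 ms, then FCFS):

#include <stdio.h>

int main(void) {
    int rem[]    = {5, 3, 6, 8};      /* remaining burst per process */
    int quanta[] = {2, 4, 0};         /* per level; 0 = run to completion (FCFS) */
    int n = 4, clock = 0;

    for (int level = 0; level < 3; level++) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;               /* already finished */
            int q = quanta[level];
            int slice = (q == 0 || rem[i] < q) ? rem[i] : q;
            clock += slice;
            rem[i] -= slice;
            if (rem[i] == 0)
                printf("P%d finishes at t=%d (in Q%d)\n", i + 1, clock, level + 1);
            /* else: it used the full quantum, so it is demoted, i.e.,
               simply considered again at the next (lower) level */
        }
    }
    return 0;
}

With this data the output reproduces the worked example: P1 at 11, P2 at 12, P3 at 16 (all in Q2) and P4 at 22 (in Q3).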
Critical Section
Definition:
A Critical Section refers to a part of a program (typically a piece of code) where shared
resources (such as variables, memory, or hardware) are accessed and modified. Because multiple
processes or threads can access these resources simultaneously, there’s a risk of conflicts or data
corruption.
The critical section problem arises when multiple processes or threads try to access and
manipulate shared resources concurrently. A correct solution must ensure that only one process or
thread is inside the critical section at a time (mutual exclusion), while still letting waiting
processes make progress. Using critical sections also brings some well-known problems:
• Deadlock: When two or more threads or processes wait for each other to release a critical
section, it can result in a deadlock situation in which none of the threads or processes can
move. Deadlocks can be difficult to detect and resolve, and they can have a significant impact
on a program’s performance and reliability.
• Starvation: When a thread or process is repeatedly prevented from entering a critical
section, it can result in starvation, in which the thread or process is unable to progress. This
can happen if the critical section is held for an unusually long period of time, or if a high-
priority thread or process is always given priority when entering the critical section.
• Overhead: When using critical sections, threads or processes must acquire and release
locks or semaphores, which can take time and resources. This may reduce the program’s
overall performance.
Hardware-Based Solutions
These solutions rely on special CPU instructions that are executed atomically (meaning they
cannot be interrupted). These instructions help prevent multiple processes from entering the critical
section at the same time.
• The CPU provides support to ensure that only one process can modify a shared variable or
resource at a time.
• These solutions are fast and efficient at the hardware level.
• However, they often involve busy waiting, where a process continuously checks if it can
enter the critical section, which wastes CPU time.
• They are also hardware-dependent, meaning their availability and behavior vary with
different systems.
1. Test-and-Set
This type of solution checks a value and changes it in one atomic step. It is often used to create
simple locks that control access to the critical section.
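In C11, the atomic_flag type and atomic_flag_test_and_set provide exactly this primitive. A minimal spinlock sketch (note the busy waiting discussed above):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;
int shared_counter = 0;

void acquire(void) {
    /* Atomically set the flag and return its old value; spin while it was set. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock);
}

void increment(void) {
    acquire();          /* entry section */
    shared_counter++;   /* critical section */
    release();          /* exit section */
}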
2. Compare-and-Swap
This solution compares the current value of a memory location with an expected value. If they
match, it replaces the value with a new one, all in one atomic operation.
• Ensures that only one process can succeed in accessing or modifying a resource.
• Helps in building more advanced synchronization tools like lock-free data structures.
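C11 exposes this operation as atomic_compare_exchange_strong. A sketch of the classic CAS retry loop, here used for a lock-free counter increment (the function name is just for illustration):

#include <stdatomic.h>

/* Lock-free increment built on compare-and-swap: retry until no other
   thread changed the value between our read and our swap. */
void lockfree_increment(atomic_int *v) {
    int old = atomic_load(v);
    /* On failure, 'old' is updated to the current value, so the next
       attempt proposes old + 1 based on fresh data. */
    while (!atomic_compare_exchange_strong(v, &old, old + 1))
        ;
}

Only one thread's swap can succeed for a given observed value, which is the property the bullet points above describe.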
3. Exchange Instruction
This operation swaps the contents of a register with a memory location atomically. It is used to
enforce mutual exclusion by controlling ownership of a resource.
Software-Based Solutions
These solutions are implemented using code and logical techniques without needing special
hardware instructions.
• They use flags, variables, and structured logic to coordinate which process can enter the
critical section.
• These solutions are more portable and flexible since they can work across different
hardware platforms.
• Many of them avoid busy waiting and are better suited for more complex or multi-process
environments.
• However, they can be harder to design correctly and may require additional OS support.
1. Mutex Locks
• A mutex (mutual exclusion) is a lock that only allows one process or thread to access the
critical section at a time.
• Before entering, a process must acquire the lock, and after exiting, it releases the lock.
• If another process has the lock, the requesting process must wait.
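A minimal POSIX threads sketch of this acquire/release discipline (assumes a POSIX system; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;

void *worker(void *arg) {
    pthread_mutex_lock(&lock);      /* acquire before entering */
    shared++;                       /* critical section */
    pthread_mutex_unlock(&lock);    /* release on exit */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);    /* always 4: no lost updates */
    return 0;
}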
2. Semaphores
• A semaphore is an integer counter manipulated by two atomic operations, wait (P) and signal
(V); a binary semaphore acts like a mutex, while a counting semaphore allows up to N concurrent
holders.
3. Peterson’s Algorithm
• A classic two-process software solution using a flag per process and a turn variable; it
guarantees mutual exclusion, progress, and bounded waiting (see the sketch below).
4. Bakery Algorithm
• A generalization to N processes: each process takes a numbered “ticket,” and the process with
the lowest ticket enters the critical section first, like customers at a bakery.
5. Monitors
• A high-level language construct that bundles shared data with the procedures that access it;
the compiler or runtime ensures only one thread is active inside the monitor at a time.
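As referenced in item 3 above, here is a sketch of Peterson's algorithm for two threads in C11 (atomics are used so the compiler and CPU do not reorder the accesses; illustrative, not production code):

#include <stdatomic.h>

atomic_int flag[2];     /* flag[i] = 1 means thread i wants to enter */
atomic_int turn;        /* whose turn it is to wait */

void enter(int i) {     /* i is 0 or 1 */
    int other = 1 - i;
    atomic_store(&flag[i], 1);         /* I want to enter */
    atomic_store(&turn, other);        /* politely let the other go first */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                              /* busy wait */
    /* critical section follows */
}

void leave(int i) {
    atomic_store(&flag[i], 0);         /* I am done */
}

If both threads try to enter at once, the last one to write turn is the one that waits, so exactly one proceeds.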
Deadlock
Deadlock is a state in a computer system where two or more processes are unable to proceed
because each is waiting for a resource that is being held by another process, and none of them
can release their currently held resources.
In simple terms, it’s a situation where processes are stuck waiting on each other forever, and
no process can complete its task.
Example in Concept:
Process A holds the printer and requests the scanner, while Process B holds the scanner and
requests the printer; neither can proceed, so both wait forever.
Necessary Conditions for Deadlock (all four must hold simultaneously):
1. Mutual Exclusion
• This means resources cannot be shared; only one process can use a resource at any given
time.
• For example, a printer or a file cannot be accessed by two processes at the same time —
one must wait for the other to finish.
• If mutual exclusion is not required, then multiple processes could use the resource
together, and deadlock would not happen.
2. Hold and Wait
• A process is holding at least one resource and waiting for another that is currently held
by a different process.
• This creates a situation where each process is holding something and waiting for
something else — a key step in the buildup to deadlock.
• If processes were required to request all needed resources at once, or release all resources
before requesting new ones, this condition would not occur.
3. No Preemption
• Resources cannot be forcibly removed from a process once they have been allocated.
• The process must release the resource voluntarily after completing its task.
• Without preemption, a process holding a resource might never give it up, preventing others
from using it — contributing to a deadlock.
• If preemption were allowed, the system could interrupt and take back a resource to resolve
the deadlock.
4. Circular Wait
• A circular chain of processes exists, where each process is waiting for a resource that is
held by the next process in the chain.
• For example:
o Process A is waiting for a resource held by Process B
o Process B is waiting for a resource held by Process C
o Process C is waiting for a resource held by Process A
→ This forms a loop where none of the processes can proceed.
• If the system prevents circular waiting by enforcing a strict resource request order, then
this condition can be avoided.
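This circular wait is easy to reproduce with two locks acquired in opposite orders. A hedged pthreads sketch (compile with -pthread; depending on thread timing the program will usually hang, which is precisely the deadlock):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

void *proc1(void *arg) {
    pthread_mutex_lock(&res_a);     /* holds A ... */
    pthread_mutex_lock(&res_b);     /* ... and waits for B */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

void *proc2(void *arg) {
    pthread_mutex_lock(&res_b);     /* holds B ... */
    pthread_mutex_lock(&res_a);     /* ... and waits for A -> circular wait */
    pthread_mutex_unlock(&res_a);
    pthread_mutex_unlock(&res_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, proc1, NULL);
    pthread_create(&t2, NULL, proc2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);         /* likely never returns: deadlock */
    puts("no deadlock this time");
    return 0;
}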
Deadlock Detection
Deadlock detection is a process in operating systems that checks whether a deadlock has
occurred. Unlike prevention or avoidance methods, which attempt to avoid deadlocks from
happening, detection assumes deadlocks will occur occasionally and focuses on identifying and
resolving them when they do.
How Deadlock Detection Works
The goal of deadlock detection is to recognize when a set of processes is in a deadlock state and
then take action to recover from it. The operating system uses algorithms to track the state of the
system, the resources being used, and the resources being requested by each process. If a deadlock
is detected, the OS can terminate processes or force resource preemption to resolve the
situation.
1. Process Termination
In this technique, deadlock is resolved by killing one or more processes involved in the deadlock.
Once a process is terminated, the resources it was holding are released, allowing other processes
to proceed.
• Abort All Processes: All processes involved in the deadlock are terminated.
o This is simple but may lead to the loss of a lot of work and data.
• Abort Processes One at a Time: Processes are terminated one by one until the deadlock
is resolved.
o This method tries to minimize the loss of resources or data by selectively
terminating processes.
2. Resource Preemption
In this method, the system forcefully takes resources away from processes to break the deadlock.
The resources are then allocated to the processes that need them to proceed.
3. Rollback
This method involves rolling back processes to a safe state and then restarting them, usually from
a checkpoint where they were not part of the deadlock.
• The system saves the state of processes periodically in the form of checkpoints.
• If a deadlock is detected, processes involved in the deadlock are rolled back to the last
checkpoint before the deadlock occurred.
• Once rolled back, these processes can restart and attempt to execute again, possibly in a
different order that avoids the deadlock.
4. Process Suspension
Another method for recovering from a deadlock is to suspend processes that are part of the
deadlock cycle. The processes can then be resumed once the deadlock is cleared.
• Processes involved in the deadlock are suspended, and their resources are released.
• After some time, when it’s safe to do so, the processes can be resumed.
Deadlock Prevention
Deadlock prevention works by making sure at least one of the four necessary conditions can
never hold.
1. Eliminate Mutual Exclusion
• Mutual Exclusion means that a resource can only be held by one process at a time.
• In many systems, mutual exclusion is necessary for certain resources (e.g., printers, files),
but not all resources.
How to Prevent Deadlock by Eliminating Mutual Exclusion:
• Make resources sharable wherever possible (e.g., read-only files), or use spooling (e.g., a
print spooler) so processes never need exclusive access. For inherently exclusive resources,
this condition cannot be eliminated.
2. Eliminate Hold and Wait
• Hold and Wait means a process is holding at least one resource while waiting to acquire
additional resources that are being held by other processes.
• Require processes to request all resources at once before execution. This is called the
All or Nothing Rule.
o If a process requests a resource, it must request all of the resources it will need
throughout its execution.
• Alternatively, processes can release resources they are holding before requesting
additional resources.
3. Eliminate No Preemption
• If a process is holding some resources and needs additional resources that are being held
by other processes, the system can forcefully take (or preempt) resources from processes.
o The resources can then be reassigned to other processes that need them.
4. Eliminate Circular Wait
• Circular Wait occurs when a set of processes are waiting for resources in a circular chain.
• Enforce an Ordering of Resources: This technique ensures that every process requests
resources in a predefined order. If all processes follow the same order when requesting
resources, then a circular wait cannot occur.
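In code, enforcing a resource order is as simple as always acquiring locks in one fixed, global order. A sketch reusing two POSIX mutexes:

#include <pthread.h>

pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;   /* order 1 */
pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;   /* order 2 */

/* Every thread takes res_a before res_b, so no thread can ever hold B
   while waiting for A, and a circular wait is impossible. */
void use_both(void) {
    pthread_mutex_lock(&res_a);
    pthread_mutex_lock(&res_b);
    /* ... critical work with both resources ... */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
}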
Difference between deadlock and starvation:
• Definition: Deadlock is a situation where two or more processes wait indefinitely for resources
held by each other. Starvation is a situation where a process waits indefinitely to acquire a
resource because other, higher-priority processes keep getting it.
• Cause: Deadlock happens when all four deadlock conditions hold (mutual exclusion, hold and
wait, no preemption, circular wait). Starvation is caused by unfair resource allocation or
scheduling priorities.
• Processes involved: Deadlock usually involves multiple processes in a circular wait.
Starvation often affects one or a few lower-priority processes.
• System behavior: In a deadlock, no involved process can proceed. Under starvation, some
processes continue, but one or more are indefinitely delayed.
• Resources held: In a deadlock, each process holds some resources while waiting for others. A
starving process may hold no resources at all; it just never gets a chance.
• Queue behavior: In a deadlock, the involved processes block on each other and nothing moves.
Under starvation there is no obvious queue; the process just keeps getting skipped.
• Solution: Deadlock requires detection, prevention, or recovery techniques. Starvation is
resolved by fairness mechanisms such as aging (gradually increasing the priority of long-waiting
processes) or round-robin scheduling.
• Detection: Deadlock can be found from a resource-allocation (wait-for) graph, since the
involved processes stop making progress entirely. Starvation is harder to detect without
monitoring or logging tools.
Memory Management
What is Memory Management?
Memory management is the process by which an operating system (OS) controls and
coordinates computer memory, assigning blocks to various running programs to optimize overall
system performance.
Example Analogy:
Think of memory as a hotel and memory management as the reception desk: the desk assigns rooms
(memory blocks) to guests (processes), keeps track of which rooms are occupied, and frees them
when guests check out.
Contiguous Memory Allocation
How It Works:
Each process is placed in one single, continuous block of physical memory.
Types:
• Fixed Partitioning:
o Memory is divided into equal-size parts.
• Variable Partitioning:
o Partitions are sized based on the process.
• Single Partition Allocation:
o Whole memory is allocated to one process.
o Simple but inefficient for multiprogramming.
Pros:
• Simple to implement.
• Fast access to memory.
Cons:
• Fixed partitioning wastes space inside partitions (internal fragmentation).
• Variable partitioning leaves unusable holes between processes (external fragmentation).
• A process cannot grow beyond its partition, and the degree of multiprogramming is limited.
Non-Contiguous Memory Allocation
The memory assigned to a process can be scattered across various locations in RAM instead of
being placed together in one block.
Paging
Memory is divided into small fixed-size blocks called pages (for processes) and frames (for
physical memory).
🔹 How It Works:
• A logical address is split into a page number and an offset; a per-process page table maps each
page to a physical frame.
🔹 Pros:
• No external fragmentation; any free frame can hold any page.
🔹 Cons:
• Internal fragmentation in a process's last page; page tables add memory and lookup overhead.
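Translation from a logical to a physical address is one division and one table lookup. A toy C sketch (the page size, page-table contents, and address are invented for illustration):

#include <stdio.h>

#define PAGE_SIZE 4096                 /* 4 KB pages (illustrative) */

int main(void) {
    /* Hypothetical page table: page_table[page] = frame number. */
    int page_table[] = {5, 2, 7, 1};

    int logical  = 8195;                       /* some logical address */
    int page     = logical / PAGE_SIZE;        /* 8195 / 4096 = page 2 */
    int offset   = logical % PAGE_SIZE;        /* offset 3 within the page */
    int physical = page_table[page] * PAGE_SIZE + offset;   /* frame 7 */

    printf("logical %d -> page %d, offset %d -> physical %d\n",
           logical, page, offset, physical);
    return 0;
}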
Segmentation
Memory is divided into variable-size segments based on logical divisions like code, data, stack,
etc.
🔹 How It Works:
• A logical address is a segment number plus an offset; a segment table stores each segment’s
base address and limit.
🔹 Pros:
• Matches the programmer’s logical view of a program; supports protection and sharing per
segment.
🔹 Cons:
• Variable segment sizes cause external fragmentation.
Virtual Memory
A memory management technique that uses a portion of the hard disk as an extension of RAM.
🔹 How It Works:
• Only the pages a process currently needs are kept in RAM (demand paging); touching a
non-resident page triggers a page fault, which loads it from disk, possibly evicting another
page.
🔹 Pros:
• Programs can be larger than physical RAM; more processes can run concurrently.
🔹 Cons:
• Page faults are slow; excessive swapping can cause thrashing, where the system spends most of
its time moving pages instead of doing useful work.
Buddy System
A method of dividing memory into blocks of size 2ⁿ and splitting or combining them as needed.
🔹 How It Works:
• A request is rounded up to the nearest power-of-two block; larger blocks are split in half
(“buddies”) as needed, and when both buddies are free they are merged back into one larger
block.
🔹 Pros:
• Splitting and coalescing are fast; external fragmentation is limited.
🔹 Cons:
• Rounding up to powers of two causes internal fragmentation.
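A sketch of the size-rounding step in C (the 16-byte minimum block is an assumption; the gap between the request and the chosen block is exactly the buddy system's internal fragmentation):

#include <stdio.h>

/* Round a request up to the next power of two >= min_block. */
unsigned buddy_block_size(unsigned request, unsigned min_block) {
    unsigned size = min_block;
    while (size < request)
        size *= 2;          /* blocks only come in sizes 2^n */
    return size;
}

int main(void) {
    unsigned sizes[] = {10, 100, 600, 5000};
    for (int i = 0; i < 4; i++) {
        unsigned s = buddy_block_size(sizes[i], 16);
        printf("request %u -> block %u (waste %u)\n", sizes[i], s, s - sizes[i]);
    }
    return 0;
}

Coalescing is cheap because a free block's buddy address differs from its own in exactly one bit (buddy = address XOR size).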
Memory Pooling
Pre-allocating a "pool" of memory blocks and reusing them for repetitive tasks.
🔹 How It Works:
• A pool of fixed-size blocks is allocated once up front; an allocation pops a block from a free
list and a free pushes it back, avoiding repeated calls to the general-purpose allocator.
🔹 Pros:
• Very fast, predictable allocation; no fragmentation within the pool.
🔹 Used In:
• Embedded and real-time systems, network buffers, and game engines.
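A minimal fixed-size memory pool in C (an illustrative sketch: the block size and count are arbitrary, and there is no protection against double frees):

#include <stdio.h>

#define BLOCK_SIZE 64
#define NUM_BLOCKS 8

/* A union keeps each block properly aligned to hold the free-list pointer. */
typedef union Block {
    union Block *next;
    unsigned char data[BLOCK_SIZE];
} Block;

static Block pool[NUM_BLOCKS];
static Block *free_list;

void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];     /* thread all blocks into a list */
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void) {
    if (!free_list) return NULL;         /* pool exhausted */
    Block *b = free_list;
    free_list = b->next;                 /* pop the head of the free list */
    return b;
}

void pool_free(void *p) {
    Block *b = p;
    b->next = free_list;                 /* push back onto the free list */
    free_list = b;
}

int main(void) {
    pool_init();
    void *a = pool_alloc(), *b = pool_alloc();
    printf("allocated %p and %p\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}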
Garbage Collection
Automatic reclamation of memory that a program can no longer reach, so the programmer does not
have to free it manually.
🔹 How It Works:
• The runtime tracks which objects are still reachable (directly or through references) and
reclaims the rest.
🔹 Techniques:
• Reference Counting
• Mark and Sweep
• Generational Garbage Collection
🔹 Pros:
• Eliminates most memory leaks and dangling-pointer bugs; simplifies programming.
🔹 Cons:
• CPU and memory overhead; collection pauses can hurt responsiveness.
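Reference counting is the easiest of these techniques to sketch. A hedged C example (the Object type and helper names are invented; note that cyclic references defeat this scheme, which is one reason mark-and-sweep exists):

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int refcount;
    int data;
} Object;

Object *obj_new(int data) {
    Object *o = malloc(sizeof *o);
    o->refcount = 1;                /* the creator holds the first reference */
    o->data = data;
    return o;
}

void obj_retain(Object *o)  { o->refcount++; }

void obj_release(Object *o) {
    if (--o->refcount == 0) {       /* last reference dropped: reclaim */
        printf("freeing object %d\n", o->data);
        free(o);
    }
}

int main(void) {
    Object *o = obj_new(42);
    obj_retain(o);                  /* a second owner appears */
    obj_release(o);                 /* first owner done: count drops to 1 */
    obj_release(o);                 /* second owner done: object freed here */
    return 0;
}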