Chapter 03: Processes
Actions taken by the kernel during a context switch
When the CPU switches from one process to another, the kernel performs the following steps:
1. Trap to kernel mode
o Triggered by a timer interrupt, system call, or I/O interrupt.
2. Save context of the current (running) process
o CPU registers, program counter (PC), stack pointer (SP), and processor status
word are saved into the Process Control Block (PCB) of the current process.
3. Update process state
o The PCB of the old process is updated (e.g., Running → Ready/Waiting) and the
process is placed into the appropriate queue.
4. Select next process
o The CPU scheduler chooses the next process to execute from the ready queue
according to the scheduling algorithm.
5. Load context of the new process
o The saved CPU state of the selected process is restored from its PCB (registers,
PC, SP, etc. are reloaded).
o If necessary, memory management info (page tables, base/limit registers) is also
reloaded.
6. Resume execution
o The CPU switches back to user mode and continues execution of the new process
from where it was last stopped.
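The sequence above can be pictured as a small user-space simulation. The PCB fields, the simulated CPU registers, and the context_switch() routine below are illustrative stand-ins only (a real kernel performs this in privileged mode, largely in assembly); the numbers in the comments refer to the steps listed above.

```c
#include <stdio.h>

enum proc_state { READY, RUNNING, WAITING, TERMINATED };

struct pcb {                         /* simplified Process Control Block */
    int pid;
    enum proc_state state;
    long pc, sp, regs[4];            /* saved PC, SP, and general registers */
};

/* simulated CPU state (stand-in for the real registers) */
static long cpu_pc, cpu_sp, cpu_regs[4];

static void save_context(struct pcb *p) {        /* step 2 */
    p->pc = cpu_pc;
    p->sp = cpu_sp;
    for (int i = 0; i < 4; i++) p->regs[i] = cpu_regs[i];
}

static void load_context(const struct pcb *p) {  /* step 5 */
    cpu_pc = p->pc;
    cpu_sp = p->sp;
    for (int i = 0; i < 4; i++) cpu_regs[i] = p->regs[i];
}

static void context_switch(struct pcb *old, struct pcb *next) {
    save_context(old);           /* 2. save old context into its PCB       */
    old->state = READY;          /* 3. update state (Running -> Ready)     */
    next->state = RUNNING;       /* 4. scheduler has already chosen `next` */
    load_context(next);          /* 5. restore the new process's context   */
    /* 6. a real kernel would now return to user mode and resume `next`    */
}

int main(void) {
    struct pcb a = { .pid = 1, .state = RUNNING, .pc = 100, .sp = 0x1000 };
    struct pcb b = { .pid = 2, .state = READY,   .pc = 200, .sp = 0x2000 };
    load_context(&a);                  /* pretend process 1 is running */
    context_switch(&a, &b);            /* e.g., a timer interrupt fires */
    printf("now running pid %d at pc=%ld\n", b.pid, cpu_pc);
    return 0;
}
```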
📌 Explanation of States
1. New
o Process is being created.
o Transition: Admitted by OS → moves to Ready state.
2. Ready
o Process is loaded in main memory and is waiting to be assigned to a processor (CPU).
o Transition: Scheduler dispatches it → moves to Running.
3. Running
o Process is currently executing on the CPU.
o Transitions:
Interrupt (time slice / time quantum expired) → back to Ready.
I/O request → goes to Waiting.
Process Exited → moves to Terminated.
4. Waiting (Blocked)
o Process cannot continue until an event occurs (e.g., I/O completion).
o Transition: Event completes → moves back to Ready.
5. Terminated (Exit)
o Process has finished execution and is removed from the system.
📌 Significance of Transitions
New → Ready: Process admitted by OS.
Ready → Running: CPU scheduler selects (dispatches) the process to be executed next.
Running → Ready: Time quantum expired or a higher-priority process arrived; the process is preempted by an interrupt.
Running → Waiting: Process requests I/O or waits for an event.
Waiting → Ready: Event (I/O) completes.
Running → Terminated: Process completes its execution and exits.
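The state diagram implied by these transitions can be captured as a tiny C sketch (illustrative only, not an OS API): an enum for the five states and a checker that allows exactly the transitions listed above.

```c
#include <stdbool.h>
#include <stdio.h>

enum state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* returns true only for the legal transitions described above */
static bool legal_transition(enum state from, enum state to) {
    switch (from) {
    case NEW:     return to == READY;                    /* admitted by OS          */
    case READY:   return to == RUNNING;                  /* dispatched by scheduler */
    case RUNNING: return to == READY                     /* interrupt / preemption  */
                      || to == WAITING                   /* I/O or event wait       */
                      || to == TERMINATED;               /* exit                    */
    case WAITING: return to == READY;                    /* event / I/O completes   */
    default:      return false;                          /* Terminated is final     */
    }
}

int main(void) {
    printf("Running -> Waiting legal? %d\n", legal_transition(RUNNING, WAITING)); /* 1 */
    printf("Waiting -> Running legal? %d\n", legal_transition(WAITING, RUNNING)); /* 0 */
    return 0;
}
```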
📌 Process Creation & Termination
1. Process Creation
Parent process creates child processes, forming a process tree.
Each process is uniquely identified by a Process Identifier (PID), which the OS uses to manage it.
🔹 Resource Sharing Options
Parent and child share all resources.
Child shares subset of parent’s resources.
Parent and child share no resources.
🔹 Execution Options
Parent and child execute concurrently.
Parent waits until child terminates.
🔹 Address Space
Child is a duplicate of parent (inherited memory space).
Alternatively, child may have a program loaded into it.
🔹 UNIX Example
fork() → creates a new child process.
exec() → replaces the process’s memory with a new program.
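A minimal UNIX sketch of this pattern (the choice of `ls -l` as the new program is just an example): the child created by fork() replaces its inherited address space with execlp(), one member of the exec() family, while the parent waits for it. The waitpid() call here also illustrates the wait() mechanism described under Process Termination below.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                     /* create a child (duplicate of the parent) */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: load a new program into this process's address space */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                   /* reached only if exec fails */
        _exit(127);
    } else {
        /* parent: runs concurrently, then waits for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```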
2. Process Termination
Process executes its last statement and calls exit().
Status/data is returned to the parent process via wait().
OS deallocates resources.
🔹 Parent Can Terminate Child (using abort()) if:
Child exceeded allocated resources.
Task assigned to child is no longer required.
Parent itself is exiting (and OS does not allow orphan processes).
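On UNIX, a parent that aborts a child typically does so with the kill() system call (the C library's abort() terminates only the calling process); a minimal sketch, assuming the child's task has simply become unnecessary:

```c
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {                  /* child: pretend to work forever */
        for (;;)
            pause();
    }
    sleep(1);                        /* parent: decides the child's task is no longer required */
    kill(pid, SIGKILL);              /* forcibly terminate the child */
    waitpid(pid, NULL, 0);           /* reap it so no zombie entry remains */
    printf("child %d terminated by parent\n", (int)pid);
    return 0;
}
```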
📌 Inter-Process Communication (IPC) Mechanisms
IPC allows processes to communicate and synchronize with each other. The two main IPC
mechanisms are:
1. Message Passing
2. Shared Memory
🔹 1. Message Passing
Processes communicate by sending and receiving messages through the kernel.
No direct sharing of memory.
✅ Advantages:
Simple to implement and easy to use.
Provides synchronization automatically.
Useful for communication in distributed systems.
❌ Disadvantages:
Slower due to system call and kernel involvement.
Message size may be limited.
Higher overhead for frequent communication.
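A minimal message-passing sketch using a POSIX pipe (one simple kernel-mediated channel; real systems might use message queues or sockets instead): the two processes share no memory, every byte travels through the kernel via write()/read(), and the blocking read() provides the automatic synchronization mentioned above.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                            /* child: sender */
        close(fd[0]);                          /* not reading */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);    /* "send": a system call into the kernel */
        close(fd[1]);
        _exit(0);
    }

    /* parent: receiver */
    close(fd[1]);                              /* not writing */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf);  /* "receive": blocks until data arrives */
    if (n > 0) printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```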
🔹 2. Shared Memory
A region of memory is shared between processes.
Processes read/write directly to this region without kernel involvement after setup.
✅ Advantages:
Very fast (direct access to memory, no kernel overhead).
Suitable for large data transfers.
More efficient for frequent communication.
❌ Disadvantages:
Requires explicit synchronization (semaphores, mutexes).
More complex to program.
Risk of race conditions and data inconsistency if not managed properly.
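A minimal shared-memory sketch between a parent and child, assuming a Linux-like system where MAP_ANONYMOUS is available (unrelated processes would instead name the region with shm_open() before mmap()-ing it). The child writes into the region directly, with no kernel call per access; here the parent's wait() stands in for the explicit synchronization (e.g., a semaphore) that real code would need.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* one page mapped shared, so it stays visible to both parent and child after fork() */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                              /* child: writes directly into the region */
        strcpy(shared, "written straight into shared memory");
        _exit(0);
    }

    wait(NULL);                                  /* crude synchronization: wait for the child */
    printf("parent read: %s\n", shared);         /* direct read, no send/receive system call */
    munmap(shared, 4096);
    return 0;
}
```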
📌 Comparison Table
| Aspect | Message Passing | Shared Memory |
|--------|-----------------|---------------|
| Communication | Via kernel (send/receive system calls) | Direct read/write to common memory |
| Speed | Slower (kernel overhead) | Faster (direct access) |
| Synchronization | Built-in (send/receive are blocking) | Must be handled explicitly (semaphores, mutexes) |
| Complexity | Simple to use | More complex (requires synchronization) |
| Use in distributed systems | Suitable (works across machines) | Not possible (limited to one machine) |
📌 Scenarios
Message Passing preferred when:
o Communication is infrequent or involves small messages.
o Processes are on different machines (distributed systems).
Shared Memory preferred when:
o Large volumes of data need to be exchanged.
o Processes are on the same machine and speed is critical.