Multiple-Processor Scheduling
When multiple CPUs (processors) are available, CPU scheduling
becomes more complex because the operating system must decide how
to distribute processes across processors efficiently.
1. Types of Multiprocessing Architectures
A. Homogeneous Processors
All processors are identical, meaning any process can run on any
CPU.
Scheduling is flexible and can balance the load efficiently.
B. Asymmetric Multiprocessing (AMP)
A single master processor handles all system tasks, such as
scheduling and I/O management.
The remaining processors only execute user tasks without
accessing system data.
This avoids the need for complex data sharing but creates a
bottleneck at the master processor.
C. Symmetric Multiprocessing (SMP) (Most Common)
Each processor is self-scheduling, meaning no single CPU is in
control.
Processes are managed in either:
o A single ready queue shared by all processors.
o Separate ready queues for each processor.
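The two queue organizations can be sketched in a few lines of Python. This is a hypothetical illustration, not kernel code: in a real SMP kernel the shared queue would need locking, and the queue entries would be process control blocks rather than strings.

```python
from collections import deque

# Option 1: a single ready queue shared by all processors.
# (A real kernel would protect this with a lock, since any CPU may touch it.)
shared_queue = deque(["P1", "P2", "P3", "P4"])

def dispatch_shared(cpu_id):
    """Any CPU pops the next ready process from the common queue."""
    return shared_queue.popleft() if shared_queue else None

# Option 2: a private ready queue per processor.
# (No contention, but the queues can become unbalanced.)
per_cpu_queues = {0: deque(["P1", "P3"]), 1: deque(["P2", "P4"])}

def dispatch_private(cpu_id):
    """Each CPU self-schedules from its own queue."""
    q = per_cpu_queues[cpu_id]
    return q.popleft() if q else None

print(dispatch_shared(0))   # P1: first process in the common queue
print(dispatch_private(1))  # P2: first process in CPU 1's own queue
```

The trade-off shown here motivates the load balancing discussed later: the shared queue balances itself automatically but is a contention point, while private queues avoid contention but can leave one CPU idle while another is overloaded.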
2. Processor Affinity (Sticking to One Processor)
A process tends to stay on the same CPU where it started running,
because moving to another CPU invalidates the cache that has been
warmed with its data, causing expensive cache misses while that data
is reloaded. This behavior is called processor affinity.
Soft Affinity – The OS prefers to keep a process on the same CPU
but can move it if needed.
Hard Affinity – The process is strictly bound to a specific CPU and
cannot move.
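The soft/hard distinction can be sketched as a toy CPU-selection function. This is a hypothetical simulation (real OSes expose affinity through calls such as Linux's sched_setaffinity); the `pick_cpu` function, the load threshold of 2, and the process dictionary are all invented for illustration.

```python
def pick_cpu(process, cpu_loads, hard=False):
    """Choose a CPU for `process`, preferring the one it last ran on."""
    preferred = process["last_cpu"]
    if hard:
        return preferred              # hard affinity: never migrate
    # Soft affinity: stay on the preferred CPU unless it is much busier
    # than the least-loaded one (threshold chosen arbitrarily here).
    least_loaded = min(cpu_loads, key=cpu_loads.get)
    if cpu_loads[preferred] - cpu_loads[least_loaded] > 2:
        return least_loaded           # migrate to balance load
    return preferred

proc = {"name": "P1", "last_cpu": 0}
print(pick_cpu(proc, {0: 5, 1: 1}))              # 1: soft affinity gives way
print(pick_cpu(proc, {0: 5, 1: 1}, hard=True))   # 0: hard affinity stays put
```

Note how soft affinity trades some cache warmth for better balance, while hard affinity guarantees cache locality at the risk of leaving other CPUs underused.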
3. Variations: Processor Sets
Some systems allow defining processor sets, where specific
processors handle only certain types of tasks (e.g., reserving a CPU
for real-time processes).
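A processor set is essentially a mapping from task classes to the CPUs they may use. The sketch below is hypothetical (the set names and CPU numbers are invented), showing a CPU reserved for real-time work:

```python
# Hypothetical processor sets: CPU 3 is reserved for real-time tasks,
# everything else shares the remaining CPUs.
processor_sets = {
    "realtime": {3},
    "general":  {0, 1, 2},
}

def eligible_cpus(task_class):
    """Return the CPUs a task of this class may be scheduled on."""
    return processor_sets.get(task_class, processor_sets["general"])

print(eligible_cpus("realtime"))  # {3}
```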
Key Takeaways
SMP (Symmetric Multiprocessing) is the most common
today.
Processor affinity helps reduce context-switching overhead.
Asymmetric multiprocessing (AMP) simplifies management
but can create a bottleneck at the master processor.
Load Balancing in Multiple-Processor Scheduling
In Symmetric Multiprocessing (SMP), all CPUs should be kept busy to
maximize efficiency. Load balancing ensures that work is evenly
distributed among processors.
Types of Load Balancing:
1. Push Migration
o A special task checks CPU loads at regular intervals.
o If one CPU is overloaded, it pushes some tasks to other CPUs.
2. Pull Migration
o If a CPU is idle, it pulls a waiting task from a busy CPU.
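Both migration styles can be simulated over per-CPU queues. This is a minimal sketch with invented names and an arbitrary overload threshold, not a real balancer:

```python
from collections import deque

# Hypothetical per-CPU run queues: CPU 0 is overloaded, CPU 1 is idle.
queues = {0: deque(["P1", "P2", "P3", "P4"]), 1: deque()}

def push_migration(queues, threshold=2):
    """A periodic balancer task pushes work off overloaded CPUs."""
    for src, q in queues.items():
        while len(q) > threshold:
            dst = min(queues, key=lambda c: len(queues[c]))
            if dst == src:
                break                     # nowhere less loaded to push to
            queues[dst].append(q.pop())   # push a task to the lightest CPU

def pull_migration(queues, idle_cpu):
    """An idle CPU pulls a waiting task from the busiest CPU."""
    busiest = max(queues, key=lambda c: len(queues[c]))
    if queues[busiest] and busiest != idle_cpu:
        queues[idle_cpu].append(queues[busiest].pop())

push_migration(queues)
print({cpu: list(q) for cpu, q in queues.items()})
```

After `push_migration`, both queues hold two tasks each. The two mechanisms are complementary: push migration runs periodically regardless of load, while pull migration reacts the moment a CPU goes idle.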
Real-Time CPU Scheduling (Simplified & Clear)
Real-time systems need tasks to run on time, with minimal delays.
1. Types of Real-Time Systems
🔹 Soft Real-Time: The system tries to complete tasks on time, but delays
are acceptable.
Example: Video streaming—if frames are delayed, quality drops,
but the system still works.
🔹 Hard Real-Time: Tasks must meet deadlines, or the system fails.
Example: A pacemaker—if it doesn’t send a pulse on time, it could
be life-threatening.
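The difference between the two system types is in how a missed deadline is treated, which the following hypothetical sketch makes concrete (the function and its return strings are invented for illustration):

```python
def run_task(finish_time, deadline, hard):
    """Model how a real-time system reacts to a task's completion time."""
    if finish_time <= deadline:
        return "on time"
    if hard:
        # Hard real-time: a missed deadline is a system failure.
        raise RuntimeError("deadline miss: hard real-time failure")
    # Soft real-time: the result is degraded (e.g., a dropped frame)
    # but the system keeps running.
    return "late but tolerated"

print(run_task(8, 10, hard=True))    # on time
print(run_task(12, 10, hard=False))  # late but tolerated
```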
2. Key Latencies (Delays in Execution)
⏳ Interrupt Latency: Time from when an event happens (e.g., pressing
a key) to when the CPU starts handling it.
Example: You click a button on a remote → slight delay before the
TV reacts.
⚡ Dispatch Latency: Time for the CPU to switch from one process to
another.
Example: A background task is running → an urgent task arrives →
small delay before switching.
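A rough analogue of dispatch latency can even be observed from user space: the sketch below measures how long a waiting thread takes to start running after it is signaled. This is a hypothetical experiment, and the number it prints reflects Python and OS thread-wakeup overhead, not true kernel dispatch latency.

```python
import threading
import time

ready = threading.Event()
signaled_at = wakeup_at = None

def waiter():
    """Plays the 'urgent task' that is waiting to be dispatched."""
    global wakeup_at
    ready.wait()                       # blocked until the event occurs
    wakeup_at = time.perf_counter()    # first instant the task runs again

t = threading.Thread(target=waiter)
t.start()
time.sleep(0.01)                       # let the waiter block first
signaled_at = time.perf_counter()
ready.set()                            # the event happens
t.join()

print(f"dispatch-like latency: {(wakeup_at - signaled_at) * 1e6:.0f} microseconds")
```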
3. Dispatch Latency Breakdown (Why Do Delays Happen?)
🔸 Preemption Delay: The CPU may be busy with a low-priority task and
needs to pause it to run an urgent task.
🔸 Resource Blocking: A high-priority task might be waiting because a
low-priority task holds a needed resource (e.g., a printer).
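Resource blocking can be reproduced with two threads and a lock. In this hypothetical sketch the "printer" is just a lock, and thread priorities are only implied by the names; the point is that the high-priority task cannot proceed until the low-priority task releases the resource:

```python
import threading
import time

printer_lock = threading.Lock()        # the shared resource (the "printer")
low_has_lock = threading.Event()
order = []                             # records the sequence of events

def low_priority():
    with printer_lock:                 # low-priority task grabs the printer
        order.append("low printing")
        low_has_lock.set()
        time.sleep(0.05)               # holds the resource for a while
        order.append("low done")

def high_priority():
    low_has_lock.wait()                # arrives after the printer is taken
    order.append("high blocked on printer")
    with printer_lock:                 # must wait despite higher priority
        order.append("high printing")

lo = threading.Thread(target=low_priority)
hi = threading.Thread(target=high_priority)
lo.start(); hi.start()
lo.join(); hi.join()
print(order)
```

The urgent task always finishes last here, which is exactly the priority-inversion problem that protocols like priority inheritance are designed to limit.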