Let's address each question in detail, categorized by units and topics, using the reference books provided.
### UNIT 1: Flynn's and Handler's Classification of Parallel Computing Structures. Pipelined and Vector Processors.
**1. Flynn's Classification of Parallel Processing**
Flynn's taxonomy is a classification for computer architectures based on the number of concurrent instruction (or control) streams and data streams available in the architecture:
- **Single Instruction, Single Data (SISD):** A single processor executes a single instruction stream to operate on data stored in a single memory.
- **Single Instruction, Multiple Data (SIMD):** Multiple processing elements perform the same operation on multiple data points simultaneously.
- **Multiple Instruction, Single Data (MISD):** Multiple processors execute different instructions on the same data stream.
- **Multiple Instruction, Multiple Data (MIMD):** Multiple processors operate on different data streams, executing different instructions independently.
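The SISD/SIMD distinction can be illustrated with a small sketch (pure Python, with function names chosen here for illustration; real SIMD hardware applies the operation to all elements in lockstep rather than looping):

```python
def sisd_add(a, b):
    # One instruction stream, one data stream: a single scalar addition.
    return a + b

def simd_add(xs, ys):
    # One instruction (add), multiple data: the same operation is applied
    # to every element pair (simulated sequentially here).
    return [x + y for x, y in zip(xs, ys)]
```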
**2. Handler's Classification of Parallel Computing Structures**
Handler's classification (the Erlangen Classification System) describes the parallelism of a computer C at three hardware levels, as a triple T(C) = &lt;K, D, W&gt;:
- **K (Control Units):** The number of processor control units that can operate in parallel.
- **D (ALUs per Control Unit):** The number of arithmetic logic units controlled by each control unit.
- **W (Word Length):** The number of bits processed by each ALU.
For example, a uniprocessor with one 32-bit ALU is described as &lt;1, 1, 32&gt;, while a SIMD array of 64 processing elements under one control unit would be &lt;1, 64, W&gt;. Pipelining at each level can be expressed with additional primed factors (e.g. K × K' for macro-pipelining).
**3. Pipelined Processors**
Pipelining is a technique in which multiple instruction phases are overlapped. Each stage of the pipeline completes one part of an instruction, so different stages work on different instructions simultaneously:
- **Stages in a Pipeline:** Typically fetch, decode, execute, memory access, and write-back.
- **Performance Improvement:** Pipelining improves performance by increasing instruction throughput, allowing the CPU to have multiple instructions at different stages of completion at once.
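The throughput gain can be quantified with the standard ideal-pipeline formula: n instructions on a k-stage pipeline take k + (n − 1) cycles, versus k × n cycles unpipelined. A minimal sketch (assuming no stalls or hazards; function names are illustrative):

```python
def pipeline_cycles(stages, instructions):
    # The first instruction takes `stages` cycles to drain the pipeline;
    # each subsequent instruction completes one cycle later.
    return stages + (instructions - 1)

def pipeline_speedup(stages, instructions):
    # Non-pipelined execution takes stages * instructions cycles.
    return (stages * instructions) / pipeline_cycles(stages, instructions)
```

Note that as the instruction count grows, the speedup approaches the number of stages (e.g. 100 instructions on a 5-stage pipeline finish in 104 cycles, a speedup of about 4.8).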
**4. Vector Processors**
Vector processors are designed to operate on entire data arrays (vectors) with a single instruction, which is particularly useful in applications like scientific computing and graphics processing:
- **Application Areas:** Climate modeling, physics simulations, computer graphics, and any application requiring high-performance numerical computations.
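A classic kernel that vector processors accelerate is SAXPY (y := a·x + y), which a vector machine can issue as a handful of vector instructions instead of a long scalar loop. A sketch of the computation itself (pure Python, sequential; the name follows the BLAS convention):

```python
def saxpy(a, x, y):
    # y := a*x + y, applied elementwise; on a vector processor this whole
    # loop collapses into a vector multiply-add over the two arrays.
    return [a * xi + yi for xi, yi in zip(x, y)]
```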
### UNIT 2: Data and Control Hazards and Methods to Resolve Them. SIMD Multiprocessor Structures.
**1. Data Hazards**
Data hazards occur when instructions that exhibit data dependence access data in different stages of a pipeline. Types of data hazards include:
- **RAW (Read After Write):** An instruction needs to read a location that a previous instruction writes to.
- **WAR (Write After Read):** An instruction needs to write to a location that a previous instruction reads from.
- **WAW (Write After Write):** Two instructions write to the same location, and the writes must complete in program order.
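The three hazard types above can be detected mechanically by comparing the read and write sets of two instructions. A minimal sketch (the dictionary layout and register names are illustrative, not from any particular ISA):

```python
def classify_hazard(first, second):
    # Each instruction is a dict of register sets it reads and writes,
    # e.g. {"reads": {"r2"}, "writes": {"r1"}}.  `first` precedes `second`
    # in program order.
    hazards = []
    if first["writes"] & second["reads"]:
        hazards.append("RAW")   # second reads what first writes
    if first["reads"] & second["writes"]:
        hazards.append("WAR")   # second writes what first reads
    if first["writes"] & second["writes"]:
        hazards.append("WAW")   # both write the same location
    return hazards
```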
**2. Control Hazards**
Control hazards, also known as branch hazards, occur when the pipeline fetches instructions before the outcome of a branch is known. If the branch is mispredicted, the wrongly fetched instructions must be flushed, wasting cycles.
**3. Methods to Resolve Hazards**
- **Stalling:** Inserting pipeline bubbles (no-operation instructions, NOPs) to allow time for the hazard to clear.
- **Forwarding (Bypassing):** Routing a result from a later pipeline stage directly to a dependent instruction in an earlier stage, without waiting for the register write-back.
- **Branch Prediction:** Guessing the direction of branches so the pipeline can keep fetching, reducing control hazards.
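The effect of forwarding on stall counts can be shown with a toy model of a classic five-stage pipeline, under the common textbook assumptions that the register file writes in the first half of WB and reads in the second half of ID, and that forwarding bypasses ALU results from EX/MEM and loaded values from MEM/WB (the function and its parameters are illustrative):

```python
def raw_stalls(distance, forwarding, load_use=False):
    # `distance` = number of instructions between producer and consumer
    # (1 = back to back) in a 5-stage IF/ID/EX/MEM/WB pipeline.
    if forwarding:
        # ALU results forward with no stall; only a back-to-back
        # load-use pair still needs one bubble.
        return 1 if (load_use and distance == 1) else 0
    # Without forwarding the consumer reads in ID while the producer
    # writes in WB (split-cycle register file), so back-to-back
    # dependent instructions stall for two cycles.
    return max(0, 3 - distance)
```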
**4. SIMD Multiprocessor Structures**
SIMD (Single Instruction, Multiple Data) structures execute the same operation on multiple data points simultaneously. This is ideal for data-parallel tasks such as image processing and scientific computations.
### UNIT 3: Interconnection Networks, Parallel Algorithms for Array Processors, Search Algorithms, MIMD Multiprocessor Systems.
**1. Interconnection Networks**
Interconnection networks connect processors and memory modules in multiprocessor systems. Common types include:
- **Bus-based Networks:** A shared communication line; cheap, but a potential bottleneck as processor count grows.
- **Crossbar Switches:** Directly connect every input to every output; fast, but hardware cost grows quadratically.
- **Multistage Networks:** Combine small switches in stages to form complex interconnections at intermediate cost.
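The cost difference between a crossbar and a multistage network can be made concrete by counting switching elements: an n×n crossbar needs n² switchpoints, while an n-input omega network (one common multistage design) needs log₂(n) stages of n/2 two-by-two switches. A small sketch (assuming n is a power of two; function names are illustrative):

```python
import math

def crossbar_switchpoints(n):
    # An n-by-n crossbar places a switchpoint at every input/output crossing.
    return n * n

def omega_switches(n):
    # An n-input omega network uses log2(n) stages of n/2 2x2 switches.
    stages = int(math.log2(n))
    return stages * (n // 2)
```

For 8 processors the crossbar already needs 64 switchpoints versus 12 switches for the omega network, and the gap widens quickly with n.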
**2. Parallel Algorithms for Array Processors**
Parallel algorithms divide a task into sub-tasks that can be executed simultaneously on array processors, optimizing performance for workloads like matrix multiplication and the fast Fourier transform (FFT).
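Matrix multiplication decomposes naturally: each processing element can compute a disjoint block of rows of the result independently. A sketch of the per-element work (pure Python, sequential; the row-block decomposition and function name are illustrative):

```python
def matmul_rows(a, b, rows):
    # Computes only the requested rows of a @ b; on an array processor
    # each processing element would evaluate its own row block in parallel.
    n = len(b[0])
    k = len(b)
    return {i: [sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in rows}
```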
**3. Search Algorithms**
Parallel search algorithms distribute search operations across multiple processors, reducing search time significantly. Common methods include parallel depth-first and breadth-first searches.
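The simplest form of parallel search partitions the data among workers, each of which scans its own chunk. A sketch using Python threads as stand-in "processors" (the chunking scheme and names are illustrative; real speedup would need true parallel hardware):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_search(data, target, workers=4):
    # Split the data into `workers` chunks and scan each in its own thread;
    # return the first matching global index, or -1 if absent.
    chunk = (len(data) + workers - 1) // workers

    def scan(start):
        for i, v in enumerate(data[start:start + chunk]):
            if v == target:
                return start + i
        return -1

    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = [h for h in pool.map(scan, range(0, len(data), chunk))
                if h != -1]
    return min(hits) if hits else -1
```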
**4. MIMD Multiprocessor Systems**
MIMD (Multiple Instruction, Multiple Data) systems allow processors to execute different instructions on different data independently. This is the most versatile and widely used architecture for complex computing tasks.
### UNIT 4: Scheduling and Load Balancing in Multiprocessor Systems, Multiprocessing Control and Algorithms.
**1. Scheduling in Multiprocessor Systems**
Effective scheduling in multiprocessor systems ensures that all processors are utilized efficiently. Strategies include:
- **Static Scheduling:** Tasks are assigned to processors before execution starts.
- **Dynamic Scheduling:** Tasks are assigned to processors on the fly during execution.
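The static/dynamic trade-off shows up clearly when task costs are uneven: a fixed round-robin assignment can leave processors imbalanced, while a dynamic greedy policy (send each task to the currently least-loaded processor) evens things out. A sketch under the assumption that task costs are known numbers (function names are illustrative):

```python
import heapq

def static_schedule(task_costs, n_procs):
    # Static: tasks go to processors in a fixed round-robin order,
    # decided before execution starts.
    loads = [0] * n_procs
    for i, cost in enumerate(task_costs):
        loads[i % n_procs] += cost
    return loads

def dynamic_schedule(task_costs, n_procs):
    # Dynamic (greedy): each task is dispatched to whichever processor
    # is least loaded at that moment, tracked with a min-heap.
    heap = [(0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    loads = [0] * n_procs
    for cost in task_costs:
        load, p = heapq.heappop(heap)
        loads[p] = load + cost
        heapq.heappush(heap, (loads[p], p))
    return loads
```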
**2. Load Balancing**
Load balancing distributes workloads evenly across processors so that no single processor is overloaded. Techniques include:
- **Round-robin:** Distributes tasks evenly in cyclic order.
- **Randomized:** Assigns tasks to processors at random.
- **Heuristic-based:** Uses specific criteria to assign tasks to processors dynamically.
**3. Multiprocessing Control and Algorithms**
Multiprocessing control involves coordinating multiple processors working on a common task. Algorithms for multiprocessing include:
- **Barrier Synchronization:** Ensures all processors reach a certain point before any of them proceeds.
- **Mutexes and Semaphores:** Control access to shared resources to avoid conflicts.
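Both primitives are available directly in Python's `threading` module, which makes for a compact demonstration: a barrier guarantees that no thread starts its second phase until every thread has finished its first, while a lock (mutex) serializes access to a shared list (the function name and phase labels are illustrative):

```python
import threading

def run_phases(n_threads, results):
    # Each thread does phase 1, waits at the barrier until all threads
    # have finished phase 1, then does phase 2.
    barrier = threading.Barrier(n_threads)
    lock = threading.Lock()

    def worker(tid):
        with lock:                      # mutex guards the shared list
            results.append(("phase1", tid))
        barrier.wait()                  # barrier synchronization point
        with lock:
            results.append(("phase2", tid))

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Because of the barrier, every `phase1` entry is appended before any `phase2` entry, regardless of how the threads are scheduled.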
By following these detailed explanations and utilizing the references provided, students can gain a comprehensive understanding of advanced computer architecture topics.