
Unit 2 - Parallel and Distributed Computing (Exam Cheat Sheet)

UNIT 2 - PARALLEL AND DISTRIBUTED COMPUTING (Quick Summary)

1. DATA PARALLELISM:

- Same task runs on different parts of data simultaneously.

- Examples: Image editing, Spark processing, ML training.
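
A minimal Python sketch of the idea (illustrative only; the square() task and the 4-process pool are assumptions, not from the notes):

from multiprocessing import Pool

def square(x):
    # The same task, applied independently to each element of the data
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Pool.map splits the input across 4 worker processes
        print(pool.map(square, range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]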

2. CONTROL PARALLELISM:

- Different tasks run simultaneously on the same/different data.

- Examples: Web server, Mobile apps, Gaming engines, Autonomous vehicles.
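
A minimal sketch of control parallelism with threads (fetch_data and render_ui are hypothetical tasks, chosen only to show two different tasks running at once):

import threading, time

def fetch_data():
    # Task 1: simulate an I/O-bound download
    time.sleep(0.1)
    print("data fetched")

def render_ui():
    # Task 2: a different task, running at the same time
    print("ui rendered")

t1 = threading.Thread(target=fetch_data)
t2 = threading.Thread(target=render_ui)
t1.start(); t2.start()
t1.join(); t2.join()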

3. PERFORMANCE METRICS:

- Speedup = Serial Time / Parallel Time

- Efficiency = Speedup / Number of Processors

- Scalability: How well performance improves with more processors.

- Throughput: Tasks completed per second.

- Latency: Time taken to complete one task.
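
A worked example of the first two metrics, assuming hypothetical timings (100 s serial, 30 s on 4 processors):

serial_time = 100.0    # seconds on 1 processor (assumed figure)
parallel_time = 30.0   # seconds on 4 processors (assumed figure)
processors = 4

speedup = serial_time / parallel_time     # 3.33x
efficiency = speedup / processors         # ~0.83, i.e. 83%
print(f"Speedup = {speedup:.2f}x, Efficiency = {efficiency:.0%}")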

4. AMDAHL'S LAW:

- Formula: Speedup = 1 / ((1-P) + P/N)

- Even with many processors, the sequential portion limits the overall speedup.

- Example: If 80% is parallel, speedup with 4 processors = 2.5x
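
A quick check of the formula and the 2.5x example in Python:

def amdahl_speedup(p, n):
    # p = parallel fraction, n = number of processors
    return 1 / ((1 - p) + p / n)

print(amdahl_speedup(0.8, 4))      # 2.5 (matches the example above)
print(amdahl_speedup(0.8, 1000))   # ~4.98: the 20% serial part caps speedup near 5x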

5. COMMUNICATION OVERHEAD:

- Delays due to data exchange between processors.

- Caused by latency, bandwidth limits, and waiting/synchronization delays.

- Reduced by: batching data, fast networks, overlapping work with data transfer.
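
A sketch of "batching data" using multiprocessing.Pool's chunksize argument (the workload and chunk sizes are assumptions; chunksize simply groups items per transfer):

from multiprocessing import Pool

def work(x):
    return x * x

if __name__ == "__main__":
    data = range(100_000)
    with Pool(4) as pool:
        # chunksize=1 ships items one at a time (many small transfers);
        # a large chunksize batches them, cutting per-message overhead.
        one_at_a_time = pool.map(work, data, chunksize=1)
        batched = pool.map(work, data, chunksize=5000)
    assert one_at_a_time == batched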

6. SPMD VS MPMD:

- SPMD: One program, different data. (e.g., Weather modelling)

- MPMD: Different programs and data. (e.g., Car subsystems - GPS, camera)
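
A rough sketch of the contrast using multiprocessing (MPI is the usual tool for SPMD; the model/gps/camera functions are hypothetical):

from multiprocessing import Process

def model(region):
    # SPMD: every process runs this same program on a different region of data
    print("simulating", region)

def gps():
    print("gps subsystem running")      # MPMD: a different program...

def camera():
    print("camera subsystem running")   # ...than this one

if __name__ == "__main__":
    spmd = [Process(target=model, args=(r,)) for r in ("north", "south")]
    mpmd = [Process(target=gps), Process(target=camera)]
    for p in spmd + mpmd:
        p.start()
    for p in spmd + mpmd:
        p.join()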

7. PROCESSOR INTERCONNECTIONS:

- Static (fixed): Mesh, Ring, Tree, Torus, Hypercube

- Dynamic (flexible): Crossbar, Bus, MIN (Multistage Interconnection Networks such as Omega, Butterfly)

- Used in AI clusters, cloud data centers, supercomputers
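
A small helper contrasting two static topologies; the diameter formulas used (ring of N nodes: N//2, hypercube of dimension d: d, with 2**d nodes of degree d) are standard facts, not from the notes:

def ring_diameter(n):
    # n-node ring: each node has degree 2, longest shortest path is n // 2
    return n // 2

def hypercube_diameter(d):
    # d-dimensional hypercube: 2**d nodes, degree d, diameter d
    return d

print(ring_diameter(16))       # 8 hops across a 16-node ring
print(hypercube_diameter(4))   # 4 hops across a 16-node hypercube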

8. SHARED VS DISTRIBUTED MEMORY:

- Shared: One memory space; easy to program but scalability is limited.

- Distributed: Each processor has its own memory; scales better but needs message passing.
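
A toy sketch of the two models in Python: a shared counter (shared memory) versus a Queue message (distributed-style message passing); both workers are assumptions for illustration:

from multiprocessing import Process, Queue, Value

def bump(counter):
    # Shared memory: both sides see the same counter object
    with counter.get_lock():
        counter.value += 1

def send(q):
    # Distributed style: data moves only by explicit message passing
    q.put("partial result")

if __name__ == "__main__":
    counter = Value("i", 0)
    q = Queue()
    workers = [Process(target=bump, args=(counter,)),
               Process(target=send, args=(q,))]
    for w in workers:
        w.start()
    msg = q.get()          # receive the message before joining
    for w in workers:
        w.join()
    print(counter.value, msg)   # 1 partial result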

9. LOAD BALANCING & TASK SCHEDULING:

- Static: Assigned before execution.

- Dynamic: Assigned during execution.

- Scheduling: OLB (Opportunistic Load Balancing, basic), Self-scheduling (adaptive)
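
A sketch of static vs dynamic assignment with assumed task times; here imap_unordered plays the role of self-scheduling:

from multiprocessing import Pool
import time

def task(seconds):
    # Tasks of uneven size (assumed timings)
    time.sleep(seconds)
    return seconds

if __name__ == "__main__":
    times = [0.4, 0.4, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
    with Pool(2) as pool:
        # Static: work split into fixed chunks before execution; one worker
        # gets both long tasks while the other sits idle.
        pool.map(task, times, chunksize=4)
        # Dynamic (self-scheduling): each worker pulls the next task as soon
        # as it finishes, evening out the load.
        list(pool.imap_unordered(task, times, chunksize=1))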

Real Life Use Cases:

- Netflix (Spark), Instagram (Image processing), Tesla (self-driving AI), AWS, Weather forecasting.
