1. What are SIMD systems, and how do they process multiple data elements
simultaneously? What are MIMD systems, and how do they handle multiple
independent tasks concurrently?
2. Discuss the role of interconnection networks in parallel computing. Why are they
critical to system performance?
3. Differentiate between shared-memory and distributed-memory systems in terms of
their architecture and performance.
4. Explain the challenges in coordinating processes and threads in parallel programming.
5. What is Amdahl’s Law, and how does it apply to speedup in parallel systems?
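As a starting point for this question, Amdahl's Law can be stated and explored numerically. The sketch below is illustrative only; the function name is ours:

```python
# Amdahl's Law: if a fraction f of a program's runtime can be
# parallelized, the best possible speedup on p processors is
#   S(p) = 1 / ((1 - f) + f / p)

def amdahl_speedup(f, p):
    """Upper bound on speedup for parallel fraction f on p processors."""
    return 1.0 / ((1.0 - f) + f / p)

# Even with 90% of the work parallelized, 10 processors give well
# under a 10x speedup; as p grows, S(p) approaches 1 / (1 - f) = 10.
print(amdahl_speedup(0.9, 10))   # ~5.26
```

The key observation for the question is that the serial fraction (1 - f) caps the achievable speedup no matter how many processors are added.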
6. Explain the concept of scalability in MIMD systems. What are the factors that affect
scalability?
7. How do you measure the speedup and efficiency of MIMD systems?
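The standard definitions of speedup and efficiency can be sketched directly (a minimal illustration; the function names are ours):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_serial / T_parallel."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E = S / p: speedup per processor.

    E = 1.0 corresponds to ideal (linear) scaling; values well
    below 1.0 indicate overhead such as communication or idling.
    """
    return speedup(t_serial, t_parallel) / p

# A program that takes 100 s serially and 25 s on 8 processors:
print(speedup(100.0, 25.0))        # 4.0
print(efficiency(100.0, 25.0, 8))  # 0.5
```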
8. How is GPU performance defined, and how can it be measured in parallel computing tasks?
9. What is the Message Passing Interface (MPI)? Describe some of the commonly used
MPI functions and their roles in parallel programming.
10. How is the trapezoidal rule implemented using MPI in distributed memory systems?
11. Explain the challenges and techniques involved in handling I/O in distributed memory
systems using MPI.
12. What are the common performance bottlenecks in MPI programs, and how can they
be mitigated?
13. How do you handle synchronization and data consistency in distributed memory
systems using MPI?