
HPC NOTES

Unit 3 – Parallel Algorithm Design


1. Explain different decomposition techniques.

(May 2023)
Answer:
Decomposition is the process of breaking a computational problem into smaller sub-problems that
can be solved concurrently.
There are several decomposition techniques used in parallel computing:

1. Domain Decomposition:

o In domain decomposition, the data on which a problem operates is divided into
smaller domains, and each domain is then assigned to a different processor.

o Each processor works on its domain, and communication may be required for data at
the boundaries.

o This technique is suitable for numerical methods, matrix operations, etc.

2. Data Decomposition:

o This technique focuses on partitioning the data across different processors.

o Each processor performs the same operation on different pieces of data.

o Suitable for problems like matrix multiplication, image processing, etc. (a
data-decomposition sketch follows this list).

3. Functional Decomposition:

o Here, the functions or tasks are divided among processors.

o Different tasks or functions are executed on different processors.

o Suitable for problems with multiple independent operations.

4. Recursive Decomposition:

o In this technique, the problem is broken into smaller sub-problems recursively until
the sub-problems are simple enough to be solved directly.

o Suitable for divide and conquer algorithms like quicksort, mergesort.
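
Example (a minimal C sketch of data decomposition; N, P, and the printout are
illustrative, and actual worker creation is omitted): each worker owns a
contiguous block of the data, with the remainder spread over the first workers.

#include <stdio.h>

#define N 10   /* total number of data elements (hypothetical) */
#define P 4    /* number of workers/processors (hypothetical)  */

int main(void) {
    /* Block-wise decomposition: each worker gets a contiguous chunk
       of the index range [0, N); the first N % P workers get one
       extra element so the work stays balanced. */
    for (int w = 0; w < P; w++) {
        int base  = N / P;
        int extra = N % P;
        int lo = w * base + (w < extra ? w : extra);
        int hi = lo + base + (w < extra ? 1 : 0);
        printf("worker %d handles indices [%d, %d)\n", w, lo, hi);
    }
    return 0;
}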

2. Write short note on mapping and load balancing.

(May 2023)
Answer:
Mapping:

• Mapping is the process of assigning tasks to the available processors.


• A good mapping should reduce inter-processor communication and maximize processor
utilization.

Load Balancing:

• Load balancing ensures that all processors perform equal amounts of work.

• It avoids some processors sitting idle while others are overloaded.

• Load balancing can be:

1. Static Load Balancing: Tasks are assigned before execution begins.

2. Dynamic Load Balancing: Tasks are assigned during execution based on workload.
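
Example (a minimal sketch, assuming OpenMP is available; the array and loop
bodies are hypothetical): the schedule clause selects static or dynamic
distribution of loop iterations among threads.

#include <omp.h>
#include <stdio.h>

int main(void) {
    static double a[1000];

    /* Static load balancing: iterations are divided among threads
       before the loop starts (fixed chunks per thread). */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < 1000; i++)
        a[i] = i * 0.5;

    /* Dynamic load balancing: idle threads grab the next chunk of 16
       iterations at run time, useful when iteration costs vary. */
    #pragma omp parallel for schedule(dynamic, 16)
    for (int i = 0; i < 1000; i++)
        a[i] = a[i] * a[i];

    printf("a[999] = %f\n", a[999]);
    return 0;
}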

3. Explain various approaches for algorithm design.

(May 2023)
Answer:
Various algorithm design approaches used in parallel computing are:

1. Divide and Conquer:

o The problem is divided into sub-problems, solved recursively, and combined to get
the result.

o Example: Merge sort, quick sort.

2. Greedy Method:

o It makes a sequence of choices, each of which looks best at the moment.

o Example: Prim’s algorithm, Kruskal’s algorithm.

3. Dynamic Programming:

o Used for optimization problems.

o Breaks the problem into overlapping subproblems and solves each subproblem only
once.

o Example: Floyd-Warshall algorithm (a small dynamic-programming sketch
follows this list).

4. Backtracking:

o Involves exploring all possible solutions by trying partial solutions and then
abandoning them if they are not suitable.

o Example: N-Queens problem.

5. Branch and Bound:

o Similar to backtracking, but uses bounding functions to prune branches that
cannot lead to a better solution.

o Example: Travelling Salesman Problem.
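
Example (a minimal C sketch of dynamic programming; the function name and
table size are illustrative): computing Fibonacci numbers bottom-up solves
each overlapping subproblem exactly once.

#include <stdio.h>

/* Dynamic programming: fib(i) depends on the overlapping subproblems
   fib(i-1) and fib(i-2); a bottom-up table solves each only once. */
long long fib(int n) {
    long long table[93];   /* fib(92) is the largest value fitting in 64 bits */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void) {
    printf("fib(40) = %lld\n", fib(40));   /* prints 102334155 */
    return 0;
}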


4. Explain Divide and Conquer method with example.

(Dec 2021)
Answer:
Divide and Conquer:

• It is a method where the problem is divided into sub-problems, which are solved recursively,
and the solutions of sub-problems are combined to get the final result.

Steps:

1. Divide: The problem is divided into smaller sub-problems.

2. Conquer: Solve the sub-problems recursively.

3. Combine: Combine the solutions of sub-problems to get the final result.

Example: Merge Sort Algorithm

• Divide: The array is divided into two halves.

• Conquer: Sort both halves recursively.

• Combine: Merge the sorted halves.
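
Example (a minimal sequential C sketch; the function and buffer names are
illustrative): the two recursive calls are independent, so a parallel version
could run them on different processors.

#include <string.h>

/* Divide and conquer: split [lo, hi), sort each half recursively,
   then merge the two sorted halves through a temporary buffer. */
void merge_sort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return;          /* base case: 0 or 1 element */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);      /* conquer: left half  */
    merge_sort(a, tmp, mid, hi);      /* conquer: right half */

    /* Combine: merge a[lo..mid) and a[mid..hi) into tmp, copy back. */
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}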

5. Write short note on Performance Measures.

(Dec 2021)
Answer:
Performance measures in parallel computing are used to evaluate the efficiency and effectiveness of
parallel algorithms.

1. Speedup (S):

o It is the ratio of time taken to solve a problem on a single processor to the time taken
on multiple processors.

o S = T1 / Tp

2. Efficiency (E):

o It is the ratio of speedup to the number of processors.

o E = S / P

3. Scalability:

o It measures how well an algorithm or system performs as the number of
processors increases.

4. Cost:

o It is the product of time and number of processors.

o Cost = P × Tp
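
Worked example (illustrative numbers): if T1 = 100 s on one processor and
Tp = 30 s on P = 4 processors, then S = 100/30 ≈ 3.33, E = 3.33/4 ≈ 0.83,
and Cost = 4 × 30 = 120 processor-seconds; since 120 > T1 = 100, this run is
not cost-optimal.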


6. What is Degree of Concurrency?

(May 2022)
Answer:

• The degree of concurrency is defined as the number of tasks that can be executed in parallel
at a particular time.

• It gives an idea of the inherent parallelism of the algorithm.

• A high degree of concurrency means more opportunities for parallel execution.

• It is determined by the maximum number of tasks that are not dependent on each other and
can run concurrently.
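
Example (illustrative): in a task graph where one initial task is followed by
four mutually independent tasks, the maximum degree of concurrency is 4.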

7. Explain loop level parallelism.

(May 2022)
Answer:

• Loop level parallelism is one of the simplest forms of parallelism.

• It involves executing different iterations of a loop simultaneously on different processors.

Example:

for (int i = 0; i < n; i++)
    a[i] = b[i] + c[i];

• This loop can be executed in parallel by assigning different iterations to different processors.

• It is suitable for data-parallel applications (an OpenMP version is sketched
below).
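
Example (a minimal sketch, assuming OpenMP; the function name vector_add is
illustrative): a single pragma distributes the independent iterations across
threads.

#include <omp.h>

/* Each thread executes a disjoint subset of the iterations; the
   iterations are independent, so no synchronization is needed
   inside the loop body. */
void vector_add(const int *b, const int *c, int *a, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}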

8. What is Recursive Decomposition?

(Dec 2020)
Answer:

• Recursive decomposition involves dividing a problem into smaller sub-problems recursively.

• Each sub-problem is solved independently and then combined.

• This technique is suitable for divide and conquer algorithms.

Example:

• Quicksort: The array is partitioned into sub-arrays recursively (sketched
after this list).


• Merge sort: The array is divided into two halves recursively until a single element is left.
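
Example (a minimal sequential C sketch of quicksort's recursive structure,
using the Lomuto partition scheme): each partition yields two independent
sub-problems that a parallel version could sort concurrently.

/* Recursive decomposition: partitioning splits a[lo..hi] into two
   independent sub-problems that can be sorted concurrently. */
void quicksort(int *a, int lo, int hi) {
    if (lo >= hi) return;                 /* base case: <= 1 element */
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)         /* Lomuto partition        */
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    quicksort(a, lo, i - 1);              /* left half: independent  */
    quicksort(a, i + 1, hi);              /* right half: independent */
}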

9. Differentiate between Data and Functional decomposition.

(Dec 2020)
Answer:

Aspect             | Data Decomposition                 | Functional Decomposition
Definition         | Partitioning data among processors | Partitioning tasks among processors
Function           | Same function applied to all data  | Different functions/tasks on the same data
Example            | Matrix multiplication              | Image processing with different filters
Communication need | May require minimal communication  | May require synchronization

10. Write short note on Load Balancing Strategies.

(May 2022)
Answer:
Load balancing is used to distribute the workload evenly among processors to improve performance.

Types of Load Balancing Strategies:

1. Static Load Balancing:

o Tasks are distributed at compile time.

o No run-time balancing is required.

2. Dynamic Load Balancing:

o Tasks are distributed at run-time.

o Suitable for irregular problems.

3. Work Stealing:

o Idle processors steal work from busy processors.

4. Master-Slave Strategy:

o Master assigns tasks dynamically to slave processors.
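
Example (a minimal sketch of dynamic task assignment using a shared counter,
assuming C11 atomics; NTASKS and the worker function are hypothetical, and
thread creation is omitted): an idle worker atomically claims the next task,
so faster workers automatically take on more of the work.

#include <stdatomic.h>
#include <stdio.h>

#define NTASKS 100            /* hypothetical number of tasks */

atomic_int next_task = 0;     /* shared "task queue" counter  */

/* Each worker thread runs this loop: claim a task, run it, repeat. */
void worker(int id) {
    for (;;) {
        int t = atomic_fetch_add(&next_task, 1);  /* claim next task */
        if (t >= NTASKS) break;                   /* no work left    */
        printf("worker %d runs task %d\n", id, t);
    }
}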

11. Write short note on Granularity.

(Dec 2020)
Answer:

• Granularity refers to the amount of computation in a task relative to the
communication overhead.
Types:

1. Fine-grain: Small computation per communication.

2. Coarse-grain: Large computation per communication.

3. Medium-grain: Balanced.

• Finer granularity allows better load balancing but increases communication.

• Coarser granularity reduces communication but may cause imbalance.
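
Example (illustrative): when adding two vectors of 1,000,000 elements, a
fine-grain decomposition might assign one element per task, while a
coarse-grain decomposition might assign blocks of 250,000 elements per task.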

12. Explain Task Mapping.

(May 2023)
Answer:

• Task mapping refers to the process of assigning tasks to processors.

• A good task mapping reduces inter-processor communication and maximizes parallelism.

Types:

1. Static Mapping: Fixed at compile time.

2. Dynamic Mapping: Done during execution.

3. One-to-one mapping: Each processor gets one task.

4. One-to-many / many-to-one mapping: one task assigned to multiple processors,
or several tasks assigned to one processor.

13. What is Scalability in parallel computing?

(May 2022)
Answer:

• Scalability refers to the ability of a system to achieve higher performance when resources are
increased.

• It shows how efficiently a parallel system utilizes an increasing number of
processors.

Types:

1. Strong Scalability: speedup achieved as processors increase while the
problem size stays fixed.

2. Weak Scalability: performance maintained as processors increase with the
problem size growing proportionally.
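
Example (illustrative): under strong scaling, a 100 s job would ideally finish
in 25 s on 4 processors; under weak scaling, the problem size grows with P so
the run time stays roughly constant. Strong scaling is limited by the serial
fraction of the program (Amdahl's law), while weak scaling is described by
Gustafson's law.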

14. State and explain characteristics of Parallel Algorithms.

(May 2023)
Answer:
Characteristics of Parallel Algorithms:

1. Concurrency: Tasks run simultaneously.


2. Communication: Data exchange between tasks.

3. Synchronization: Coordination between tasks.

4. Scalability: Handles increasing processors.

5. Load Balancing: Uniform distribution of work.

6. Cost-optimality: the processor-time product (P × Tp) should be close to the
best sequential time.

15. What is Parallel Algorithm? State steps in designing parallel algorithm.

(Dec 2021)
Answer:
Parallel Algorithm:

• A parallel algorithm is one that uses multiple processors to solve a problem simultaneously.

• It reduces execution time by using parallelism.

Steps in Designing Parallel Algorithm (Foster's PCAM methodology):

1. Partitioning: Divide the problem into sub-tasks.

2. Communication: Identify data dependencies and communication.

3. Agglomeration: Combine small tasks to reduce overhead.

4. Mapping: Assign tasks to processors efficiently.
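
Example (a minimal sketch of the four steps applied to a parallel array sum,
assuming OpenMP; the array contents are illustrative):

#include <omp.h>
#include <stdio.h>

#define N 1000000

/* Partitioning:  each element's addition is a tiny task.
   Communication: per-thread partial sums must be combined (reduction).
   Agglomeration: OpenMP groups iterations into per-thread chunks.
   Mapping:       the runtime assigns those chunks to processor cores. */
int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;

    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);   /* expected: 1000000.000000 */
    return 0;
}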
