
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING

EXPERIMENT NO. 7

Lab Title: OpenMP & Relevant Concepts

Student Names: M. Bilal Ijaz, Agha Ammar Khan Reg. Nos: 210316, 210300

Objective: Implement and analyze various OpenMP programs.

LAB ASSESSMENT:

Attributes (rated Excellent (5), Good (4), Average (3), Satisfactory (2), Unsatisfactory (1)):

Ability to Conduct Experiment
Ability to assimilate the results
Effective use of lab equipment and follows the lab safety rules

Total Marks: Obtained Marks:

LAB REPORT ASSESSMENT:

Attributes (rated Excellent (5), Good (4), Average (3), Satisfactory (2), Unsatisfactory (1)):

Data Presentation
Experiment Results
Conclusion

Total Marks: Obtained Marks:

Date: 18/11/2024 Signature:


LAB#07
TITLE: OpenMP & Relevant Concepts

Objective:
Implement and analyze various OpenMP programs.

Introduction:
OpenMP (Open Multi-Processing) is a parallel programming model and API (Application
Programming Interface) designed to simplify the development of parallel applications in C, C++,
and Fortran. It allows programmers to write parallel code using compiler directives, runtime
libraries, and environment variables without needing to manage the low-level details of
parallelism (e.g., thread management and synchronization).
Here are the key points about OpenMP:
1. Parallel Programming:
• OpenMP provides a simple way to write multi-threaded programs. It allows you
to split a program’s execution across multiple processors or cores in a system.
• The primary goal of OpenMP is to make it easier for developers to leverage
multiple CPU cores for parallel execution of computationally intensive tasks.
2. Directives:
• OpenMP uses compiler directives (pragmas) to indicate which portions of the
code should be executed in parallel. These directives are inserted directly into
the code and tell the compiler how to split work between threads.
• Common OpenMP directives include:
o #pragma omp parallel: Tells the compiler to execute a block of code in
parallel.
o #pragma omp for: Specifies a loop to be executed in parallel.
o #pragma omp critical: Specifies a section of code that should only be
executed by one thread at a time to prevent race conditions.
o #pragma omp sections: Allows dividing work into different sections, with
each section executed by a different thread.
3. Shared and Private Variables:
• In OpenMP, variables can be shared (accessible by all threads) or private (each
thread gets its own copy). This distinction helps manage data consistency and
avoid race conditions.
• shared: Declares that a variable will be shared by all threads.
• private: Declares that each thread will have its own copy of the variable.
4. Runtime Library:
• OpenMP also provides runtime library functions to control parallel execution,
such as omp_get_num_threads() (to get the number of threads) and
omp_set_num_threads() (to set the number of threads).
5. Environment Variables:
• You can control certain OpenMP parameters through environment variables,
such as OMP_NUM_THREADS to set the number of threads used in parallel
regions.

Example:

• #pragma omp parallel for tells the compiler to parallelize the for loop.
• reduction(+:sum) ensures that the variable sum is safely updated by each thread
and then combines the results at the end.
Advantages of OpenMP:
• Ease of Use: OpenMP uses simple directives that don’t require a complete rewrite of
serial code.
• Portability: OpenMP code works on different platforms (e.g., multi-core CPUs) and is
supported by many compilers.
• Flexibility: It offers fine-grained control over scheduling and data sharing within
shared-memory systems. (OpenMP itself targets shared memory; distributed-memory
systems typically use MPI, often combined with OpenMP on each node.)

How OpenMP Works:


• When an OpenMP program runs, the operating system or the OpenMP runtime splits
the work among multiple threads. These threads execute in parallel, potentially
improving performance on multi-core systems.

Working with OpenMP:


• Compile with OpenMP Support: Make sure to compile your program with OpenMP
enabled by using the -fopenmp flag if you’re using GCC or Clang. For example:

• Control the Number of Threads: You can set the number of threads either
programmatically or through the OMP_NUM_THREADS environment variable.

Summary:
OpenMP is a popular, high-level API for parallel programming that makes it easier for developers to take
advantage of multiple processors or cores in shared-memory systems. Through simple compiler
directives, developers can parallelize their code without needing to manage threads directly.
Lab Tasks:

Task1:

Code and Output:


Task2:

Code and Output:


Task3:

Code and Output:


Task4:

Code and Output:


Conclusion:
The lab on OpenMP and Relevant Concepts provided a practical understanding of parallel
programming using the OpenMP API. By implementing and analyzing various OpenMP programs,
we explored key concepts such as thread creation, synchronization, and parallelization of loops.
Specific functionalities such as omp_get_thread_num(), omp_set_num_threads(), and the for
directive were examined. Through hands-on practice, we observed how OpenMP simplifies
multi-threading by abstracting the complexity of thread management while offering flexibility
for performance optimization. Overall, the lab provided valuable insights into leveraging
OpenMP for developing efficient, scalable, parallel applications.
