
CH – 2 : Message Passing Computing

What is Message Passing Computing? Explain passing a message between processes using send() and recv() library calls.
Message Passing Computing is a parallel programming paradigm in which processes communicate by sending and receiving messages. In this model, processes running on different processors (or nodes) exchange data and coordinate their actions through explicit message passing operations. These operations typically involve two primary functions: sending a message (often abbreviated as "send") and receiving a message (often abbreviated as "recv"). Here's an explanation of how messages are passed between processes using the send() and recv() library calls:
1. Sending a Message (send()):
The send() function is used to send a message from one process to another. It typically takes the following parameters:
 Destination Process: Specifies the identifier or address of the destination process to which the message will be sent.
 Message Buffer: Contains the data to be sent. This can be a pointer to a buffer in memory or an array of data elements.
 Message Size: Specifies the size of the message to be sent, usually in bytes.
 Message Tag: Optional parameter used to label or identify the message. Tags can be used by the receiver to differentiate between different types of messages.
2. Receiving a Message (recv()):
The recv() function is used to receive a message from another process. It typically takes the following parameters:
 Source Process: Specifies the identifier or address of the source process from which the message will be received. This parameter is often set to a wildcard value to receive messages from any source.
 Message Buffer: Points to a buffer in memory where the received message will be stored.
 Message Size: Specifies the maximum size of the buffer, indicating the maximum size of the message that can be received without overflow.
 Message Tag: Optional parameter used to filter messages based on their tags. If specified, the recv() function will only receive messages with matching tags.
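As one concrete realization of these parameters (an assumption; the text above names no specific library), MPI's C bindings arrange them as follows:

int MPI_Send(const void *buf, int count, MPI_Datatype type,  /* message buffer and size */
             int dest, int tag, MPI_Comm comm);              /* destination process and tag */
int MPI_Recv(void *buf, int count, MPI_Datatype type,        /* buffer and its maximum size */
             int source, int tag, MPI_Comm comm,             /* source process and tag */
             MPI_Status *status);

Here MPI_ANY_SOURCE and MPI_ANY_TAG play the wildcard roles described above.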



Example Scenario:
Suppose we have two processes, Process A and Process B, running on different nodes in a distributed system. Process A wants to send a message to Process B:
1. Process A Calls send():
 Process A calls the send() function, providing the identifier or address of Process B as the destination, a buffer containing the message data, the size of the message, and an optional tag if needed.
 The send() function packages the message data and sends it over the network to Process B.
2. Process B Calls recv():
 Process B calls the recv() function, specifying Process A as the source (or using a wildcard to receive from any source), a buffer to store the received message, the maximum size of the buffer, and an optional tag filter.
 The recv() function receives the message sent by Process A, stores it in the specified buffer, and makes it available for Process B to access and process.
3. Message Processing:
 Process B can now access the received message in its buffer, extract the data, and perform any necessary processing based on the contents of the message.
 Once Process B has finished processing the message, it can continue its execution or send a response message back to Process A if needed.
By using the send() and recv() library calls, processes can communicate effectively in a message-passing computing environment, enabling parallel execution and coordination across distributed systems.
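A minimal runnable sketch of this scenario, again assuming MPI, with rank 0 playing Process A and rank 1 playing Process B:

/* Build and run, e.g.: mpicc msg.c -o msg && mpiexec -n 2 ./msg
 * (file name hypothetical) */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* Process A */
        const char msg[] = "hello from A";
        /* destination = rank 1, tag = 0 */
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {                /* Process B */
        char buf[64];                      /* maximum receivable size */
        MPI_Status status;
        /* source = rank 0 (or MPI_ANY_SOURCE as a wildcard), tag = 0 */
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("B received: %s\n", buf);   /* message processing step */
    }

    MPI_Finalize();
    return 0;
}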
What is a message passing routine?
A message passing routine is a subroutine or function provided by a programming library or framework that facilitates communication between processes in a parallel computing environment. These routines are used to send and receive messages between processes, enabling coordination, data exchange, and synchronization in distributed or parallel systems.
Message passing routines typically provide an interface for developers to perform operations such as sending messages from one process to another, receiving messages, and handling communication-related tasks. These routines are often part of message passing libraries or middleware that abstract the underlying communication protocols and provide a standardized way for processes to interact.



Examples of message passing routines include:
1. send(): Sends a message from the current process to a specified destination process.
2. recv(): Receives a message from another process and stores it in a designated buffer.
3. broadcast(): Sends a message from one process to all other processes in the system.
4. scatter(): Distributes data elements from one process to multiple processes in a collective operation.
5. gather(): Collects data elements from multiple processes into a single process in a collective operation.
6. reduce(): Combines data elements from multiple processes using a specified operation (e.g., sum, maximum, minimum) and stores the result in a designated process.
These routines are essential building blocks for developing parallel and distributed applications, as they provide a means for processes to exchange information and collaborate effectively. By using message passing routines, developers can implement parallel algorithms, distribute computational tasks across multiple processes, and orchestrate complex parallel workflows in distributed computing environments.
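A sketch of how these routines look in one concrete library, MPI's C API (an assumed mapping: broadcast, scatter, gather, and reduce correspond to MPI_Bcast, MPI_Scatter, MPI_Gather, and MPI_Reduce; MPI_Init is taken to have been called already):

#include <mpi.h>
#include <stdlib.h>

void collectives_demo(void) {
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int value = 42;
    /* broadcast(): rank 0's value is copied to every other process */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int *all = NULL;
    if (rank == 0) {                 /* root buffer for scatter/gather */
        all = malloc(nprocs * sizeof(int));
        for (int i = 0; i < nprocs; i++) all[i] = i;
    }
    int chunk;
    /* scatter(): root distributes one element to each process */
    MPI_Scatter(all, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);
    chunk *= 2;                      /* each process works on its piece */
    /* gather(): root collects one element back from each process */
    MPI_Gather(&chunk, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int sum;
    /* reduce(): combine one value per process (here with MPI_SUM) at rank 0 */
    MPI_Reduce(&chunk, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    free(all);                       /* free(NULL) is safe on non-root ranks */
}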

Explain process creation using the MPMD model and the SPMD model in detail.
Process creation in parallel and distributed computing can follow different models, including the Multiple Program, Multiple Data (MPMD) model and the Single Program, Multiple Data (SPMD) model. Let's explore each model in detail:
MPMD Model (Multiple Program, Multiple Data):
In the MPMD model, multiple programs are executed concurrently by multiple processes, each performing different tasks and operating on different data. Here's how process creation works in the MPMD model:
1. Different Programs: In the MPMD model, each process executes a different program, which may have distinct functionalities or algorithms. These programs could be written in different programming languages or implement different computational tasks.
2. Independent Execution: Each process operates independently, running its designated program without interaction with other processes initially.
3. Diverse Data Inputs: The programs executed by different processes may operate on different sets of input data, leading to diverse computation across processes.



4. Asynchronous Communication: Communication between processes in the MPMD model is typically asynchronous, meaning that processes can exchange data or messages at any time during their execution.
5. Example: An example of the MPMD model is a distributed computing system where different nodes execute distinct computational tasks. For instance, one node might perform data preprocessing, while another node executes a machine learning algorithm on the preprocessed data, as sketched below.
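A minimal MPMD sketch, assuming MPI (the text above names no toolchain): the MPI standard's mpiexec accepts a colon-separated MPMD launch, and the program and file names below are hypothetical.

/* MPMD launch: two different executables started as one job, e.g.
 *   mpiexec -n 1 ./preprocess : -n 4 ./train
 * preprocess.c (below) and train.c are built separately; every process
 * still shares one MPI_COMM_WORLD and gets a unique rank in it. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {          /* preprocess.c */
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("preprocess running as world rank %d\n", rank);
    /* ... clean the raw input, then send results on to the train ranks ... */
    MPI_Finalize();
    return 0;
}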
SPMD Model (Single Program, Multiple Data):
In the SPMD model, multiple processes execute the same program concurrently, but each process operates on different portions of the input data. Here's how process creation works in the SPMD model:
1. Same Program: All processes execute the same program, which contains the logic for the computation to be performed. This program may include conditional branches or loops to handle different portions of the data.
2. Data Parallelism: Each process operates on a different subset of the input data, allowing for parallel execution of the same computation on multiple data elements.
3. Synchronization Points: Processes in the SPMD model often synchronize at certain points during execution, such as before accessing shared resources or when coordinating results.
4. Data Distribution: Input data is typically partitioned or distributed among processes before execution begins. Each process works on its assigned portion of the data independently.
5. Example: An example of the SPMD model is a parallelized image processing application where each process applies the same image filter algorithm to different parts of an image. Each process operates independently on its assigned portion of the image data (see the sketch below).
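A minimal SPMD sketch, assuming MPI: every process runs this one program, and the rank it obtains at startup determines which slice of the (illustrative) data it computes on.

#include <mpi.h>
#include <stdio.h>
#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* data distribution: rank r owns elements [lo, hi) */
    int lo = rank * (N / nprocs);
    int hi = (rank == nprocs - 1) ? N : lo + N / nprocs;

    double local = 0.0;
    for (int i = lo; i < hi; i++)      /* same computation, different data */
        local += (double)i;

    double total;
    /* synchronization point: combine the partial results at rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %.0f\n", total);
    MPI_Finalize();
    return 0;
}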

Explain Synchronous and Asynchronous Message Passing in detail with differences.

Synchronous and asynchronous message passing are two different approaches to communication between processes in parallel and distributed computing systems. Let's delve into each approach in detail, highlighting their characteristics, differences, and use cases:
Synchronous Message Passing:
Characteristics:



1. Blocking Operation: In synchronous message passing, the sender blocks until the receiver acknowledges receipt of the message.
2. Ordered Communication: Messages are sent and received in a specific order, preserving the sequential execution of operations.
3. Implicit Synchronization: Synchronous message passing provides implicit synchronization between sender and receiver processes. The sender waits for acknowledgment from the receiver before proceeding with further computations.
Example Scenario: In a synchronous communication scenario, a client sends a request to a server and waits for the server to process the request and send a response back. The client blocks until it receives the response from the server, ensuring that it processes the server's response in the correct order.
Asynchronous Message Passing:
Characteristics:
1. Non-Blocking Operation: In asynchronous message passing, the sender continues execution immediately after sending the message, without waiting for the receiver's acknowledgment.
2. Unordered Communication: Messages may be sent and received in arbitrary order, without strict sequencing constraints.
3. Explicit Synchronization: Asynchronous message passing requires explicit synchronization mechanisms to coordinate communication between sender and receiver processes.
Example Scenario: In an asynchronous communication scenario, a producer process generates data and sends it to a consumer process for processing. The producer continues generating data and sending it asynchronously to the consumer, without waiting for the consumer to process each message immediately.
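A sketch of the two styles, assuming MPI: MPI_Ssend is a synchronous send that completes only after the matching receive has started, while MPI_Isend returns immediately and completion is synchronized explicitly with MPI_Wait. (The destination rank is assumed to post matching receives.)

#include <mpi.h>

void send_styles(int data, int dest) {
    /* synchronous: blocks until the receiver has begun receiving */
    MPI_Ssend(&data, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);

    /* asynchronous: returns at once; explicit synchronization via MPI_Wait */
    MPI_Request req;
    MPI_Isend(&data, 1, MPI_INT, dest, 1, MPI_COMM_WORLD, &req);
    /* ... sender keeps computing here while the message is in flight ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete before reusing 'data' */
}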

Explain Blocking and Nonblocking Message Passing in detail with differences.

Blocking and nonblocking message passing are two different approaches to communication between processes in parallel and distributed computing systems. Let's explore each approach in detail, highlighting their characteristics, differences, and use cases:
Blocking Message Passing:
Characteristics:



1. Blocking Operation: In blocking message passing, the sender blocks until the message is successfully sent or received by the receiver.
2. Synchronous Communication: Blocking message passing often implies synchronous communication, where the sender and receiver synchronize their operations through message exchange.
3. Implicit Synchronization: Blocking message passing provides implicit synchronization between sender and receiver processes. The sender waits until the receiver is ready to receive the message, and vice versa.
Workflow:
1. Sending Process: The sending process initiates the communication by sending a message to the receiving process.
2. Blocking Send: After sending the message, the sender blocks until the message is successfully transmitted to the receiver or a specified timeout occurs.
3. Receiving Process: The receiving process waits to receive the message sent by the sender.
4. Blocking Receive: The receiver blocks until the message arrives or a specified timeout occurs. Once the message is received, the receiver processes it and continues its execution.
Example Scenario: In a blocking communication scenario, a client sends a request to a server and waits for the server to process the request and send a response back. The client blocks until it receives the response from the server, ensuring synchronous and orderly communication.
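A sketch of this blocking client/server exchange, assuming MPI, whose MPI_Send and MPI_Recv are blocking calls (the ranks and the doubling "service" are illustrative):

#include <mpi.h>

void client_server_roundtrip(int rank) {
    int request = 7, reply;
    if (rank == 0) {                       /* client */
        MPI_Send(&request, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* blocks here until the server's response arrives */
        MPI_Recv(&reply, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {                /* server */
        MPI_Recv(&request, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        reply = request * 2;               /* process the request */
        MPI_Send(&reply, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }
}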
Nonblocking Message Passing:
Characteristics:
1. Non-Blocking Operation: In nonblocking message passing, the sender continues execution immediately after sending the message, without waiting for the receiver's acknowledgment.
2. Asynchronous Communication: Nonblocking message passing often implies asynchronous communication, where the sender and receiver operate independently of each other.
Workflow:
1. Sending Process: The sending process initiates the communication by sending a message to the receiving process.



2. Non-Blocking Send: After sending the message, the sender continues execution immediately without waiting for acknowledgment from the receiver.
3. Receiving Process: The receiving process continues its execution and may check for the arrival of messages at a later time.
4. Non-Blocking Receive: The receiver checks for the arrival of messages periodically or asynchronously. If a message is available, it processes it; otherwise, it continues its execution without blocking.
Example Scenario: In a nonblocking communication scenario, a producer process generates data and sends it to a consumer process for processing. The producer continues generating data and sending it asynchronously to the consumer, without waiting for the consumer to process each message immediately.
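A sketch of workflow step 4, assuming MPI: MPI_Irecv posts a nonblocking receive and MPI_Test checks for arrival without blocking, so the process can keep doing useful work between checks.

#include <mpi.h>

void poll_for_message(void) {
    int payload;
    MPI_Request req;
    /* nonblocking receive: post it and return immediately */
    MPI_Irecv(&payload, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &req);

    int done = 0;
    while (!done) {
        /* check for arrival without blocking */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done) {
            /* ... continue other useful work while waiting ... */
        }
    }
    /* 'payload' is now valid and can be processed */
}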

Explain Debugging and Evaluating Parallel Programs Empirically in detail.

Debugging and evaluating parallel programs are critical tasks in parallel and distributed computing to ensure correctness, performance, and efficiency. Let's delve into each aspect in detail:
Debugging Parallel Programs:
Debugging parallel programs involves identifying and fixing errors or bugs that occur during execution. Due to the concurrent and distributed nature of parallel programs, debugging can be more challenging compared to sequential programs. Here's an overview of the debugging process:
1. Identifying Bugs: Detecting bugs in parallel programs can be challenging due to non-deterministic behavior, race conditions, deadlocks, and synchronization issues. Debugging tools and techniques such as logging, tracing, and runtime monitoring are used to identify bugs.
2. Reproducing Errors: Reproducing errors in parallel programs can be difficult due to their non-deterministic nature. Developers may use techniques such as random input generation, stress testing, or running the program with different configurations to reproduce errors consistently.
3. Debugging Tools: Specialized debugging tools and libraries are available for debugging parallel programs. These tools provide features such as breakpoints, watchpoints, stack traces, and variable inspection to help developers identify and diagnose issues.
4. Parallel Debugging Techniques: Parallel debugging techniques involve analyzing program execution across multiple processes or threads. Techniques such as distributed breakpoints, parallel watchpoints, and message tracing are used to debug parallel programs effectively.
5. Testing and Validation: Thorough testing and validation are essential for debugging parallel programs. Unit tests, integration tests, regression tests, and stress tests help verify the correctness and reliability of parallel code.
6. Collaborative Debugging: Collaborative debugging involves multiple developers working together to debug parallel programs. Techniques such as pair programming, code reviews, and remote debugging sessions facilitate collaboration and knowledge sharing.
Evaluating Parallel Programs Empirically:
Evaluating parallel programs empirically involves assessing their performance, scalability, efficiency, and other characteristics through experimentation and measurement. Here's how it's done:
1. Performance Metrics: Define performance metrics to measure the effectiveness of parallel programs, such as execution time, speedup, scalability, throughput, and resource utilization (the standard formulas are given after this list).
2. Experimental Setup: Design experiments to evaluate the performance of parallel programs under different conditions, including varying input sizes, workload distributions, and hardware configurations. Use real-world datasets or synthetic data for experimentation.
3. Benchmarking: Compare the performance of parallel programs against baseline benchmarks or reference implementations. Benchmark suites and standardized benchmarks help establish performance baselines and facilitate fair comparisons.
4. Profiling and Monitoring: Use profiling tools and performance monitoring utilities to analyze the behavior of parallel programs during execution. Identify performance bottlenecks, resource contention, and hotspots that impact program performance.
5. Scalability Analysis: Assess the scalability of parallel programs by measuring their performance as the problem size or number of processing units (nodes or cores) increases. Evaluate strong scalability and weak scalability under different conditions.
6. Optimization Strategies: Identify opportunities for optimization based on empirical evaluation results. Techniques such as parallelization strategies, load balancing, algorithmic improvements, and system tuning can improve the performance and efficiency of parallel programs.
7. Iterative Improvement: Iterate on the design, implementation, and evaluation of parallel programs based on empirical results. Fine-tune parameters, adjust algorithms, and optimize code to achieve desired performance goals.
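The metrics in item 1 have standard textbook definitions, added here for reference, where $T(p)$ denotes execution time on $p$ processors:

\[
S(p) = \frac{T(1)}{T(p)} \quad \text{(speedup)}, \qquad
E(p) = \frac{S(p)}{p} \quad \text{(efficiency)}
\]

Amdahl's law bounds the achievable speedup when a fraction $f$ of the work is inherently serial:

\[
S(p) \le \frac{1}{f + (1 - f)/p}
\]

For example, with $f = 0.1$ the speedup can never exceed 10, no matter how many processors are used.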
Made By Rohit Makwana
