PDC Assignment 04


Muhammad Abubakar

L1F21BSCS0003

Question 1

Part a) One-to-All Broadcast in Hypercube

For the 4-dimensional hypercube (16 nodes labeled 0000 to 1111) with node 0000 as the source:

1. Iteration 1:
a. 0000 sends to 0001 (dimension 0)
2. Iteration 2:
a. 0000 sends to 0010 (dimension 1)
b. 0001 sends to 0011 (dimension 1)
3. Iteration 3:
a. 0000 sends to 0100 (dimension 2)
b. 0001 sends to 0101 (dimension 2)
c. 0010 sends to 0110 (dimension 2)
d. 0011 sends to 0111 (dimension 2)
4. Iteration 4:
a. 0000 sends to 1000 (dimension 3)
b. 0001 sends to 1001 (dimension 3)
c. 0010 sends to 1010 (dimension 3)
d. 0011 sends to 1011 (dimension 3)
e. 0100 sends to 1100 (dimension 3)
f. 0101 sends to 1101 (dimension 3)
g. 0110 sends to 1110 (dimension 3)
h. 0111 sends to 1111 (dimension 3)

Minimum number of iterations required for the one-to-all broadcast: 4 (log2(16) = 4, one iteration per dimension)
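The pattern generalizes: in iteration i, every node whose label is below 2^(i-1) already holds the message and forwards it across dimension i-1, so the number of informed nodes doubles each step. As a quick check, here is a minimal standalone C sketch (node labels taken as plain integers 0-15) that enumerates these sender/receiver pairs:

#include <stdio.h>

int main(void) {
    for (int dim = 0; dim < 4; dim++) {
        printf("Iteration %d (dimension %d):\n", dim + 1, dim);
        /* every node below 2^dim already holds the message and forwards
           it to the partner whose label has bit dim set */
        for (int node = 0; node < (1 << dim); node++)
            printf("  node %d sends to node %d\n", node, node | (1 << dim));
    }
    return 0;
}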

Part b) One-to-All Scatter Operation

Steps to scatter array A = {a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p} from node 0000:

1. Initial State: Node 0000 has the entire array A
2. Iteration 1 (Dimension 0):
a. 0000 sends {i-p} to 0001, keeps {a-h}
3. Iteration 2 (Dimension 1):
a. 0000 sends {e-h} to 0010, keeps {a-d}
b. 0001 sends {m-p} to 0011, keeps {i-l}
4. Iteration 3 (Dimension 2):
a. 0000 sends {c,d} to 0100, keeps {a,b}
b. 0010 sends {g,h} to 0110, keeps {e,f}
c. 0001 sends {k,l} to 0101, keeps {i,j}
d. 0011 sends {o,p} to 0111, keeps {m,n}
5. Iteration 4 (Dimension 3):
a. 0000 sends 'b' to 1000, keeps 'a'
b. 0100 sends 'd' to 1100, keeps 'c'
c. 0010 sends 'f' to 1010, keeps 'e'
d. 0110 sends 'h' to 1110, keeps 'g'
e. 0001 sends 'j' to 1001, keeps 'i'
f. 0101 sends 'l' to 1101, keeps 'k'
g. 0011 sends 'n' to 1011, keeps 'm'
h. 0111 sends 'p' to 1111, keeps 'o'
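At every step each holder keeps the lower half of its current slice and sends the upper half across the current dimension, so after four steps every node holds exactly one element. The halving above can be checked mechanically with a small standalone C simulation (letters a-p mapped to array indices 0-15) that tracks each node's slice through the four iterations:

#include <stdio.h>

int main(void) {
    /* start[r], count[r]: the slice of A currently held by node r;
       initially node 0 holds all 16 elements */
    int start[16] = {0}, count[16] = {0};
    count[0] = 16;
    for (int dim = 0; dim < 4; dim++) {
        for (int r = 0; r < 16; r++) {
            if (count[r] > 1 && (r & (1 << dim)) == 0) {
                int half = count[r] / 2;
                int partner = r | (1 << dim);
                /* keep the lower half, send the upper half to the partner */
                start[partner] = start[r] + half;
                count[partner] = half;
                count[r] = half;
            }
        }
    }
    for (int r = 0; r < 16; r++)
        printf("node %2d keeps '%c'\n", r, 'a' + start[r]);
    return 0;
}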

Question 2

MPI program implementing the hypercube one-to-all broadcast and an all-to-all exchange (it requires exactly 16 processes):

#include <mpi.h>
#include <stdio.h>

#define DIM 4   /* hypercube dimension: 2^DIM = 16 nodes */

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 16) {
        if (rank == 0) {
            printf("This program requires exactly 16 processes\n");
        }
        MPI_Finalize();
        return 1;
    }

    /* One-to-all broadcast: only the source (rank 0) starts with the data */
    int data = 0;
    if (rank == 0) {
        data = 12345;
    }

    /* Broadcast dimension by dimension, exactly as traced in Question 1a:
       at step dim, every rank below 2^dim already holds the data and
       forwards it to its partner across dimension dim. */
    for (int dim = 0; dim < DIM; dim++) {
        int mask = 1 << dim;
        int partner = rank ^ mask;
        if (rank < mask) {
            MPI_Send(&data, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
        } else if (rank < (mask << 1)) {
            MPI_Recv(&data, 1, MPI_INT, partner, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        /* ranks >= 2^(dim+1) are idle in this step */
    }

printf("Process %d received broadcast data: %d\n", rank, data);

    /* All-to-all broadcast (allgather) on the hypercube: each process
       contributes one value; at step dim, partners exchange the block of
       2^dim values accumulated so far and append the received block. */
    int my_data = rank + 100;
    int all_data[16];
    all_data[0] = my_data;

    for (int dim = 0; dim < DIM; dim++) {
        int count = 1 << dim;      /* values held before this step */
        int partner = rank ^ count;
        MPI_Sendrecv(all_data, count, MPI_INT, partner, 0,
                     all_data + count, count, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    /* The single call MPI_Allgather(&my_data, 1, MPI_INT, all_data, 1,
       MPI_INT, MPI_COMM_WORLD) gathers the same 16 values, in rank order. */

printf("Process %d has all-to-all data: ", rank);


for (int i = 0; i < size; i++) {
printf("%d ", all_data[i]);
}
printf("\n");

MPI_Finalize();
return 0;
