
VIDUSH SOMANY INSTITUTE OF TECHNOLOGY AND RESEARCH, KADI

KADI SARVA VISHWAVIDYALAYA, GANDHINAGAR


Lab Manual
in
Distributed and Parallel Computing
(IT801-N)
8th Semester

ENROLLMENT NO:

NAME:

BRANCH: INFORMATION TECHNOLOGY


CERTIFICATE

This is to certify that the practical/term work carried out in the subject of
Distributed and Parallel Computing [IT801-N] and recorded in
this journal is the bonafide work of
Mr./Miss. _________________ Enrollment No: ___________ of
B.E. Information Technology Engineering, 4th Year, 8th Semester,
in the branch of IT during the academic year 2022-2023 within the
four walls of this institute.

Faculty in charge          Date          Head of the Department


INDEX

NO.  Name of Experiment                                          Date  Grade  Sign

1.  Parallel Odd-Even Transposition Sort
2.  Write a sample MPI program to print Hello World.
3.  Write a program to implement MPI Barrier and synchronization.
4.  Write a parallel program to sum an array.
5.  Divide and conquer algorithm implementation using C.
6.  Write an OpenMP program to print Hello World.
7.  Write an OpenMP program to synchronize the threads.
8.  Implement inter-process communication through shared memory.
9.  Implement semaphore in process synchronization.
10. Write a program to implement a Calculator using RMI.
PRACTICAL 1 - Distributed and Parallel Computing (IT801-N)

Aim : Parallel Odd‐Even Transposition Sort using Pthread.

#include <bits/stdc++.h>
#include <pthread.h>

using namespace std;

#define n 8

int max_threads = (n + 1) / 2;

int a[] = { 2, 1, 4, 9, 5, 3, 6, 10 };
int tmp;

// Each thread compares one adjacent pair and swaps it if out of order.
void* compare(void* arg)
{
    int index = tmp;
    tmp = tmp + 2;

    // Check the bound before touching a[index + 1].
    if ((index + 1 < n) && (a[index] > a[index + 1])) {
        swap(a[index], a[index + 1]);
    }
    return NULL;
}

void oddEven(pthread_t threads[])
{
    int i, j;
    for (i = 1; i <= n; i++) {
        if (i % 2 == 1) {            // odd phase: pairs (0,1), (2,3), ...
            tmp = 0;
            for (j = 0; j < max_threads; j++)
                pthread_create(&threads[j], NULL, compare, NULL);
            for (j = 0; j < max_threads; j++)
                pthread_join(threads[j], NULL);
        }
        else {                       // even phase: pairs (1,2), (3,4), ...
            tmp = 1;
            for (j = 0; j < max_threads - 1; j++)
                pthread_create(&threads[j], NULL, compare, NULL);
            for (j = 0; j < max_threads - 1; j++)
                pthread_join(threads[j], NULL);
        }
    }
}

void printArray()
{
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    cout << endl;
}

int main()
{
    pthread_t threads[max_threads];
    cout << "Given array is: ";
    printArray();
    oddEven(threads);
    cout << "\nSorted array is: ";
    printArray();
    return 0;
}

Output :

PRACTICAL 2

Aim : Write a sample MPI Program to print Hello World.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    printf("Hello world\n");

    MPI_Finalize();
    return 0;
}

PRACTICAL 3

Aim : Write a Program to implement MPI Barrier and Synchronization.


#include <stdio.h>    // printf()
#include <mpi.h>      // MPI
#include <unistd.h>   // usleep()

#define MASTER 0

// Simulate uneven per-process work: each rank sleeps a
// different fraction of a second.
int solveProblem(int id, int numProcs) {
    usleep(1000000L * (id + 1) / numProcs);
    return 42;
}

int main(int argc, char** argv) {

    int id = -1, numProcesses = -1;
    double startTime = 0.0, localTime = 0.0, totalTime = 0.0;
    int answer = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);

    // Synchronize all processes before starting the timer.
    MPI_Barrier(MPI_COMM_WORLD);
    startTime = MPI_Wtime();

    answer = solveProblem(id, numProcesses);

    localTime = MPI_Wtime() - startTime;

    // The total time is the maximum of the local times.
    MPI_Reduce(&localTime, &totalTime, 1,
               MPI_DOUBLE, MPI_MAX, MASTER, MPI_COMM_WORLD);

    if (id == MASTER) {
        printf("\nThe answer is %d; computing it took %f secs.\n\n",
               answer, totalTime);
    }

    MPI_Finalize();
    return 0;
}


Output :

PRACTICAL 4

Aim : Write a parallel program to sum an array.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define n 10

int a[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

int a2[1000];

int main(int argc, char* argv[])
{
    int pid, np,
        elements_per_process,
        n_elements_received;

    MPI_Status status;
    MPI_Init(&argc, &argv);

    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    if (pid == 0) {
        int index, i;
        elements_per_process = n / np;

        if (np > 1) {
            // Send an equal chunk to every worker except the last one.
            for (i = 1; i < np - 1; i++) {
                index = i * elements_per_process;

                MPI_Send(&elements_per_process, 1, MPI_INT,
                         i, 0, MPI_COMM_WORLD);
                MPI_Send(&a[index], elements_per_process, MPI_INT,
                         i, 0, MPI_COMM_WORLD);
            }

            // The last process takes whatever elements remain.
            index = i * elements_per_process;
            int elements_left = n - index;

            MPI_Send(&elements_left, 1, MPI_INT,
                     i, 0, MPI_COMM_WORLD);
            MPI_Send(&a[index], elements_left, MPI_INT,
                     i, 0, MPI_COMM_WORLD);
        }

        // The master sums its own chunk.
        int sum = 0;
        for (i = 0; i < elements_per_process; i++)
            sum += a[i];

        // Collect the partial sums from all workers.
        int tmp;
        for (i = 1; i < np; i++) {
            MPI_Recv(&tmp, 1, MPI_INT,
                     MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            sum += tmp;
        }

        printf("Sum of array is : %d\n", sum);
    }
    else {
        // Workers receive their chunk size, then the chunk itself.
        MPI_Recv(&n_elements_received, 1, MPI_INT,
                 0, 0, MPI_COMM_WORLD, &status);

        MPI_Recv(&a2, n_elements_received, MPI_INT,
                 0, 0, MPI_COMM_WORLD, &status);

        int partial_sum = 0;
        for (int i = 0; i < n_elements_received; i++)
            partial_sum += a2[i];

        MPI_Send(&partial_sum, 1, MPI_INT,
                 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();

    return 0;
}

Output :

PRACTICAL 5

Aim: Divide and conquer algorithm implementation using C

#include <stdio.h>

// Recursively find the maximum element in a[index..l-1].
int DAC_Max(int a[], int index, int l)
{
    int max;

    // Base case: two elements left.
    if (index >= l - 2) {
        if (a[index] > a[index + 1])
            return a[index];
        else
            return a[index + 1];
    }

    // Recurse on the rest of the array, then combine:
    // the answer is the larger of a[index] and the maximum of the rest.
    max = DAC_Max(a, index + 1, l);

    if (a[index] > max)
        return a[index];
    else
        return max;
}

int main()
{
    int a[7] = { 70, 250, 50, 80, 140, 12, 14 };
    int max = DAC_Max(a, 0, 7);
    printf("The maximum number in a given array is : %d\n", max);
    return 0;
}

PRACTICAL 6

Aim : Write an OpenMP program to print Hello World.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    // Each thread in the parallel region prints its own line.
    #pragma omp parallel
    {
        printf("Hello World...\n");
    }

    return 0;
}

PRACTICAL 7

Aim : Write an OpenMP program to synchronize the threads.

#include <unistd.h>
#include <stdlib.h>
#include <omp.h>
#include <stdio.h>

#define THREADS 8

void worker() {
    int id = omp_get_thread_num();

    printf("Thread %d starting!\n", id);

    sleep(id);
    printf("Thread %d is done its work!\n", id);

    // No thread passes the barrier until every thread has reached it.
    #pragma omp barrier

    printf("Thread %d is past the barrier!\n", id);
}

int main() {

    #pragma omp parallel num_threads(THREADS)
    worker();

    return 0;
}

PRACTICAL 8

Aim: Implement inter-process communication through shared memory.

SHARED MEMORY FOR WRITER PROCESS

#include <iostream>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
using namespace std;

int main()
{
    // ftok to generate a unique key
    key_t key = ftok("shmfile", 65);

    // shmget returns an identifier in shmid
    int shmid = shmget(key, 1024, 0666 | IPC_CREAT);

    // shmat to attach to shared memory
    char *str = (char*) shmat(shmid, (void*)0, 0);

    cout << "Write Data : ";
    fgets(str, 1024, stdin);   // gets() is unsafe and was removed in C11

    printf("Data written in memory: %s\n", str);

    // detach from shared memory
    shmdt(str);

    return 0;
}

SHARED MEMORY FOR READER PROCESS

#include <iostream>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
using namespace std;

int main()
{
    // ftok to generate the same key as the writer
    key_t key = ftok("shmfile", 65);

    // shmget returns an identifier in shmid
    int shmid = shmget(key, 1024, 0666 | IPC_CREAT);

    // shmat to attach to shared memory
    char *str = (char*) shmat(shmid, (void*)0, 0);

    printf("Data read from memory: %s\n", str);

    // detach from shared memory
    shmdt(str);

    // destroy the shared memory segment
    shmctl(shmid, IPC_RMID, NULL);

    return 0;
}

PRACTICAL 9

Aim: Implement Semaphore in process synchronization.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t mutex;

void* thread(void* arg)
{
    // wait: enter the critical section
    sem_wait(&mutex);
    printf("\nEntered thread\n");

    // critical section
    sleep(4);

    // signal: leave the critical section
    printf("\nExit thread\n");
    sem_post(&mutex);
    return NULL;
}

int main()
{
    sem_init(&mutex, 0, 1);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread, NULL);
    sleep(2);
    pthread_create(&t2, NULL, thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return 0;
}

Output :

PRACTICAL 10

Aim: Write a program to implement Calculator using RMI.

 
Calc.java

import java.rmi.Remote;
import java.rmi.RemoteException;
public interface calc extends Remote
{
public long addition(long a,long b)throws RemoteException; public long subtraction(long
a,long b)throws RemoteException; public long multiplication(long a,long b)throws
RemoteException; public long divition(long a,long b)throws RemoteException;
}

 
Calcimpl.java

import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
public class calcimpl extends UnicastRemoteObject implements calc
{
protected calcimpl() throws RemoteException {
super();
}
public long addition(long a,long b)throws RemoteException
{
return a+b;
}
public long subtraction(long a,long b)throws RemoteException
{
return a-b;
}
public long multiplication(long a,long b)throws RemoteException
{
return a*b;
}
public long divition(long a,long b)throws RemoteException
{

Page 15 of 17
Enrollment No Distributed and Parallel Computing - IT801‐N

return a/b;
}
}

 
Calcser.java

import java.rmi.Naming;
public class calcser
{
calcserv()
{
try{
calc c=new calcimpl();
Naming.rebind("rmi://localhost:1099/calcservice",c);
}
catch(Exception e)
{
System.out.println("Exception:"+e);
}
}
public static void main(String arg[])
{
new calcserv();
}
}

 
Calcli.java

import java.rmi.Naming;
public class calcli
{
public static void main(String arg[])
{
try{
calc c=(calc)Naming.lookup("//127.0.0.1:1099/calcservice");
System.out.println("Addition:"+c.addition(50,5));
System.out.println("subtraction:"+c.subtraction(15,5));
System.out.println("Multiplication:"+c.multiplication(20,5));

Page 16 of 17
Enrollment No Distributed and Parallel Computing - IT801‐N

System.out.println("Divition:"+c.divition(50,25));
}
catch(Exception e)
{
System.out.println("Exception:"+e);
}
}
}

Steps to compile and run:

1. Compile all four files:
   javac calc.java
   javac calcimpl.java
   javac calcserv.java
   javac calcli.java

2. Use rmic to create the stub and skeleton class files:
   rmic calcimpl

3. Start the RMI registry:
   rmiregistry

4. Run the server, then the client:
   java calcserv
   java calcli

Output :
