UNIVERSITY OF SCIENCE AND TECHNOLOGY CHITTAGONG
FACULTY OF SCIENCE ENGINEERING AND TECHNOLOGY
Department of Computer Science and Engineering
Course Title: Algorithms
Course Code: CSE 221
Course Instructor: Most Tahamina Khatoon
Assistant Professor
Parallel algorithm
Parallel algorithm and concurrent
processing
• A Parallel algorithm is an algorithm that can execute several
instructions simultaneously on different processing devices
and then combine all the individual outputs to produce the
final result.
• Every day we deal with huge volumes of data that require complex computation, and in very little time.
• Sometimes, we need to fetch data from similar or interrelated events that occur simultaneously. This is where we require concurrent processing, which can divide a complex task and process it on multiple systems to produce the output quickly.
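The slides do not prescribe an implementation language; the following is a minimal Python sketch of the divide / process-in-parallel / combine idea, using the standard multiprocessing module. The worker count, chunk size, and the partial_sum helper are illustrative choices, not part of the original material.

```python
# Sketch of a parallel algorithm: divide the data, process the parts
# simultaneously on several workers, then combine the partial outputs.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker solves its own sub-problem independently.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(n_workers) as pool:
        partial_results = pool.map(partial_sum, chunks)  # sub-problems run in parallel

    total = sum(partial_results)  # combine individual outputs into the final result
    print(total)
```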
Parallel algorithm and concurrent
processing (continued)
• Concurrent processing is essential when a task involves processing huge amounts of complex data.
• Examples include − accessing large databases, aircraft testing,
astronomical calculations, atomic and nuclear physics,
biomedical analysis, economic planning, image processing,
robotics, weather forecasting, web-based services, etc.
Parallelism
• Parallelism is the process of processing several sets of instructions simultaneously.
• It reduces the total computational time. Parallelism can be implemented by using parallel computers, i.e., computers with many processors.
• Parallel computers require parallel algorithms, programming languages, compilers, and operating systems that support multitasking.
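As a rough illustration of the reduced computational time, the same CPU-bound work can be timed sequentially and with a Python process pool. This is a sketch only: the actual speedup depends on the number of available cores and on process start-up and communication overhead, and the heavy function and task sizes are invented for the demonstration.

```python
# Compare sequential execution with parallel execution of the same tasks.
import time
from multiprocessing import Pool

def heavy(n):
    # Deliberately CPU-bound work.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8

    start = time.perf_counter()
    sequential = [heavy(n) for n in tasks]   # one task at a time
    t_seq = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                     # one worker per available core
        parallel = pool.map(heavy, tasks)    # tasks run simultaneously
    t_par = time.perf_counter() - start

    assert sequential == parallel
    print(f"sequential: {t_seq:.2f}s, parallel: {t_par:.2f}s")
```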
Architecture of a Computer
• While designing an algorithm, one should consider the architecture of the computer on which the algorithm will be executed. As per the architecture, there are two types of computers −
• Sequential Computer
• Parallel Computer
Architecture of a Computer
(continued)
• Depending on the architecture of computers, we have two types of algorithms −
• Sequential Algorithm − An algorithm in which consecutive steps of instructions are executed in chronological order to solve a problem.
• Parallel Algorithm − The problem is divided into sub-problems that are executed in parallel to get individual outputs. Later on, these individual outputs are combined to get the final desired output.
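A minimal sketch contrasting the two kinds of algorithm on the same problem (finding the maximum of a list), assuming Python's standard concurrent.futures module; the function names and the choice of four sub-problems are illustrative only.

```python
# Sequential algorithm vs. parallel algorithm for the same problem.
from concurrent.futures import ProcessPoolExecutor

def sequential_max(data):
    # Sequential algorithm: consecutive steps in chronological order.
    best = data[0]
    for x in data[1:]:
        if x > best:
            best = x
    return best

def parallel_max(data, n_parts=4):
    # Parallel algorithm: split into sub-problems, solve them in parallel,
    # then combine the individual outputs.
    size = (len(data) + n_parts - 1) // n_parts
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=n_parts) as ex:
        partial_maxima = list(ex.map(max, parts))
    return max(partial_maxima)  # combine step

if __name__ == "__main__":
    data = [7, 42, 3, 99, 15, 8, 61, 27]
    assert sequential_max(data) == parallel_max(data) == 99
```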
Architecture of a Computer
(continued)
• It is not easy to divide a large problem into sub-problems. Sub-problems may have data dependencies among them. Therefore, the processors have to communicate with each other to solve the problem.
• The time the processors spend communicating with each other can exceed the actual processing time. So, while designing a parallel algorithm, proper CPU utilization should be considered to get an efficient algorithm.
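The effect of communication overhead can be sketched with Python's multiprocessing pool: when each tiny sub-problem is sent as its own message (chunksize=1), communication cost dominates, while batching the same work into larger chunks amortises it. The data size and chunk sizes below are arbitrary demonstration values.

```python
# Communication overhead demo: same work, different message granularity.
import time
from multiprocessing import Pool

def tiny_task(x):
    return x * x

if __name__ == "__main__":
    data = list(range(100_000))

    with Pool(4) as pool:
        start = time.perf_counter()
        pool.map(tiny_task, data, chunksize=1)       # many small messages
        fine = time.perf_counter() - start

        start = time.perf_counter()
        pool.map(tiny_task, data, chunksize=10_000)  # few large messages
        coarse = time.perf_counter() - start

    print(f"chunksize=1: {fine:.2f}s, chunksize=10000: {coarse:.2f}s")
```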
Model of Computation
• Both sequential and parallel computers operate on a set (stream) of instructions called an algorithm. This set of instructions (the algorithm) tells the computer what it has to do in each step.
• Depending on the instruction stream and data stream, computers can
be classified into four categories −
• Single Instruction stream, Single Data stream (SISD) computers
• Single Instruction stream, Multiple Data stream (SIMD) computers
• Multiple Instruction stream, Single Data stream (MISD) computers
• Multiple Instruction stream, Multiple Data stream (MIMD)
computers
SISD (single instruction stream, single
data stream) computers
SIMD (single instruction stream, multiple
data stream) computers
SIMD (continued)
• Here, one single control unit sends instructions to all processing units. During computation, at each step, all the processors receive a single set of instructions from the control unit and operate on different sets of data from the memory unit.
• Each of the processing units has its own local memory unit to store both data and instructions. In SIMD computers, processors need to communicate among themselves. This is done through shared memory or an interconnection network.
• While some of the processors execute a set of instructions, the remaining processors wait for their next set of instructions. Instructions from the control unit decide which processors will be active (execute instructions) and which will be inactive (wait for the next instruction).
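As an analogy only (not a hardware description), NumPy's vectorised operations follow the same single-instruction, multiple-data idea: one operation is applied element-wise to many data items at once, and a boolean mask can play the role of active and inactive processors. This sketch assumes NumPy is installed.

```python
# SIMD-flavoured sketch: one instruction applied to many data items at once.
import numpy as np

data = np.arange(16)       # the "multiple data" items
result = data * 2 + 1      # the single instruction applied to all of them

# A mask plays the role of active/inactive processors: only the selected
# elements take part in the next operation.
active = data % 2 == 0
result[active] = 0
print(result)
```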
MISD (multiple instruction stream,
single data stream) computers
MISD (continued)
• Here, each processor has its own control unit, and they share a common memory unit. All the processors get instructions individually from their own control units and operate on a single stream of data as per the instructions they have received from their respective control units. These processors operate simultaneously.
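A rough MISD-flavoured sketch in Python: several worker processes each run a different instruction stream (a different function) over the same single data stream. This is only an analogy; true MISD machines are rare, and the functions below are invented for illustration.

```python
# Different instruction streams, one data stream.
from concurrent.futures import ProcessPoolExecutor

def count_even(data):
    return sum(1 for x in data if x % 2 == 0)

def total(data):
    return sum(data)

def maximum(data):
    return max(data)

if __name__ == "__main__":
    data = [5, 2, 9, 4, 7, 6]                    # the single data stream
    instructions = [count_even, total, maximum]  # different instruction streams

    with ProcessPoolExecutor(max_workers=3) as ex:
        futures = [ex.submit(f, data) for f in instructions]
        results = [f.result() for f in futures]

    print(results)  # -> [3, 33, 9]
```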
MIMD (multiple instruction stream,
multiple data stream) computers
MIMD (continued)
• Here, each processor has its own control unit, local memory unit, and
arithmetic and logic unit. They receive different sets of instructions from
their respective control units and operate on different sets of data.
• An MIMD computer that shares a common memory is known as a multiprocessor, while one that uses an interconnection network is known as a multicomputer (a small MIMD-style sketch follows the list below).
• Based on the physical distance between the processors, multicomputers are of two types −
• Multicomputer − When all the processors are very close to one another (e.g., in the same room).
• Distributed system − When all the processors are far away from one another (e.g., in different cities).
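A small MIMD-style sketch using Python's multiprocessing module: each worker runs its own instruction stream on its own data, and a shared queue (standing in for the shared memory of a multiprocessor) carries the results back. The worker functions and their inputs are illustrative.

```python
# Different instruction streams on different data streams.
from multiprocessing import Process, Queue

def word_counter(text, out):
    out.put(("words", len(text.split())))

def squarer(numbers, out):
    out.put(("squares", [n * n for n in numbers]))

if __name__ == "__main__":
    results = Queue()
    workers = [
        Process(target=word_counter, args=("parallel algorithms in action", results)),
        Process(target=squarer, args=([1, 2, 3, 4], results)),
    ]
    for w in workers:
        w.start()
    for _ in workers:          # collect one result per worker
        print(results.get())
    for w in workers:
        w.join()
```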
Parallel Algorithm Models
• The model of a parallel algorithm is developed by choosing a strategy for dividing the data, a method for processing it, and a suitable strategy to reduce interactions between processors.
• The common parallel algorithm models are listed below (a work pool / master-slave sketch follows the list):
Data parallel model
Task graph model
Work pool model
Master-slave model
Producer-consumer or pipeline model
Hybrid model
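A compact sketch of the master-slave / work pool idea in Python: the master puts tasks into a shared pool, and slave workers repeatedly take a task, process it, and return the result until they see a sentinel. The task contents, worker count, and squaring step are illustrative only.

```python
# Master-slave / work pool sketch with a shared task queue.
from multiprocessing import JoinableQueue, Process, Queue

def slave(tasks, results):
    while True:
        item = tasks.get()
        if item is None:              # sentinel: no more work
            tasks.task_done()
            break
        results.put(item * item)      # "process" the task
        tasks.task_done()

if __name__ == "__main__":
    tasks, results = JoinableQueue(), Queue()
    workers = [Process(target=slave, args=(tasks, results)) for _ in range(3)]
    for w in workers:
        w.start()

    for item in range(10):            # master hands out the work
        tasks.put(item)
    for _ in workers:                 # one sentinel per slave
        tasks.put(None)

    tasks.join()                      # wait until the pool is drained
    print(sorted(results.get() for _ in range(10)))
    for w in workers:
        w.join()
```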
Thank You