INTERCONNECTION NETWORKS
Multiprocessor interconnection networks (INs) can be classified
according to a number of criteria:
(1) mode of operation (synchronous versus asynchronous).
(2) control strategy (centralized versus decentralized).
(3) switching techniques (circuit versus packet).
(4) topology (static versus dynamic).
Mode of Operation
In synchronous mode of operation, a single global clock is used by
all components in the system such that the whole system is
operating in a lock-step manner.
Asynchronous mode of operation, on the other hand, does not
require a global clock. Handshaking signals are used instead in
order to coordinate the operation of asynchronous systems.
Synchronous systems tend to be slower than asynchronous
systems.
Control Strategy
INs can be classified as centralized versus decentralized.
In centralized control systems, a single central control unit is used to
oversee and control the operation of the components of the
system.
In decentralized control, the control function is distributed among
the different components of the system.
The function and reliability of the central control unit can become
the bottleneck in a centralized control system.
• The crossbar is a centralized system.
• Multistage interconnection networks are decentralized.
Switching Techniques
Interconnection networks can be classified according to the
switching mechanism as circuit versus packet switching networks.
In the circuit switching mechanism, a complete path has to be
established prior to the start of communication between a source
and a destination. The established path will remain in existence
during the whole communication period.
In a packet switching mechanism, communication between a source
and destination takes place via messages that are divided into
smaller entities, called packets. On their way to the destination,
packets can be sent from a node to another in a store-and-forward
manner until they reach their destination.
While packet switching tends to use the network resources more
efficiently compared to circuit switching, it suffers from variable
packet delays.
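As a first-order sketch of this trade-off, the two mechanisms can be modeled in a few lines of Python. The numbers and the model itself (setup time, link bandwidth, packet pipelining, no queueing) are illustrative assumptions, not part of the text:

```python
import math

def circuit_latency(msg_bits, bw_bps, setup_s):
    # Circuit switching: the path is set up once, then the whole
    # message streams end to end over the reserved circuit.
    return setup_s + msg_bits / bw_bps

def packet_latency(msg_bits, pkt_bits, hops, bw_bps):
    # Store-and-forward packet switching: each packet is fully received
    # at a node before being forwarded; successive packets pipeline
    # one packet-time apart behind the first.
    n_pkts = math.ceil(msg_bits / pkt_bits)
    pkt_time = pkt_bits / bw_bps
    return (hops + n_pkts - 1) * pkt_time

bw = 1e9  # assumed 1 Gbit/s links
print(circuit_latency(1e6, bw, 5e-6))    # about 0.001005 s
print(packet_latency(1e6, 1e4, 4, bw))   # about 0.00103 s
```

Under these assumed numbers the two are comparable; in practice packet latency also varies with queueing at intermediate nodes, which is the variable-delay effect noted above.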
Topology
The topology describes how processors and memories are connected
to other processors and memories.
Interconnection networks can be classified as static versus dynamic
networks.
In static networks, direct fixed links are established among nodes to
form a fixed network, while in dynamic networks, connections are
established as needed.
Switching elements are used to establish connections among inputs
and outputs. Depending on the switch settings, different
interconnections can be established.
Shared memory systems can be designed using bus-based or switch-
based INs.
The simplest IN for shared memory systems is the bus. However, the
bus may get saturated if multiple processors are trying to access the
shared memory (via the bus) simultaneously.
A typical bus-based design uses caches to solve the bus contention
problem. Other shared memory designs rely on switches for
interconnection. For example, a crossbar switch can be used to
connect multiple processors to multiple memory modules.
Message passing INs can be divided into static and dynamic. Static
networks form all connections when the system is designed rather
than when the connection is needed. In a static network, messages
must be routed along established links.
Dynamic INs establish a connection between two or more nodes on
the fly as messages are routed along the links. The number of hops
in a path from source to destination node is equal to the number of
point-to-point links a message must traverse to reach its
destination.
The ultimate performance of an interconnection network is greatly
influenced by the number of hops taken to traverse the network.
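The worst-case hop count (the network diameter) of the static topologies named above can be computed directly from the standard formulas; this small sketch only restates those formulas in code:

```python
import math

def hypercube_diameter(n_nodes):
    # An n-dimensional hypercube with 2^d nodes: at most d = log2(N) hops.
    return int(math.log2(n_nodes))

def mesh_2d_diameter(side):
    # side x side 2D mesh without wraparound: corner to opposite corner.
    return 2 * (side - 1)

def k_ary_n_cube_diameter(k, n):
    # k-ary n-cube (torus): at most floor(k/2) hops per dimension.
    return n * (k // 2)

print(hypercube_diameter(64))       # 6
print(mesh_2d_diameter(8))          # 14
print(k_ary_n_cube_diameter(8, 2))  # 8
```

Note how, for 64 nodes, the hypercube (6 hops) beats the 8 x 8 mesh (14 hops) and the 8-ary 2-cube (8 hops), which is the sense in which hop count shapes performance.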
If the single path between a pair of nodes becomes faulty, that pair cannot
communicate. If two pairs attempt to communicate at the same time
along a shared path, one pair must wait for the other. This is called
blocking, and such MINs are called blocking networks. A network that
can handle all possible connections without blocking is called a
nonblocking network.
Multiprocessor Interconnection Networks
A multiprocessor system consists of multiple processing units
connected via some interconnection network plus the software
needed to make the processing units work together.
There are two major factors used to categorize such systems: the
processing units themselves, and the interconnection network that ties
them together.
A number of communication styles exist for multiprocessing networks.
These can be broadly classified according to the communication model
as shared memory (single address space) versus message passing
(multiple address spaces).
Communication in shared memory systems is performed by writing to
and reading from the global memory, while communication in message
passing systems is accomplished via send and receive commands.
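The two communication models can be contrasted in a toy sketch using threads in place of processors (an illustrative analogy, not the hardware mechanism): one pair communicates through a shared location, the other through explicit send/receive operations on a queue.

```python
import threading
import queue

# Shared memory model: communicate by writing to and reading from
# a common location (single address space).
shared = {"value": 0}
lock = threading.Lock()

def sm_producer():
    with lock:
        shared["value"] = 42        # "write to global memory"

# Message passing model: communicate via send (put) and receive (get).
chan = queue.Queue()

def mp_producer():
    chan.put(42)                    # "send"

t1 = threading.Thread(target=sm_producer)
t2 = threading.Thread(target=mp_producer)
t1.start(); t2.start()
t1.join(); t2.join()

print(shared["value"])  # read from "global memory"
print(chan.get())       # "receive"
```

The shared-memory reader must coordinate with locks (or cache coherence, in hardware), while the message-passing receiver synchronizes implicitly on the receive itself.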
The interconnection network plays a major role in determining the
communication speed.
Two schemes are introduced, namely static and dynamic
interconnection networks. Static networks form all connections
when the system is designed rather than when the connection is
needed.
In a static network, messages must be routed along established
links.
Dynamic interconnection networks establish connections between
two or more nodes on the fly as messages are routed along the
links.
The hypercube, mesh, and k-ary n-cube topologies are introduced as
examples of static networks, while the bus, crossbar, and multistage
interconnection topologies are introduced as examples of dynamic
interconnection networks.
BUS-BASED DYNAMIC INTERCONNECTION NETWORKS
Single Bus Systems
A single bus is considered the simplest way to connect
multiprocessor systems.
In its general form, such a system consists of N processors, each
having its own cache, connected by a shared bus.
The use of local caches reduces the processor–memory traffic.
All processors communicate with a single shared memory. The typical
size of such a system varies between 2 and 50 processors.
The actual size is determined by the traffic per processor and the bus
bandwidth (defined as the maximum rate at which the bus can
propagate data once transmission has started).
Single bus systems are simple and easy to expand. However, they
are inherently limited by the bandwidth of the bus and by the fact
that only one processor can access the bus at a time, so only one
memory access can take place at any given time.
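A back-of-the-envelope sizing sketch follows from the text's point that the actual system size is set by traffic per processor and bus bandwidth: the bus saturates roughly when total processor traffic reaches the bus bandwidth. The numbers below are illustrative assumptions:

```python
def max_processors(bus_bw_bps, traffic_per_proc_bps):
    # Largest N such that N * per-processor traffic <= bus bandwidth;
    # a crude saturation bound that ignores arbitration overhead.
    return int(bus_bw_bps // traffic_per_proc_bps)

# E.g. an assumed 10 Gbit/s bus with 250 Mbit/s of post-cache
# traffic per processor:
print(max_processors(10e9, 250e6))  # 40 -- inside the 2-50 range cited
```

This also shows why local caches matter: cutting per-processor bus traffic directly raises the number of processors the bus can support.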
Multiple Bus Systems
The use of multiple buses to connect multiple processors is a natural
extension to the single shared bus system.
A multiple bus multiprocessor system uses several parallel buses to
interconnect multiple processors and multiple memory modules.
A number of connection schemes are possible in this case.
• multiple bus with full bus–memory connection (MBFBMC).
• multiple bus with single bus memory connection (MBSBMC).
• multiple bus with partial bus–memory connection (MBPBMC).
• multiple bus with class-based memory connection (MBCBMC).
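One way to compare the schemes is by the number of physical bus connections each requires. The counts below are a hedged sketch assuming N processors, M memory modules, and B buses; the partial scheme further assumes the memories are split into g equal groups with each bus serving one group, and the class-based count is omitted because it depends on how classes are assigned:

```python
def full_connections(N, M, B):
    # MBFBMC: every bus taps every processor and every memory module.
    return B * (N + M)

def single_connections(N, M, B):
    # MBSBMC: every bus taps every processor, but each memory module
    # is attached to exactly one bus.
    return B * N + M

def partial_connections(N, M, B, g):
    # MBPBMC (assumed grouping): each bus taps all processors and one
    # group of M/g memory modules.
    return B * (N + M // g)

print(full_connections(8, 4, 2))        # 24
print(single_connections(8, 4, 2))      # 20
print(partial_connections(8, 4, 2, 2))  # 20
```

The full scheme buys the most routing flexibility at the highest wiring cost; the other schemes trade connectivity for fewer taps.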
The multiple bus multiprocessor organization offers a number of
desirable features, such as high reliability and ease of incremental
growth.
However, when the number of buses is less than the number of
memory modules (or the number of processors), bus contention is
expected to increase.
Bus Synchronization
A bus can be classified as synchronous or asynchronous.
The time for any transaction over a synchronous bus is known in
advance. In accepting and/or generating information over the bus,
devices take the transaction time into account.
An asynchronous bus, on the other hand, depends on the availability
of data and the readiness of devices to initiate bus transactions.
In a single bus multiprocessor system, bus arbitration is required in
order to resolve the bus contention that takes place when more
than one processor competes to access the bus.
Processors that want to use the bus submit their requests to the bus
arbitration logic, which decides, using a certain priority scheme,
which processor will be granted access to the bus during a certain
time interval (the bus master).
The process of passing bus mastership from one processor to
another is called handshaking and requires the use of two control
signals: bus request and bus grant. The first indicates that a given
processor is requesting mastership of the bus, while the second
indicates that bus mastership is granted. A third signal, called bus
busy, is usually used to indicate whether or not the bus is currently
being used.
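The request/grant/busy handshake described above can be sketched as a toy state machine. The fixed lowest-id-wins policy here is an assumed placeholder for "a certain priority scheme", chosen only for brevity:

```python
class BusArbiter:
    def __init__(self):
        self.busy = False        # the "bus busy" signal
        self.requests = set()    # processors asserting "bus request"

    def request(self, pid):
        self.requests.add(pid)

    def arbitrate(self):
        # Assert "bus grant" for one requester, but only if the bus
        # is free; the new master then asserts "bus busy".
        if self.busy or not self.requests:
            return None
        master = min(self.requests)   # assumed fixed priority: lowest id
        self.requests.discard(master)
        self.busy = True
        return master

    def release(self):
        self.busy = False             # master deasserts "bus busy"

arb = BusArbiter()
arb.request(2)
arb.request(0)
print(arb.arbitrate())  # 0 granted mastership
print(arb.arbitrate())  # None -- bus busy, processor 2 must wait
arb.release()
print(arb.arbitrate())  # 2 granted after the release
```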
In deciding which processor gains control of the bus, the bus
arbitration logic uses a predefined priority scheme.
Among the priority schemes used are random priority, simple
rotating priority, equal priority, and least recently used (LRU)
priority.
In simple rotating priority, all priority levels are reduced one place
after each arbitration cycle, with the lowest priority processor
taking the highest priority.
In equal priority, when two or more requests are made, there is
equal chance of any one request being processed. In the LRU
algorithm, the highest priority is given to the processor that has not
used the bus for the longest time.
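Two of these policies can be sketched directly from their descriptions: simple rotating priority (every level drops one place each cycle, the lowest becoming highest) and LRU (the processor idle longest wins). These are illustrative data structures, not the hardware arbiter:

```python
from collections import deque

class RotatingArbiter:
    def __init__(self, n):
        self.order = deque(range(n))   # front = highest priority

    def grant(self, requests):
        winner = next((p for p in self.order if p in requests), None)
        # After each arbitration cycle all priority levels drop one
        # place; the lowest-priority processor takes the highest.
        self.order.rotate(1)
        return winner

class LRUArbiter:
    def __init__(self, n):
        self.last_used = {p: -1 for p in range(n)}
        self.clock = 0

    def grant(self, requests):
        if not requests:
            return None
        # Highest priority: has not used the bus for the longest time.
        winner = min(requests, key=lambda p: self.last_used[p])
        self.clock += 1
        self.last_used[winner] = self.clock
        return winner

rot = RotatingArbiter(4)
print(rot.grant({1, 3}))  # 1 (priority order is 0,1,2,3)
print(rot.grant({1, 3}))  # 3 (order rotated to 3,0,1,2)
```

Under rotating priority, persistent requesters are served in turn; under LRU, a processor that keeps winning the bus sinks to the lowest priority until the others have had their chance.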