
Multiprocessors and their characteristics

A multiprocessor system is an interconnection of two or more CPUs with memory and input-output equipment. The "processor" may be either a central processing unit (CPU) or an input-output processor (IOP). Multiprocessors are multiple instruction stream, multiple data stream (MIMD) systems.

Although both multiprocessors and multicomputers support concurrent operations, there is a distinction between them. In a multicomputer, several autonomous computers are connected through a network and may or may not communicate with one another, whereas in a multiprocessor system there is a single OS control that provides interaction between processors, so that all the components of the system cooperate in the solution of a problem.

A multiprocessor system with common shared memory is classified as a shared-memory or tightly coupled multiprocessor. A system in which each processing element has its own private local memory is classified as a distributed-memory or loosely coupled system. When the interaction between tasks is minimal, a loosely coupled system is the most efficient.

Benefits of Multiprocessing:
1. Increased Reliability: Multiprocessing increases the reliability of the system, so that a failure or error in one part has a limited effect on the rest of the system. If a fault causes one processor to fail, a second processor can be assigned to perform the functions of the disabled one.

2. Improved System Performance: The system derives high performance from the fact that computations can proceed in parallel in one of two ways:
a) Multiple independent jobs can be made to operate in parallel.
b) A single job can be partitioned into multiple parallel tasks. This can be achieved in two
ways:
• The user explicitly declares that the tasks of the program be executed in parallel.
• A compiler provided with the multiprocessor software automatically detects parallelism in a program by checking for data dependencies.

Interconnection Structures
The various components of a multiprocessor system are CPUs, IOPs, and memory units. There are different physical configurations for the interconnected components. The physical configuration depends on the number of transfer paths that are available between the processors and memory in a shared-memory system, and among the processing elements in a loosely coupled system. The various physical forms are as follows.

 Time-shared common bus


 Multiport memory
 Crossbar switch
 Multistage switching network
 Hypercube system

Prepared by Praches Acharya, Kantipur City College


Time-shared common bus
A time-shared common bus consists of a number of processors connected through a common path to a memory unit. Part of the local memory may be designed as a cache memory attached to the CPU.

Disadvantages:
Only one processor can communicate with the memory or another processor at any given time.
The total transfer rate within the system is limited by the speed of the single path.
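The serialization described above can be sketched by modeling the single bus as a mutex: every transfer must acquire it, so transfers never overlap. The processor count and transfer counts below are made-up illustration values.

```python
# Illustrative sketch: a time-shared common bus modeled as a lock --
# only one processor-memory transfer can be in flight at a time.
import threading

bus = threading.Lock()          # the single shared path
memory = {"X": 0}

def transfer(n_writes):
    for _ in range(n_writes):
        with bus:               # acquire the bus before any transfer
            memory["X"] += 1    # serialized: correct, but no overlap

threads = [threading.Thread(target=transfer, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(memory["X"])  # 4000 -- all 4000 transfers happened one at a time
```

Adding processors to this model raises contention rather than throughput, which is exactly the stated limitation of the single path.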

Fig: Time shared common bus organization

Multiport Memory

Fig: Multiport memory organization


In a multiport memory system, each memory module and each CPU has a separate bus. Each module has internal control logic to determine which port will have access to memory at any given time. Fixed priorities are assigned to each memory port to resolve memory access conflicts.

Advantages:
Because of the multiple paths, a high transfer rate can be achieved.

Disadvantages:
It requires expensive memory control logic and a large number of cables and connections.
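The fixed-priority conflict resolution at each memory module can be sketched as follows; the port names and their priority order are hypothetical, chosen only to illustrate the mechanism.

```python
# Illustrative sketch: fixed-priority arbitration inside one memory module.
# Assume CPU1 has the highest priority and CPU4 the lowest (an assumption).
PORT_PRIORITY = ["CPU1", "CPU2", "CPU3", "CPU4"]

def grant(requests):
    """Given the set of ports requesting this module in the same cycle,
    grant access to the highest-priority requester."""
    for port in PORT_PRIORITY:
        if port in requests:
            return port
    return None  # no port is requesting this module

print(grant({"CPU3", "CPU2"}))  # CPU2 -- it outranks CPU3 on this module
```

Each module runs this logic independently, which is why the scheme needs control logic (and wiring) replicated per module.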



Crossbar switch

Fig: Crossbar Switch

A crossbar switch consists of a number of crosspoints placed at the intersections between the processor buses and the memory module paths. A switch determines the path from a processor to a memory module.

Advantages:
Supports simultaneous transfers from all memory modules.

Disadvantages:
The hardware required to implement the switch can be very large and complex.
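The trade-off above can be sketched in a few lines: a p × m crossbar needs p·m crosspoints (the hardware cost), but any set of simultaneous transfers succeeds as long as no processor and no memory module appears twice. The specific transfer patterns below are invented for illustration.

```python
# Illustrative sketch: conflict checking in a crossbar.
# transfers: list of (processor, module) pairs requested in the same cycle.
def conflict_free(transfers):
    procs = [p for p, _ in transfers]
    mods = [m for _, m in transfers]
    # Each processor and each module may participate in at most one transfer.
    return len(set(procs)) == len(procs) and len(set(mods)) == len(mods)

# 4 processors and 4 modules need 16 crosspoints, but all 4 processors
# can transfer at once when they target distinct modules:
print(conflict_free([(0, 2), (1, 0), (2, 3), (3, 1)]))  # True
print(conflict_free([(0, 2), (1, 2)]))                  # False: module 2 twice
```

The p·m crosspoint count is what makes the switch hardware grow quickly as processors and modules are added.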

Multistage Switching Network:

Fig: 8 x 8 Omega Switching Network



The basic component is a two-input, two-output interchange switch. One topology is the omega switching network shown in the figure above. Some request patterns cannot be connected simultaneously; for example, no two sources can be connected simultaneously to destinations 000 and 001. In a tightly coupled multiprocessor system, the source is a processor and the destination is a memory module: set up the path → transfer the address into memory → transfer the data. In a loosely coupled multiprocessor system, both the source and the destination are processing elements.
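The blocking behavior can be sketched with destination-tag routing in an 8 × 8 omega network: at each stage the message is perfect-shuffled and then one destination bit selects the switch output. Following the text's interchange switch, the sketch assumes each 2 × 2 switch can carry only one connection at a time; the conflict rule then reduces to two paths sharing a switch at the same stage.

```python
# Illustrative sketch of destination-tag routing in an 8x8 omega network.
# Assumption: a 2x2 interchange switch carries only one connection at a time.
N_BITS = 3

def switches_used(src, dst):
    """Return the (stage, switch) pairs a message from src to dst passes through."""
    pos, used = src, []
    for stage in range(N_BITS):
        # Perfect shuffle: left-rotate the 3-bit position.
        pos = ((pos << 1) | (pos >> (N_BITS - 1))) & (2**N_BITS - 1)
        used.append((stage, pos >> 1))              # switch holding this position
        bit = (dst >> (N_BITS - 1 - stage)) & 1     # destination tag bit for this stage
        pos = (pos & ~1) | bit                      # exchange: pick upper/lower output
    return used

def blocked(path_a, path_b):
    """Two (src, dst) requests conflict if they ever need the same switch."""
    return bool(set(switches_used(*path_a)) & set(switches_used(*path_b)))

# Destinations 000 and 001 share the final-stage switch, so any pair of
# sources targeting them is blocked:
print(blocked((0b000, 0b000), (0b001, 0b001)))  # True
print(blocked((0b000, 0b000), (0b111, 0b111)))  # False
```

This matches the example in the text: whatever the sources, the two paths to 000 and 001 funnel into the same last-stage switch.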

Hypercube System
The hypercube or binary n-cube multiprocessor structure is a loosely coupled system composed of N = 2^n processors interconnected in an n-dimensional binary cube. Routing messages through an n-cube structure may take from one to n links from a source node to a destination node. For example, with n = 7 the system consists of 128 microcomputers, where each node consists of a CPU, a floating-point processor, local memory, and serial communication interface units.

Fig: Hypercube Structures for n=1,2,3
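The one-to-n-links claim above follows from the addressing: neighboring nodes differ in exactly one address bit, so a message crosses one link per differing bit and the hop count is the Hamming distance between the two addresses. The sketch below corrects bits from the lowest dimension upward, which is one simple (assumed) routing order.

```python
# Illustrative sketch: dimension-order routing in a binary n-cube.
def route(src, dst):
    """Return the sequence of node addresses visited from src to dst,
    correcting differing address bits from the lowest dimension upward."""
    path, node = [src], src
    diff = src ^ dst          # bits where the two addresses disagree
    bit = 0
    while diff:
        if diff & 1:
            node ^= (1 << bit)   # traverse the link in dimension `bit`
            path.append(node)
        diff >>= 1
        bit += 1
    return path

# In a 3-cube (n = 3, N = 8 nodes): from node 010 to node 101,
# all three bits differ, so the route uses the maximum of n = 3 links.
print(route(0b010, 0b101))  # [2, 3, 1, 5]
```

When the addresses differ in only one bit the route is a single link, which gives the "one to n links" range stated in the text.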

INTERPROCESSOR ARBITRATION
Only one of the CPUs, IOPs, and memory units can be granted use of the bus at a time. An arbitration mechanism is needed to handle multiple requests to the shared resources and resolve contention.

System Bus:
A bus that connects the major components such as CPUs, IOPs, and memory. A typical system bus consists of about 100 signal lines divided into three functional groups: data, address, and control lines. In addition, there are power distribution lines to the components.

Interprocessor Communication
The various processors in a multiprocessor system must be provided with a facility for communicating with each other. The sending processor structures a request, a message, or a procedure and places it in a memory mailbox.
A more efficient procedure is for the sending processor to alert the receiving processor by means of an interrupt signal. In addition to shared memory, a multiprocessor system may have various other shared resources, e.g., a magnetic disk storage unit. There must be a provision for assigning resources to processors to prevent conflicting use of shared resources; this is the role of the operating system.
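The mailbox scheme can be sketched with a shared queue: the sender deposits a structured request, and the receiver blocks until a message arrives rather than polling (standing in for the interrupt-based alert). The processor names and the request contents are invented for illustration.

```python
# Illustrative sketch of mailbox-style interprocessor communication.
import queue
import threading

mailbox = queue.Queue()  # stands in for the shared-memory mailbox

def sender():
    # The sending processor structures a request and places it in the mailbox.
    mailbox.put({"from": "P1", "request": "read_sector", "arg": 42})

def receiver(results):
    # Blocking get() plays the role of waiting for the interrupt signal.
    msg = mailbox.get()
    results.append(f"P2 handled {msg['request']}({msg['arg']}) from {msg['from']}")

results = []
t = threading.Thread(target=receiver, args=(results,))
t.start()
sender()
t.join()
print(results[0])
```

Blocking on the mailbox is what makes this more efficient than the polling alternative: the receiver does no work until it is alerted.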



There are three organizations used in the design of operating systems for multiprocessors: master-slave configuration, separate operating system, and distributed operating system. In a master-slave configuration, one processor, the master, always executes the operating system functions.
In the separate operating system organization, each processor can execute the operating system routines it needs. This organization is suitable for loosely coupled systems.
In the distributed operating system organization, the operating system routines are distributed among the available processors. However, each operating system function is assigned to only one processor at a particular time. This is also referred to as a floating operating system.

Interprocessor Synchronization
The instruction set of a multiprocessor contains basic instructions that are used to implement communication and synchronization among cooperating processes. Synchronization is needed to enforce the correct sequence of processes and to ensure mutually exclusive access to shared writable data.
Multiprocessor systems include various mechanisms to deal with the synchronization of resources. Low-level primitives are implemented directly by the hardware. These primitives are the basic mechanisms from which mutual exclusion for more complex mechanisms is implemented in software. For mutual exclusion, a number of hardware mechanisms have been developed, such as the binary semaphore.
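A binary semaphore built on an atomic test-and-set primitive can be sketched as follows; the hardware atomicity is modeled here with a lock, and the thread counts are illustrative only.

```python
# Illustrative sketch: a binary semaphore over a test-and-set style primitive.
import threading

class BinarySemaphore:
    def __init__(self):
        self._flag = 0
        self._guard = threading.Lock()   # models the hardware's atomicity

    def test_and_set(self):
        """Atomically read the old flag value and set the flag to 1."""
        with self._guard:
            old, self._flag = self._flag, 1
            return old

    def release(self):
        self._flag = 0

sem = BinarySemaphore()
counter = 0

def critical(n):
    global counter
    for _ in range(n):
        while sem.test_and_set() == 1:
            pass                 # busy-wait: another processor holds the semaphore
        counter += 1             # critical section on shared writable data
        sem.release()

threads = [threading.Thread(target=critical, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000 -- mutual exclusion prevented lost updates
```

Test-and-set returning 0 means the caller won the semaphore; returning 1 means it was already set, so the caller spins until the holder releases it.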

Cache Coherence
In a shared memory multiprocessor system, all the processors share a common memory. In
addition, each processor may have a local memory, part or all of which may be a cache. The
compelling reason for having separate caches for each processor is to reduce the average access
time in each processor. The same information may reside in a number of copies in some caches
and main memory. To ensure the ability of the system to execute memory operations correctly,
the multiple copies must be kept identical. This requirement imposes a cache coherence problem.

To illustrate the cache coherence problem, consider the three-processor configuration with private caches shown in Fig. 13-12. Sometime during the operation, an element X from main memory is loaded into the three processors P1, P2, and P3. As a consequence, it is also copied into the private caches of the three processors. For simplicity, assume that X contains the value 52. The load of X into the three processors results in consistent copies in the caches and main memory.



If one of the processors performs a store to X, the copies of X in the caches become inconsistent.
A load by the other processors will not return the latest value. Depending on the memory update
policy used in the cache, the main memory may also be inconsistent with respect to the cache.
This is shown in Fig. 13-13. A store to X (of the value of 120) into the cache of processor P1
updates memory to the new value in a write-through policy. A write-through policy maintains
consistency between memory and the originating cache, but the other two caches are inconsistent
since they still hold the old value. In a write-back policy, main memory is not updated at the time
of the store. The copies in the other two caches and main memory are inconsistent. Memory is
updated eventually when the modified data in the cache are copied back into memory.
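The X = 52 → 120 scenario above can be sketched directly: after P1's store, the other caches hold stale copies under either policy, and under write-back main memory is stale as well. The dictionary-based model is a simplification invented for illustration.

```python
# Illustrative sketch of the inconsistency after P1 stores 120 into X.
main_memory = {"X": 52}
caches = {"P1": {"X": 52}, "P2": {"X": 52}, "P3": {"X": 52}}  # consistent copies

def store(cpu, addr, value, policy):
    caches[cpu][addr] = value
    if policy == "write-through":
        main_memory[addr] = value  # memory is updated at the time of the store
    # Under "write-back", memory is updated only when the line is later evicted.

store("P1", "X", 120, policy="write-through")
print(main_memory["X"], caches["P2"]["X"])  # 120 52 -- P2 and P3 are stale
```

Switching the policy to "write-back" would leave `main_memory["X"]` at 52 too, matching the write-back case described in the text.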

Solutions to the Cache Coherence Problem


Various schemes have been proposed to solve the cache coherence problem in shared memory
multiprocessors. We discuss some of these schemes briefly here.

(1) Use of only Shared Cache memory:


A simple scheme is to disallow private caches for each processor and have a single shared cache memory associated with main memory. Every data access is made to the shared cache. This method violates the principle of keeping the cache close to the CPU and increases the average memory access time.



(2) Snoopy Cache Controller:
It is a hardware solution to the cache coherence problem. The cache controller is specially designed to allow it to monitor all bus requests from CPUs and IOPs. All caches attached to the bus constantly monitor the network for possible write operations. Depending on the method used, they must then either update or invalidate their own cache copies when a match is detected. The bus controller that monitors this activity is referred to as a snoopy cache controller. It is basically a hardware unit designed to maintain a bus-watching mechanism over all the caches attached to the bus.

All the snoopy controllers watch the bus for memory store operations. When a word in a cache is
updated by writing into it, the corresponding location in main memory is also updated. The local
snoopy controllers in all other caches check their memory to determine if they have a copy of the
word that has been overwritten. If a copy exists in a remote cache, that location is marked
invalid. Because all caches snoop on all bus writes, whenever a word is written, the net effect is
to update it in the original cache and main memory and remove it from all other caches.
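The write-invalidate behavior described above can be sketched as follows: every cache watches bus writes, and on an address match the remote copies are invalidated while the originating cache and main memory keep the new value. As before, the dictionary model is a simplification for illustration.

```python
# Illustrative sketch of a snoopy write-invalidate protocol (write-through).
main_memory = {"X": 52}
caches = {"P1": {"X": 52}, "P2": {"X": 52}, "P3": {"X": 52}}

def bus_write(cpu, addr, value):
    caches[cpu][addr] = value            # update the originating cache
    main_memory[addr] = value            # write-through to main memory
    for other, cache in caches.items():  # every snoopy controller watches the bus
        if other != cpu and addr in cache:
            del cache[addr]              # invalidate the remote copy

bus_write("P1", "X", 120)
print(caches["P1"], main_memory["X"], "X" in caches["P2"])
```

After the write, only the originating cache and main memory hold X; P2 and P3 will miss on their next load and fetch the up-to-date value, which is the net effect the text describes.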

