PIPELINING AND VECTOR PROCESSING
• Parallel Processing
• Pipelining
• Arithmetic Pipeline
• Instruction Pipeline
• RISC Pipeline
• Vector Processing
• Array Processors
PARALLEL PROCESSING
Execution of concurrent events in the computing
process to achieve faster computational speed
Levels of Parallel Processing
- Job or Program level
- Task or Procedure level
- Inter-Instruction level
- Intra-Instruction level
PARALLEL COMPUTERS
Architectural Classification

Flynn's classification
  Based on the multiplicity of Instruction Streams and Data Streams
  Instruction Stream
    Sequence of instructions read from memory
  Data Stream
    Operations performed on the data in the processor

                                 Number of Data Streams
                                 Single      Multiple
  Number of         Single       SISD        SIMD
  Instruction
  Streams           Multiple     MISD        MIMD
COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING

Von Neumann based
  SISD    Superscalar processors
          Superpipelined processors
          VLIW
  MISD    Nonexistent
  SIMD    Array processors
          Systolic arrays
          Associative processors
  MIMD    Shared-memory multiprocessors
            Bus based
            Crossbar switch based
            Multistage IN based
          Message-passing multicomputers
            Hypercube
            Mesh
Dataflow
Reduction
SISD COMPUTER SYSTEMS

[Diagram: the Control Unit sends the instruction stream to the
Processor Unit, which exchanges the data stream with Memory]

Characteristics
- Standard von Neumann machine
- Instructions and data are stored in memory
- One operation at a time

Limitations
  Von Neumann bottleneck
    Maximum speed of the system is limited by the
    memory bandwidth (bits/sec or bytes/sec)
  - Limitation on memory bandwidth
  - Memory is shared by CPU and I/O
SISD PERFORMANCE IMPROVEMENTS
• Multiprogramming
• Spooling
• Multifunction processor
• Pipelining
• Exploiting instruction-level parallelism
- Superscalar
- Superpipelining
- VLIW (Very Long Instruction Word)
MISD COMPUTER SYSTEMS

[Diagram: multiple control units, each feeding its own instruction
stream to its own processor; all processors operate on a single
data stream from memory]

Characteristics
- There is no computer at present that can be
  classified as MISD
SIMD COMPUTER SYSTEMS

[Diagram: one control unit broadcasts a single instruction stream to
an array of processor units; the processors reach the memory modules
through an alignment network, each handling its own data stream]

Characteristics
- Only one copy of the program exists
- A single controller executes one instruction at a time
TYPES OF SIMD COMPUTERS
Array Processors
- The control unit broadcasts instructions to all PEs,
and all active PEs execute the same instructions
- ILLIAC IV, GF-11, Connection Machine, DAP, MPP
Systolic Arrays
- Regular arrangement of a large number of
very simple processors constructed on
VLSI circuits
- CMU Warp, Purdue CHiP
Associative Processors
- Content addressing
- Data transformation operations over many sets
of arguments with a single instruction
- STARAN, PEPE
MIMD COMPUTER SYSTEMS

[Diagram: processor/memory pairs attached to an interconnection
network that links them to a shared memory]

Characteristics
- Multiple processing units
- Execution of multiple instructions on multiple data

Types of MIMD computer systems
- Shared-memory multiprocessors
- Message-passing multicomputers
SHARED MEMORY MULTIPROCESSORS

[Diagram: processors connected to memory modules through an
interconnection network (IN): buses, multistage IN, or crossbar switch]

Characteristics
  All processors have equally direct access to
  one large memory address space

Example systems
  Bus and cache-based systems
  - Sequent Balance, Encore Multimax
  Multistage IN-based systems
  - Ultracomputer, Butterfly, RP3, HEP
  Crossbar switch-based systems
  - C.mmp, Alliant FX/8

Limitations
  Memory access latency
  Hot spot problem
MESSAGE-PASSING MULTICOMPUTER

[Diagram: processor/memory nodes linked by a message-passing network
of point-to-point connections]

Characteristics
- Interconnected computers
- Each processor has its own memory and communicates
  via message passing

Example systems
- Tree structure: Teradata, DADO
- Mesh-connected: Rediflow, Series 2010, J-Machine
- Hypercube: Cosmic Cube, iPSC, NCUBE, FPS T Series, Mark III

Limitations
- Communication overhead
- Hard to program
PIPELINING

A technique of decomposing a sequential process
into suboperations, with each subprocess being
executed in a special dedicated segment that
operates concurrently with all other segments.

Example: Ai * Bi + Ci  for i = 1, 2, 3, ..., 7

[Diagram: Ai, Bi, and Ci arrive from memory; Segment 1 latches Ai and
Bi into R1 and R2; Segment 2 holds the multiplier with registers R3
and R4; Segment 3 holds the adder with register R5]

Segment 1:  R1 <- Ai,  R2 <- Bi         Load Ai and Bi
Segment 2:  R3 <- R1 * R2,  R4 <- Ci    Multiply and load Ci
Segment 3:  R5 <- R3 + R4               Add
OPERATIONS IN EACH PIPELINE STAGE

Clock    Segment 1    Segment 2            Segment 3
Pulse
Number   R1    R2     R3         R4        R5
  1      A1    B1
  2      A2    B2     A1 * B1    C1
  3      A3    B3     A2 * B2    C2        A1 * B1 + C1
  4      A4    B4     A3 * B3    C3        A2 * B2 + C2
  5      A5    B5     A4 * B4    C4        A3 * B3 + C3
  6      A6    B6     A5 * B5    C5        A4 * B4 + C4
  7      A7    B7     A6 * B6    C6        A5 * B5 + C5
  8                   A7 * B7    C7        A6 * B6 + C6
  9                                        A7 * B7 + C7
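
The following Python snippet (a sketch, not part of the original
slides) simulates the three-segment pipeline and reproduces the
stage contents in the table above.

A = [f"A{i}" for i in range(1, 8)]
B = [f"B{i}" for i in range(1, 8)]
C = [f"C{i}" for i in range(1, 8)]

r1 = r2 = r3 = r4 = r5 = None
for clock in range(1, 10):
    # All registers latch on the same clock edge, so update the later
    # segments first, using the values latched on the previous cycle.
    r5 = f"{r3} + {r4}" if r3 else None               # Segment 3: adder
    if r1:
        r3, r4 = f"{r1} * {r2}", C[clock - 2]         # Segment 2: multiplier
    else:
        r3 = r4 = None
    if clock <= 7:
        r1, r2 = A[clock - 1], B[clock - 1]           # Segment 1: load Ai, Bi
    else:
        r1 = r2 = None
    print(clock, r1, r2, r3, r4, r5)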
GENERAL PIPELINE

General Structure of a 4-Segment Pipeline

[Diagram: Input -> S1 -> R1 -> S2 -> R2 -> S3 -> R3 -> S4 -> R4, with
a common clock driving all registers]

Space-Time Diagram
              1    2    3    4    5    6    7    8    9    Clock cycles
Segment 1     T1   T2   T3   T4   T5   T6
        2          T1   T2   T3   T4   T5   T6
        3               T1   T2   T3   T4   T5   T6
        4                    T1   T2   T3   T4   T5   T6
PIPELINE SPEEDUP

n: Number of tasks to be performed

Conventional Machine (Non-Pipelined)
  tn: Clock cycle
  Time required to complete the n tasks = n * tn

Pipelined Machine (k stages)
  tp: Clock cycle (time to complete each suboperation)
  Time required to complete the n tasks = (k + n - 1) * tp

Speedup
  Sk: Speedup
  Sk = n * tn / [(k + n - 1) * tp]

  lim Sk = tn / tp    ( = k, if tn = k * tp )
  n→∞
PIPELINE AND MULTIPLE FUNCTION UNITS

Example
- 4-stage pipeline
- suboperation in each stage: tp = 20 ns
- 100 tasks to be executed
- 1 task in the non-pipelined system: 20 * 4 = 80 ns

Pipelined System
  (k + n - 1) * tp = (4 + 99) * 20 = 2060 ns

Non-Pipelined System
  n * k * tp = 100 * 80 = 8000 ns

Speedup
  Sk = 8000 / 2060 = 3.88
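
A quick Python check of these numbers (a sketch using the slide's
values: k = 4 stages, tp = 20 ns, n = 100 tasks, tn = k * tp):

k, tp, n = 4, 20, 100
t_nonpipe = n * k * tp            # 8000 ns
t_pipe = (k + n - 1) * tp         # (4 + 99) * 20 = 2060 ns
print(round(t_nonpipe / t_pipe, 2))   # 3.88, approaching k = 4 as n grows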
[Diagram: a 4-stage pipeline is basically identical to a system with
four identical function units P1-P4 operating on successive tasks
Ii, Ii+1, Ii+2, Ii+3 in parallel]
ARITHMETIC PIPELINE

Floating-point adder
  X = A x 2^a
  Y = B x 2^b

  [1] Compare the exponents
  [2] Align the mantissas
  [3] Add/subtract the mantissas
  [4] Normalize the result

[Diagram: exponents a, b and mantissas A, B enter through input
registers; Segment 1 compares the exponents by subtraction; Segment 2
chooses the exponent and aligns the mantissa; Segment 3 adds or
subtracts the mantissas; Segment 4 adjusts the exponent and
normalizes the result, with registers between segments]
4-STAGE FLOATING POINT ADDER

A = a x 2^p    B = b x 2^q

Stages:
  S1: Exponent subtractor computes t = |p - q|; the other-fraction
      selector picks the fraction with the smaller exponent;
      r = max(p, q)
  S2: Right shifter aligns that fraction by t positions; the fraction
      adder produces c
  S3: Leading-zero counter and left shifter normalize c into d
  S4: Exponent adder adjusts r into s

C = A + B = c x 2^r = d x 2^s
(r = max(p, q),  0.5 <= d < 1)
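
A Python sketch of the four stages (assumption: base-10 fractions
with 0.1 <= d < 1 stand in for the binary hardware's 0.5 <= d < 1):

def fp_add(a, p, b, q):
    # S1: subtract exponents; r = max(p,q), t = |p - q|
    r, t = max(p, q), abs(p - q)
    # S2: right-shift the fraction with the smaller exponent, then add
    if p < q:
        a *= 10.0 ** -t
    else:
        b *= 10.0 ** -t
    c = a + b
    # S3: normalize by counting leading zeros and left-shifting
    d, s = c, r
    while d != 0 and abs(d) < 0.1:
        d, s = d * 10, s - 1
    # S4: adjust the exponent if the fraction overflowed past 1
    while abs(d) >= 1:
        d, s = d / 10, s + 1
    return d, s

# e.g. 0.9504 x 10^3 + 0.8200 x 10^2 = 0.10324 x 10^4
print(fp_add(0.9504, 3, 0.8200, 2))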
INSTRUCTION CYCLE

Six Phases* in an Instruction Cycle
[1] Fetch an instruction from memory
[2] Decode the instruction
[3] Calculate the effective address of the operand
[4] Fetch the operands from memory
[5] Execute the operation
[6] Store the result in the proper place

* Some instructions skip some phases
* Effective address calculation can be done as
  part of the decoding phase
* Storage of the operation result into a register
  is done automatically in the execution phase
==> 4-Stage Pipeline
[1] FI: Fetch an instruction from memory
[2] DA: Decode the instruction and calculate
the effective address of the operand
[3] FO: Fetch the operand
[4] EX: Execute the operation
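
A small Python sketch (not from the slides) that prints the
space-time diagram of this 4-stage pipeline for a hazard-free
instruction stream:

STAGES = ["FI", "DA", "FO", "EX"]

def space_time(n_instr):
    width = n_instr + len(STAGES) - 1   # total clock cycles
    for i in range(n_instr):
        row = ["  "] * width
        for s, name in enumerate(STAGES):
            row[i + s] = name           # instruction i is in stage s at cycle i+s+1
        print(f"i+{i}: " + " ".join(row))

space_time(3)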
INSTRUCTION PIPELINE

Execution of Three Instructions in a 4-Stage Pipeline

Conventional
  i     FI DA FO EX
  i+1               FI DA FO EX
  i+2                           FI DA FO EX

Pipelined
  i     FI DA FO EX
  i+1      FI DA FO EX
  i+2         FI DA FO EX
INSTRUCTION EXECUTION IN A 4-STAGE PIPELINE

[Flowchart: Segment 1 fetches the instruction from memory; Segment 2
decodes it and calculates the effective address, then tests for a
branch; Segment 3 fetches the operand from memory; Segment 4 executes
the instruction, then tests for an interrupt. On a branch, and after
interrupt handling, the pipe is emptied and the PC is updated before
fetching continues]

Timing with instruction 3 a branch:

  Step:          1   2   3   4   5   6   7   8   9   10  11  12  13
  Instruction 1  FI  DA  FO  EX
              2      FI  DA  FO  EX
  (Branch)    3          FI  DA  FO  EX
              4              FI  -   -   FI  DA  FO  EX
              5                          -   FI  DA  FO  EX
              6                                  FI  DA  FO  EX
              7                                      FI  DA  FO  EX
MAJOR HAZARDS IN PIPELINED EXECUTION

Structural hazards (Resource Conflicts)
  Hardware resources required by instructions in
  simultaneous overlapped execution cannot be met

Data hazards (Data Dependency Conflicts)
  An instruction scheduled to be executed in the pipeline requires the
  result of a previous instruction, which is not yet available

  Example:  R1 <- B + C     (ADD)
            R1 <- R1 + 1    (INC)
  INC must wait (a bubble) in its decode stage until ADD
  has produced the new R1

Control hazards
  Branches and other instructions that change the PC
  delay the fetch of the next instruction

  Example: after a JMP, the next fetch depends on the branch address,
  so bubbles flow through the pipe until the target is known

Hazards in pipelines may make it necessary to stall the pipeline
Pipeline interlock: detect hazards and stall until they are cleared
STRUCTURAL HAZARDS

Structural Hazards
  Occur when some resource has not been duplicated
  enough to allow all combinations of instructions
  in the pipeline to execute

Example: with one memory port, a data fetch and an instruction fetch
cannot be initiated in the same clock cycle

  i     FI DA FO EX
  i+1      FI DA FO EX
  i+2         stall stall FI DA FO EX

The pipeline is stalled for a structural hazard
- Two loads with a one-port memory
- A two-port memory would serve both accesses without a stall
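
A Python sketch of this stall (assumption: every instruction performs
an operand fetch, and FO occurs two cycles after FI):

def fetch_cycles(n_instr):
    issued = []                    # cycle in which each FI occurs
    cycle = 1
    for _ in range(n_instr):
        # an instruction fetched at cycle c occupies memory for FO at c + 2
        while any(c + 2 == cycle for c in issued):
            cycle += 1             # stall FI: the single port is busy with an FO
        issued.append(cycle)
        cycle += 1
    return issued

print(fetch_cycles(3))   # [1, 2, 5]: i+2 stalls twice, as in the diagram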
DATA HAZARDS

Data Hazards
  Occur when the execution of an instruction
  depends on the result of a previous instruction

  ADD R1, R2, R3
  SUB R4, R1, R5

Data hazards can be dealt with by either hardware
or software techniques

Hardware Techniques
  Interlock
  - hardware detects the data dependency and delays the scheduling
    of the dependent instruction by stalling enough clock cycles
  Forwarding (bypassing, short-circuiting)
  - accomplished by a data path that routes a value from a source
    (usually an ALU) to a user, bypassing a designated register. This
    allows the value to be used at an earlier stage in the pipeline
    than would otherwise be possible

Software Technique
  Instruction scheduling (compiler) for delayed load
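
A Python sketch contrasting the two hardware techniques on the
ADD/SUB pair above, in the 3-stage I/A/E pipeline of the next slide
(the ALU produces the result in A; the register is written in E):

def stall_cycles(writer_A, reader_A, forwarding):
    if forwarding:
        return 0                   # bypass feeds the ALU output straight to A
    # interlock: the reader's A must follow the writer's E (= writer_A + 1)
    return max(0, (writer_A + 2) - reader_A)

# ADD: I=1 A=2 E=3; SUB would enter A at cycle 3
print(stall_cycles(writer_A=2, reader_A=3, forwarding=False))  # 1 stall
print(stall_cycles(writer_A=2, reader_A=3, forwarding=True))   # 0 stalls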
FORWARDING HARDWARE

Example:
  ADD R1, R2, R3
  SUB R4, R1, R5

3-stage Pipeline
  I: Instruction Fetch
  A: Decode, Read Registers, ALU Operations
  E: Write the result to the destination register

[Diagram: the ALU result buffer feeds a bypass path back through the
MUXes at the ALU inputs, so a result can be used before it reaches
the register file over the result write bus]

  ADD   I  A  E
  SUB      I  -  A  E     Without Bypassing
  SUB      I  A  E        With Bypassing
INSTRUCTION SCHEDULING

a = b + c;
d = e - f;

Unscheduled code:         Scheduled code:
  LW   Rb, b                LW   Rb, b
  LW   Rc, c                LW   Rc, c
  ADD  Ra, Rb, Rc           LW   Re, e
  SW   a, Ra                ADD  Ra, Rb, Rc
  LW   Re, e                LW   Rf, f
  LW   Rf, f                SW   a, Ra
  SUB  Rd, Re, Rf           SUB  Rd, Re, Rf
  SW   d, Rd                SW   d, Rd

Delayed Load
  A load requiring that the following instruction not use its result
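
A Python sketch of the check such a compiler performs: flag every
load whose result is used by the very next instruction (each flagged
load costs a stall unless the code is rescheduled):

def load_use_hazards(code):
    # code: list of (opcode, destination, sources)
    return [f"{code[i][0]} {code[i][1]}"
            for i in range(len(code) - 1)
            if code[i][0] == "LW" and code[i][1] in code[i + 1][2]]

unscheduled = [("LW", "Rb", []), ("LW", "Rc", []),
               ("ADD", "Ra", ["Rb", "Rc"]), ("SW", "a", ["Ra"]),
               ("LW", "Re", []), ("LW", "Rf", []),
               ("SUB", "Rd", ["Re", "Rf"]), ("SW", "d", ["Rd"])]

print(load_use_hazards(unscheduled))   # ['LW Rc', 'LW Rf']
# The scheduled version above separates both pairs, so neither stalls.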
CONTROL HAZARDS

Branch Instructions
- The branch target address is not known until
  the branch instruction is completed

  Branch instruction  FI DA FO EX
  Next instruction                FI DA FO EX
                                  ^ target address available

- Stall -> wasted cycles

Dealing with Control Hazards
* Prefetch Target Instruction
* Branch Target Buffer
* Loop Buffer
* Branch Prediction
* Delayed Branch
CONTROL HAZARDS

Prefetch Target Instruction
  Fetch instructions in both streams, branch not taken and branch taken
  Both are saved until the branch is executed. Then, select the right
  instruction stream and discard the wrong one

Branch Target Buffer (BTB; Associative Memory)
  Entry: address of a previously executed branch; target instruction
  and the next few instructions
  When fetching an instruction, search the BTB.
  If found, fetch the instruction stream in the BTB;
  if not, fetch the new stream and update the BTB

Loop Buffer (High Speed Register File)
  Stores an entire loop, allowing it to execute without accessing memory

Branch Prediction
  Guess the branch condition and fetch an instruction stream based on
  the guess. A correct guess eliminates the branch penalty

Delayed Branch
  The compiler detects the branch and rearranges the instruction sequence
  by inserting useful instructions that keep the pipeline busy
  in the presence of a branch instruction
RISC PIPELINE

RISC
- Machine with a very fast clock cycle that
  executes at the rate of one instruction per cycle
  <- Simple instruction set
     Fixed-length instruction format
     Register-to-register operations
Instruction Cycles of Three-Stage Instruction Pipeline
Data Manipulation Instructions
I: Instruction Fetch
A: Decode, Read Registers, ALU Operations
E: Write a Register
Load and Store Instructions
I: Instruction Fetch
A: Decode, Evaluate Effective Address
E: Register-to-Memory or Memory-to-Register
Program Control Instructions
I: Instruction Fetch
A: Decode, Evaluate Branch Address
E: Write Register(PC)
DELAYED LOAD

LOAD:  R1 <- M[address 1]
LOAD:  R2 <- M[address 2]
ADD:   R3 <- R1 + R2
STORE: M[address 3] <- R3

Three-segment pipeline timing

Pipeline timing with data conflict
  clock cycle   1  2  3  4  5  6
  Load R1       I  A  E
  Load R2          I  A  E
  Add R1+R2           I  A  E
  Store R3               I  A  E

Pipeline timing with delayed load
  clock cycle   1  2  3  4  5  6  7
  Load R1       I  A  E
  Load R2          I  A  E
  NOP                 I  A  E
  Add R1+R2              I  A  E
  Store R3                  I  A  E

The data dependency is taken care of by the compiler
rather than the hardware
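
A Python sketch of that compiler pass: insert a NOP after any load
whose result is needed by the immediately following instruction.

def insert_nops(code):
    out = []
    for i, (op, dest, srcs) in enumerate(code):
        out.append((op, dest, srcs))
        nxt = code[i + 1] if i + 1 < len(code) else None
        if op == "LOAD" and nxt and dest in nxt[2]:
            out.append(("NOP", None, []))
    return out

prog = [("LOAD", "R1", []), ("LOAD", "R2", []),
        ("ADD", "R3", ["R1", "R2"]), ("STORE", "M3", ["R3"])]
for ins in insert_nops(prog):
    print(ins)    # a NOP appears between LOAD R2 and ADD, as above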
DELAYED BRANCH

The compiler analyzes the instructions before and after
the branch and rearranges the program sequence by
inserting useful instructions in the delay steps

Using no-operation instructions
  Clock cycles:    1  2  3  4  5  6  7  8  9  10
  1. Load          I  A  E
  2. Increment        I  A  E
  3. Add                 I  A  E
  4. Subtract               I  A  E
  5. Branch to X               I  A  E
  6. NOP                          I  A  E
  7. NOP                             I  A  E
  8. Instr. in X                        I  A  E

Rearranging the instructions
  Clock cycles:    1  2  3  4  5  6  7  8
  1. Load          I  A  E
  2. Increment        I  A  E
  3. Branch to X         I  A  E
  4. Add                    I  A  E
  5. Subtract                  I  A  E
  6. Instr. in X                  I  A  E
VECTOR PROCESSING

Vector Processing Applications
  Problems that can be efficiently formulated in terms of vectors
  - Long-range weather forecasting
  - Petroleum exploration
  - Seismic data analysis
  - Medical diagnosis
  - Aerodynamics and space flight simulations
  - Artificial intelligence and expert systems
  - Mapping the human genome
  - Image processing

Vector Processor (computer)
  Ability to process vectors, and related data structures such as
  matrices and multi-dimensional arrays, much faster than
  conventional computers
  Vector processors may also be pipelined
VECTOR PROGRAMMING

DO 20 I = 1, 100
20   C(I) = B(I) + A(I)

Conventional computer
  Initialize I = 0
  20  Read A(I)
      Read B(I)
      Store C(I) = A(I) + B(I)
      Increment I = I + 1
      If I <= 100 goto 20

Vector computer
  C(1:100) = A(1:100) + B(1:100)
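
A Python/NumPy sketch of the contrast (assumption: NumPy's
elementwise add stands in for the vector hardware):

import numpy as np

A = np.arange(1.0, 101.0)
B = np.arange(101.0, 201.0)

C_loop = np.empty(100)     # conventional computer: one element per pass
for i in range(100):
    C_loop[i] = A[i] + B[i]

C_vec = A + B              # vector computer: C(1:100) = A(1:100) + B(1:100)
print(np.array_equal(C_loop, C_vec))   # True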
VECTOR INSTRUCTIONS

f1: V -> V          V: Vector operand
f2: V -> S          S: Scalar operand
f3: V x V -> V
f4: V x S -> V

Type  Mnemonic  Description (I = 1, ..., n)
f1    VSQR      Vector square root      B(I) <- SQR(A(I))
      VSIN      Vector sine             B(I) <- sin(A(I))
      VCOM      Vector complement       A(I) <- ~A(I)
f2    VSUM      Vector summation        S <- Σ A(I)
      VMAX      Vector maximum          S <- max{A(I)}
f3    VADD      Vector add              C(I) <- A(I) + B(I)
      VMPY      Vector multiply         C(I) <- A(I) * B(I)
      VAND      Vector AND              C(I) <- A(I) . B(I)
      VLAR      Vector larger           C(I) <- max(A(I), B(I))
      VTGE      Vector test >           C(I) <- 0 if A(I) < B(I)
                                        C(I) <- 1 if A(I) > B(I)
f4    SADD      Vector-scalar add       B(I) <- S + A(I)
      SDIV      Vector-scalar divide    B(I) <- A(I) / S
VECTOR INSTRUCTION FORMAT

  Operation | Base address | Base address | Base address | Vector
  code      | source 1     | source 2     | destination  | length

Pipeline for Inner Product

[Diagram: Source A and Source B feed a multiplier pipeline; its
products feed an adder pipeline, which accumulates the sum]
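
A Python sketch of how such a multiplier/adder pipeline pair computes
an inner product (assumption: a 4-segment adder, so products
accumulate into four interleaved partial sums combined at the end):

def inner_product(A, B, k=4):
    partial = [0.0] * k
    for i, (a, b) in enumerate(zip(A, B)):
        partial[i % k] += a * b      # product i enters adder segment i mod k
    return sum(partial)              # fold the k partial sums together

A = [1, 2, 3, 4, 5, 6, 7, 8]
B = [8, 7, 6, 5, 4, 3, 2, 1]
print(inner_product(A, B))           # 120.0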
MULTIPLE MEMORY MODULES AND INTERLEAVING

Multiple Module Memory

[Diagram: modules M0-M3 share a common address bus and data bus; each
module has its own address register (AR), memory array, and data
register (DR)]

Address Interleaving
  Different sets of addresses are assigned to
  different memory modules
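
A Python sketch of low-order interleaving with four modules:
consecutive addresses fall in different modules, so sequential
accesses can overlap.

N_MODULES = 4

def locate(addr):
    # low-order bits select the module, high-order bits select the word
    return addr % N_MODULES, addr // N_MODULES

for addr in range(8):
    m, w = locate(addr)
    print(f"address {addr} -> module M{m}, word {w}")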