Chapter 4 Data-Level Parallelism in Vector, SIMD, and GPU Architectures
Dr. Shadrokh Samavi
Some slides are from the instructor's resources that accompany the 5th and earlier editions. Some slides are from David Patterson, David Culler, and Krste Asanovic of UC Berkeley; Israel Koren of UMass Amherst; Guang R. Gao of the University of Delaware; Milos Prvulovic of Georgia Tech; Zhenyu Ye of Eindhoven University of Technology in the Netherlands; and Nvidia.com. Otherwise, the source of the slide is mentioned at the bottom of the page. Please send an email if a name is missing from the above list.
"If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?"
Seymour Cray, Father of the Supercomputer
(arguing for two powerful vector processors versus many simple processors)
Introduction
SIMD architectures can exploit significant data-level parallelism for:
matrix-oriented scientific computing
media-oriented image and sound processors
SIMD is more energy efficient than MIMD
Only needs to fetch one instruction per data operation
Makes SIMD attractive for personal mobile devices
SIMD allows programmer to continue to think sequentially
SIMD Parallelism
Vector architectures
SIMD extensions (MMX, SSE, AVX)
Graphics Processing Units (GPUs)
For x86 processors:
Expect two additional cores per chip per year
SIMD width to double every four years
Potential speedup from SIMD to be twice that from MIMD!
Potential speedup via parallelism from MIMD, SIMD, and both MIMD and SIMD over time for x86 computers. This figure assumes that two cores per chip for MIMD will be added every two years and the number of operations for SIMD will double every four years.
Vector Architectures
Basic idea:
Read sets of data elements into vector registers
Operate on those registers
Disperse the results back into memory
Registers are controlled by the compiler:
Used to hide memory latency
Leverage memory bandwidth
VMIPS
Example architecture: VMIPS
Loosely based on the Cray-1
Vector registers:
Each register holds a 64-element vector, 64 bits/element
Register file has 16 read ports and 8 write ports
Vector functional units:
Fully pipelined
Data and control hazards are detected
Vector load-store unit:
Fully pipelined
One word per clock cycle after initial latency
Scalar registers:
32 general-purpose registers
32 floating-point registers
Figure 4.2 The basic structure of a vector architecture, VMIPS. This processor has a scalar architecture just like MIPS. There are also eight 64-element vector registers, and all the functional units are vector functional units. There are special vector instructions for both arithmetic and memory accesses. The vector and scalar registers have a significant number of read and write ports to allow multiple simultaneous vector operations. A set of crossbar switches (thick black lines) connects these ports to the inputs and outputs of the vector functional units.
DAXPY: Y = a * X + Y (double-precision a times X plus Y)
VMIPS Instructions
ADDVV.D: add two vectors
ADDVS.D: add vector to a scalar
LV/SV: vector load and vector store from address
Example: DAXPY
Requires 6 instructions vs. almost 600 for MIPS
L.D      F0,a        ; load scalar a
LV       V1,Rx       ; load vector X
MULVS.D  V2,V1,F0    ; vector-scalar multiply
LV       V3,Ry       ; load vector Y
ADDVV.D  V4,V2,V3    ; add
SV       Ry,V4       ; store the result
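For reference, a minimal C sketch of the loop these six instructions implement (the function name and signature are illustrative; n = 64 for VMIPS):

/* DAXPY: Y = a*X + Y, double-precision a times X plus Y */
void daxpy(int n, double a, double *X, double *Y) {
    for (int i = 0; i < n; i++)
        Y[i] = a * X[i] + Y[i];   /* one multiply and one add per element */
}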
Vector Execution Time
Execution time depends on three factors:
Length of operand vectors
Structural hazards
Data dependencies
VMIPS functional units consume one element per clock cycle
Execution time is approximately the vector length
Convoy
Set of vector instructions that could potentially execute together
Chimes
Sequences with read-after-write dependency hazards can be in the same convoy via chaining
Chaining
Allows a vector operation to start as soon as the individual elements of its vector source operand become available
Chime
Unit of time to execute one convoy
m convoys execute in m chimes
For vector length of n, requires m x n clock cycles
Example
LV       V1,Rx       ;load vector X
MULVS.D  V2,V1,F0    ;vector-scalar multiply
LV       V3,Ry       ;load vector Y
ADDVV.D  V4,V2,V3    ;add two vectors
SV       Ry,V4       ;store the sum

Convoys:
1. LV      MULVS.D
2. LV      ADDVV.D
3. SV
3 chimes, 2 FP ops per result, cycles per FLOP = 1.5
For 64-element vectors, requires 64 x 3 = 192 clock cycles
Challenges
Start-up time
Latency of the vector functional unit
Assume the same as the Cray-1:
Floating-point add => 6 clock cycles
Floating-point multiply => 7 clock cycles
Floating-point divide => 20 clock cycles
Vector load => 12 clock cycles
Improvements:
> 1 element per clock cycle
Non-64 wide vectors
IF statements in vector code
Memory system optimizations to support vector processors
Multiple dimensional matrices
Sparse matrices
Programming a vector computer
Multiple Lanes
Element n of vector register A is hardwired to element n of vector register B
Allows for multiple hardware lanes
Vector Length Register
Vector length not known at compile time?
Use the Vector Length Register (VLR)
Use strip mining for vectors over the maximum length:

low = 0;
VL = (n % MVL);                        /* find odd-size piece using modulo op % */
for (j = 0; j <= (n/MVL); j=j+1) {     /* outer loop */
    for (i = low; i < (low+VL); i=i+1) /* runs for length VL */
        Y[i] = a * X[i] + Y[i];        /* main operation */
    low = low + VL;                    /* start of next vector */
    VL = MVL;                          /* reset the length to maximum vector length */
}
Vector Mask Registers
Consider:
for (i = 0; i < 64; i=i+1)
    if (X[i] != 0)
        X[i] = X[i] - Y[i];
Use vector mask register to disable elements:

LV       V1,Rx      ;load vector X into V1
LV       V2,Ry      ;load vector Y
L.D      F0,#0      ;load FP zero into F0
SNEVS.D  V1,F0      ;sets VM(i) to 1 if V1(i)!=F0
SUBVV.D  V1,V1,V2   ;subtract under vector mask
SV       Rx,V1      ;store the result in X
GFLOPS rate decreases!
Memory Banks
Memory system must be designed to support high bandwidth for vector loads and stores
Spread accesses across multiple banks
Control bank addresses independently
Load or store non-sequential words
Support multiple vector processors sharing the same memory
Example:
32 processors, each generating 4 loads and 2 stores per cycle
Processor cycle time is 2.167 ns; SRAM cycle time is 15 ns
How many memory banks are needed?
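One way to work out the answer (a hedged sketch; the ceiling rounding is an assumption of this sketch):

memory references per processor cycle = 32 x (4 + 2) = 192
SRAM busy time in processor cycles = ceil(15 / 2.167) = 7
banks needed to avoid stalling = 192 x 7 = 1344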
Stride
Consider:
for (i = 0; i < 100; i=i+1)
    for (j = 0; j < 100; j=j+1) {
        A[i][j] = 0.0;
        for (k = 0; k < 100; k=k+1)
            A[i][j] = A[i][j] + B[i][k] * D[k][j];
    }
Must vectorize multiplication of rows of B with columns of D
Use non-unit stride
Bank conflict (stall) occurs when the same bank is hit faster than the bank busy time:
#banks / LCM(stride,#banks) < bank busy time
Example: Suppose we have 8 memory banks with a bank busy time of 6 clocks and a total memory latency of 12 cycles. How long will it take to complete a 64-element vector load with a stride of 1? With a stride of 32?
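A worked answer in the spirit of the textbook example (hedged):

Stride 1: consecutive elements fall in different banks, so the load takes 12 + 64 = 76 clock cycles, about 1.2 clocks per element.
Stride 32: with 8 banks, every access maps to the same bank, so each element after the first waits out the 6-clock bank busy time: 12 + 1 + 6 x 63 = 391 clock cycles, about 6.1 clocks per element.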
Scatter-Gather
Consider:
for (i = 0; i < n; i=i+1)
    A[K[i]] = A[K[i]] + C[M[i]];
Use index vector:

LV       Vk, Rk        ;load K
LVI      Va, (Ra+Vk)   ;load A[K[]]
LV       Vm, Rm        ;load M
LVI      Vc, (Rc+Vm)   ;load C[M[]]
ADDVV.D  Va, Va, Vc    ;add them
SVI      (Ra+Vk), Va   ;store A[K[]]
Programming Vec. Architectures
Compilers can provide feedback to programmers
Programmers can provide hints to the compiler
SIMD Extensions
Media applications operate on data types narrower than the native word size
Example: disconnect carry chains to partition adder
Limitations, compared to vector instructions:
Number of data operands encoded into op code
No sophisticated addressing modes (strided, scatter-gather)
No mask registers
SIMD Implementations
Implementations:
Intel MMX (1996)
Eight 8-bit integer ops or four 16-bit integer ops
Streaming SIMD Extensions (SSE) (1999)
Eight 16-bit integer ops
Four 32-bit integer/fp ops or two 64-bit integer/fp ops
Advanced Vector Extensions (2010)
Four 64-bit integer/fp ops
Operands must be consecutive and aligned memory locations
Example SIMD Code
Example DAXPY:
        L.D     F0,a          ;load scalar a
        MOV     F1, F0        ;copy a into F1 for SIMD MUL
        MOV     F2, F0        ;copy a into F2 for SIMD MUL
        MOV     F3, F0        ;copy a into F3 for SIMD MUL
        DADDIU  R4,Rx,#512    ;last address to load
Loop:   L.4D    F4,0[Rx]      ;load X[i], X[i+1], X[i+2], X[i+3]
        MUL.4D  F4,F4,F0      ;a*X[i], a*X[i+1], a*X[i+2], a*X[i+3]
        L.4D    F8,0[Ry]      ;load Y[i], Y[i+1], Y[i+2], Y[i+3]
        ADD.4D  F8,F8,F4      ;a*X[i]+Y[i], ..., a*X[i+3]+Y[i+3]
        S.4D    0[Ry],F8      ;store into Y[i], Y[i+1], Y[i+2], Y[i+3]
        DADDIU  Rx,Rx,#32     ;increment index to X
        DADDIU  Ry,Ry,#32     ;increment index to Y
        DSUBU   R20,R4,Rx     ;compute bound
        BNEZ    R20,Loop      ;check if done
Roofline Performance Model
Basic idea:
Plot peak floating-point throughput as a function of arithmetic intensity
Ties together floating-point performance and memory performance for a target machine
Arithmetic intensity
Floating-point operations per byte read
Arithmetic (operational) intensity
Measured in FLOPs per byte
The memory bandwidth an application needs can be derived from it:
required memory bandwidth = (achievable peak floating-point performance) / (operational, i.e. arithmetic, intensity)
Achievable computing performance
Example: Graphical presentation of the roofline model
Roofline model for two generations of AMD Opterons (X2 and X4)
Examples
Attainable GFLOPs/sec = Min(Peak Memory Bandwidth x Arithmetic Intensity, Peak Floating-Point Performance)
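As a minimal sketch (the function name and units are illustrative), the roofline bound can be computed directly from the two machine parameters:

/* Roofline: attainable performance for a given arithmetic intensity */
double attainable_gflops(double peak_gflops,   /* peak floating-point perf. (GFLOP/s) */
                         double peak_bw_gbs,   /* peak memory bandwidth (GB/s) */
                         double intensity)     /* arithmetic intensity (FLOP/byte) */
{
    double memory_bound = peak_bw_gbs * intensity;  /* GB/s * FLOP/byte = GFLOP/s */
    return (memory_bound < peak_gflops) ? memory_bound : peak_gflops;
}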
Graphical Processing Units
Given the hardware invested to do graphics well, how can we supplement it to improve the performance of a wider range of applications?
Basic idea:
Heterogeneous execution model
CPU is the host, GPU is the device
Develop a C-like programming language for the GPU
Unify all forms of GPU parallelism as the CUDA thread
Programming model is "Single Instruction, Multiple Thread" (SIMT)
System Architecture
GPU Applications
Render triangles. NVIDIA GTX480 can render 1.6 billion triangles per second!
General-Purpose Computing
ref: http://www.nvidia.com/object/tesla_computing_solutions.html
Single-Chip GPU vs. Fastest Supercomputers
ref: http://www.llnl.gov/str/JanFeb05/Seager.html
Top500 Super Computer in June 2010
GPU Will Top the List in Nov 2010
The Gap Between CPU and GPU
ref: Tesla GPU Computing Brochure
Threads and Blocks
A thread is associated with each data element
Threads are organized into blocks
Blocks are organized into a grid
GPU hardware handles thread management, not applications or OS
CUDA Programming
Massive number (>10,000) of lightweight threads.
NVIDIA GPU Architecture
Similarities to vector machines:
Works well with data-level parallel problems
Scatter-gather transfers
Mask registers
Large register files
Differences:
No scalar processor
Uses multithreading to hide memory latency
Has many functional units, as opposed to a few deeply pipelined units like a vector processor
Similarities and differences between multicore with Multimedia SIMD extensions and recent GPUs.
Why accelerator technology
Felipe A. Cruz, University of Bristol, Bristol, United Kingdom
Example
Multiply two vectors of length 8192
Code that works over all elements is the grid
Thread blocks break this down into manageable sizes: 512 threads per block
SIMD instruction executes 32 elements at a time
Thus grid size = 16 blocks
Block is analogous to a strip-mined vector loop with vector length of 32
Block is assigned to a multithreaded SIMD processor by the thread block scheduler
Current-generation GPUs (Fermi) have 16 multithreaded SIMD processors
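Checking the arithmetic (a hedged restatement of the numbers above):

grid size = 8192 elements / 512 threads per block = 16 thread blocks
SIMD threads (warps) per block = 512 threads / 32 elements per SIMD instruction = 16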
NVIDIA Fermi, 512 Processing Elements (PEs)
Terminology
Threads of SIMD instructions
Each has its own PC
Thread scheduler uses scoreboard to dispatch
No data dependencies between threads!
Keeps track of up to 48 threads of SIMD instructions
Hides memory latency
Thread block scheduler schedules blocks to SIMD processors
Within each SIMD processor:
32 SIMD lanes
Wide and shallow compared to vector processors
Example
NVIDIA GPU has 32,768 (32-bit) registers
Divided into lanes
Each SIMD thread is limited to 64 registers
A SIMD thread has up to:
64 vector registers of 32 32-bit elements
32 vector registers of 32 64-bit elements
Fermi has 16 physical SIMD lanes, each containing 2048 registers
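A quick consistency check of these numbers (hedged arithmetic, not from the slides):

32,768 registers / 16 lanes = 2048 registers per lane
64 vector registers x 32 elements = 2048 32-bit values per SIMD thread, so 32,768 / 2048 = 16 SIMD threads can hold a maximal register allocation at once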
CUDA
GPU (device) functions are declared __device__ or __global__
System processor (host) functions are declared __host__
Variables declared within __device__ or __global__ functions are allocated to the GPU memory (global memory)
A kernel launch, name<<<dimGrid, dimBlock>>>(...), specifies:
the dimensions of the code (in blocks)
the dimensions of a block (in threads)
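A minimal CUDA sketch of DAXPY using these conventions (the block size of 256 and the variable names are illustrative choices of this sketch):

// Kernel: each CUDA thread handles one element of Y = a*X + Y
__global__ void daxpy(int n, double a, double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    if (i < n)                                      // guard the partial last block
        y[i] = a * x[i] + y[i];
}
// Host-side launch (x and y assumed already allocated in GPU memory):
//   int nblocks = (n + 255) / 256;                 // dimensions of the code, in blocks
//   daxpy<<<nblocks, 256>>>(n, 2.0, x, y);         // 256 threads per block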
NVIDIA Instruction Set Arch.
ISA is an abstraction of the hardware instruction set
Parallel Thread Execution (PTX)
Uses virtual registers
Translation to machine code is performed in software
Example:
shl.s32        R8, blockIdx, 9    ; Thread Block ID * Block size (512 or 2^9)
add.s32        R8, R8, threadIdx  ; R8 = i = my CUDA thread ID
ld.global.f64  RD0, [X+R8]        ; RD0 = X[i]
ld.global.f64  RD2, [Y+R8]        ; RD2 = Y[i]
mul.f64        RD0, RD0, RD4      ; Product in RD0 = RD0 * RD4 (scalar a)
add.f64        RD0, RD0, RD2      ; Sum in RD0 = RD0 + RD2 (Y[i])
st.global.f64  [Y+R8], RD0        ; Y[i] = sum (X[i]*a + Y[i])
Conditional Branching
Like vector architectures, GPU branch hardware uses internal masks
It also uses:
Branch synchronization stack
Entries consist of masks for each SIMD lane, i.e. which threads commit their results (all threads execute)
Instruction markers to manage when a branch diverges into multiple execution paths (push on a divergent branch) and when paths converge (act as barriers, pop the stack)
Per-thread-lane 1-bit predicate register, specified by the programmer
Example
if (X[i] != 0)
    X[i] = X[i] - Y[i];
else
    X[i] = Z[i];

        ld.global.f64  RD0, [X+R8]    ; RD0 = X[i]
        setp.neq.s32   P1, RD0, #0    ; P1 is predicate register 1
        @!P1, bra      ELSE1, *Push   ; Push old mask, set new mask bits
                                      ; if P1 false, go to ELSE1
        ld.global.f64  RD2, [Y+R8]    ; RD2 = Y[i]
        sub.f64        RD0, RD0, RD2  ; Difference in RD0
        st.global.f64  [X+R8], RD0    ; X[i] = RD0
        @P1, bra       ENDIF1, *Comp  ; complement mask bits
                                      ; if P1 true, go to ENDIF1
ELSE1:  ld.global.f64  RD0, [Z+R8]    ; RD0 = Z[i]
        st.global.f64  [X+R8], RD0    ; X[i] = RD0
ENDIF1: <next instruction>, *Pop      ; pop to restore old mask
NVIDIA GPU Memory Structures
Each SIMD lane has a private section of off-chip DRAM ("private memory")
Contains the stack frame, spilling registers, and private variables
Each multithreaded SIMD processor also has local memory
Shared by SIMD lanes / threads within a block
Memory shared by SIMD processors is GPU Memory
Host can read and write GPU memory
Fermi Architecture Innovations
Each SIMD processor has
Two SIMD thread schedulers, two instruction dispatch units
16 SIMD lanes (SIMD width = 32, chime = 2 cycles), 16 load-store units, 4 special function units
Thus, two threads of SIMD instructions are scheduled every two clock cycles
Fast double precision
Caches for GPU memory
64-bit addressing and unified address space
Error correcting codes
Faster context switching
Faster atomic instructions
Fermi Multithreaded SIMD Proc.
Loop-Level Parallelism
Focuses on determining whether data accesses in later iterations are dependent on data values produced in earlier iterations
Loop-carried dependence
Example 1:
for (i=999; i>=0; i=i-1) x[i] = x[i] + s;
No loop-carried dependence
Loop-Level Parallelism
Example 2:
for (i=0; i<100; i=i+1) {
    A[i+1] = A[i] + C[i];    /* S1 */
    B[i+1] = B[i] + A[i+1];  /* S2 */
}

S1 and S2 each use a value computed by themselves in the previous iteration (loop-carried dependence)
S2 also uses the value A[i+1] computed by S1 in the same iteration (not loop-carried)
Loop-Level Parallelism
Example 3:
for (i=0; i<100; i=i+1) {
    A[i] = A[i] + B[i];      /* S1 */
    B[i+1] = C[i] + D[i];    /* S2 */
}

S1 uses a value computed by S2 in the previous iteration, but the dependence is not circular, so the loop is parallel
Transform to:

A[0] = A[0] + B[0];
for (i=0; i<99; i=i+1) {
    B[i+1] = C[i] + D[i];
    A[i+1] = A[i+1] + B[i+1];
}
B[100] = C[99] + D[99];
Loop-Level Parallelism
Example 4:
for (i=0; i<100; i=i+1) {
    A[i] = B[i] + C[i];
    D[i] = A[i] * E[i];
}
The dependence of the second statement on the first is within the same iteration, so there is no loop-carried dependence.

Example 5:
for (i=1; i<100; i=i+1) {
    Y[i] = Y[i-1] + Y[i];
}
This is a loop-carried dependence in the form of a recurrence: iteration i needs the result of iteration i-1.
Finding dependencies
Assume indices are affine:
a x i + b (i is the loop index)
Assume a store to a x i + b and a later load from c x i + d, where i runs from m to n.
A dependence exists if:
There are two iteration indices j and k, both within the loop bounds: m <= j <= n and m <= k <= n
The store in iteration j and the load in iteration k touch the same location: a x j + b = c x k + d
Finding dependencies
Generally cannot determine at compile time
Test for absence of a dependence:
GCD test:
If a dependency exists, GCD(c,a) must evenly divide (d-b)
Example:
for (i=0; i<100; i=i+1) {
    X[2*i+3] = X[2*i] * 5.0;
}
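Applying the GCD test to this loop (a worked sketch):

Store index 2*i + 3: a = 2, b = 3
Load index 2*i: c = 2, d = 0
GCD(c, a) = 2 and d - b = -3; 2 does not divide -3, so no dependence is possible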
Finding dependencies
Example 2:
for (i=0; i<100; i=i+1) {
    Y[i] = X[i] / c;   /* S1 */
    X[i] = X[i] + c;   /* S2 */
    Z[i] = Y[i] + c;   /* S3 */
    Y[i] = c - Y[i];   /* S4 */
}
Watch for antidependencies and output dependencies
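One possible renaming that removes the false (anti- and output) dependences while keeping the true dependence through T[i]; the arrays T and X1 are introduced here for illustration:

for (i=0; i<100; i=i+1) {
    T[i] = X[i] / c;    /* Y renamed to T: removes the output dependence with S4 */
    X1[i] = X[i] + c;   /* X renamed to X1: removes the antidependence on X[i] */
    Z[i] = T[i] + c;    /* reads T: removes the antidependence on Y[i] */
    Y[i] = c - T[i];
}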
Reductions
Reduction operation:
for (i=9999; i>=0; i=i-1)
    sum = sum + x[i] * y[i];

Transform to:
for (i=9999; i>=0; i=i-1)
    sum[i] = x[i] * y[i];
for (i=9999; i>=0; i=i-1)
    finalsum = finalsum + sum[i];

Do on p processors:
for (i=999; i>=0; i=i-1)
    finalsum[p] = finalsum[p] + sum[i+1000*p];

Note: assumes associativity!
Key features of the GPUs for mobile clients and servers. The Tegra 2 is the reference platform for Android OS and is found in the LG Optimus 2X cell phone.
Vector Program
Vector width is exposed to programmers.

Scalar program:
float A[4][8];
do-all(i=0;i<4;i++){
    do-all(j=0;j<8;j++){
        A[i][j]++;
    }
}

Vector program (vector width of 8):
float A[4][8];
do-all(i=0;i<4;i++){
    movups xmm0, [ &A[i][0] ]
    incps xmm0
    movups [ &A[i][0] ], xmm0
}
CUDA Program
CUDA program expresses data-level parallelism (DLP) in terms of thread-level parallelism (TLP). Hardware converts TLP into DLP at run time.

Scalar program:
float A[4][8];
do-all(i=0;i<4;i++){
    do-all(j=0;j<8;j++){
        A[i][j]++;
    }
}

CUDA program:
float A[4][8];
kernelF<<<(4,1),(8,1)>>>(A);
__device__ kernelF(A){
    i = blockIdx.x;
    j = threadIdx.x;
    A[i][j]++;
}
Two Levels of Thread Hierarchy
kernelF<<<(4,1),(8,1)>>>(A);
__device__ kernelF(A){
    i = blockIdx.x;
    j = threadIdx.x;
    A[i][j]++;
}
Multi-dimension Thread and Block ID
Both the grid and the thread block can have a two-dimensional index.
kernelF<<<(2,2),(4,2)>>>(A);
__device__ kernelF(A){
    i = gridDim.x * blockIdx.y + blockIdx.x;    // linearized 2-D block index
    j = blockDim.x * threadIdx.y + threadIdx.x; // linearized 2-D thread index
    A[i][j]++;
}
Scheduling Thread Blocks on SM
Example: Scheduling 4 thread blocks on 3 SMs.
Executing Thread Block on SM
kernelF<<<(2,2),(4,2)>>>(A);
__device__ kernelF(A){
    i = gridDim.x * blockIdx.y + blockIdx.x;    // linearized 2-D block index
    j = blockDim.x * threadIdx.y + threadIdx.x; // linearized 2-D thread index
    A[i][j]++;
}

Note: the number of Processing Elements (PEs) is transparent to the programmer.
Executed on machine with width of 4:
Executed on machine with width of 8:
Shared Memory and Synchronization
Example: average filter with 3x3 window
kernelF<<<(1,1),(16,16)>>>(A);
__device__ kernelF(A){
    __shared__ smem[16][16];  // allocate smem
    i = threadIdx.y;
    j = threadIdx.x;
    smem[i][j] = A[i][j];
    __sync();
    A[i][j] = ( smem[i-1][j-1] + smem[i-1][j] ... + smem[i+1][j+1] ) / 9;
}
3x3 window on image
data in DRAM
Shared Memory and Synchronization
Example: average filter over 3x3 window
kernelF<<<(1,1),(16,16)>>>(A);
__device__ kernelF(A){
    __shared__ smem[16][16];
    i = threadIdx.y;
    j = threadIdx.x;
    smem[i][j] = A[i][j];  // load to smem
    __sync();              // threads wait at barrier
    A[i][j] = ( smem[i-1][j-1] + smem[i-1][j] ... + smem[i+1][j+1] ) / 9;
}
3x3 window on image
Stage data in shared mem
Shared Memory and Synchronization
Example: average filter over 3x3 window
kernelF<<<(1,1),(16,16)>>>(A);
__device__ kernelF(A){
    __shared__ smem[16][16];
    i = threadIdx.y;
    j = threadIdx.x;
    smem[i][j] = A[i][j];
    __sync();              // every thread is ready
    A[i][j] = ( smem[i-1][j-1] + smem[i-1][j] ... + smem[i+1][j+1] ) / 9;
}
3x3 window on image
all threads finish the load
Shared Memory and Synchronization
Example: average filter over 3x3 window
kernelF<<<(1,1),(16,16)>>>(A);
__device__ kernelF(A){
    __shared__ smem[16][16];
    i = threadIdx.y;
    j = threadIdx.x;
    smem[i][j] = A[i][j];
    __sync();
    A[i][j] = ( smem[i-1][j-1] + smem[i-1][j] ... + smem[i+1][j+1] ) / 9;
}
3x3 window on image
Start computation
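For completeness, a self-contained CUDA sketch of the same 3x3 average filter with explicit types, __syncthreads(), and border clamping; the kernel name avg3x3, the single 16x16-block launch, and the row stride W are assumptions of this sketch, not part of the original slides.

__global__ void avg3x3(const float *A, float *B, int W) {
    __shared__ float smem[16][16];            // one 16x16 tile staged in shared memory
    int i = threadIdx.y, j = threadIdx.x;     // position inside the tile
    smem[i][j] = A[i * W + j];                // load tile element from global memory
    __syncthreads();                          // wait until the whole tile is loaded
    float sum = 0.0f;
    for (int di = -1; di <= 1; di++)
        for (int dj = -1; dj <= 1; dj++) {
            int y = min(max(i + di, 0), 15);  // clamp at the tile borders
            int x = min(max(j + dj, 0), 15);
            sum += smem[y][x];
        }
    B[i * W + j] = sum / 9.0f;                // write the averaged pixel
}
// Launch matching the slides: avg3x3<<<dim3(1,1), dim3(16,16)>>>(dA, dB, 16);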
Example of Implementation
Note: NVIDIA may use a more complicated implementation.
Example
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Assume warp 0 and warp 1 are scheduled for execution.
Read Src Op
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Read source operands: r1 for warp 0, r4 for warp 1.
Buffer Src Op
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Push ops to the operand collector: r1 for warp 0, r4 for warp 1.
Read Src Op
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Read source operands: r2 for warp 0, r5 for warp 1.
Buffer Src Op
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Push ops to the operand collector: r2 for warp 0, r5 for warp 1.
Execute
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Compute the first 16 threads in the warp.
Execute
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Compute the last 16 threads in the warp.
Write back
Program
Address:  Inst
0x0004:   add r0, r1, r2
0x0008:   sub r3, r4, r5

Write back: r0 for warp 0, r3 for warp 1.