Pipelining
Can We Do Better than Microprogrammed Designs?
What limitations do you see with the multi-cycle design?
Limited concurrency
Some hardware resources are idle during different phases of the instruction processing cycle:
- "Fetch" logic is idle when an instruction is being "decoded" or "executed"
- Most of the datapath is idle when a memory access is happening
Can We Use the Idle Hardware to Improve Concurrency?
Goal: Higher concurrency and throughput (more "work" completed in one cycle)

Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
- E.g., when an instruction is being decoded, fetch the next instruction
- E.g., when an instruction is being executed, decode another instruction
- E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
- E.g., when an instruction is writing its result into the register file, access data memory for the next instruction
Pipelining: Basic Idea
More systematically:
- Pipeline the execution of multiple instructions
- Analogy: "assembly line" processing of instructions

Idea:
- Divide the instruction processing cycle into distinct "stages" of processing
- Ensure there are enough hardware resources to process one instruction in each stage
- Process a different instruction in each stage
- Instructions consecutive in program order are processed in consecutive stages

Benefit: Increases instruction processing throughput (1/CPI)
Downside: Start thinking about this...
Example: Execution of Four Independent ADDs
Multi-cycle: 4 cycles per instruction

  F D E W
          F D E W
                  F D E W
                          F D E W
  --------------------------------> Time

Pipelined: 4 cycles per 4 instructions (steady state)

  F D E W
    F D E W
      F D E W
        F D E W
  ------------> Time

Is life always this beautiful?
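The cycle counts above can be sanity-checked with a short sketch (hypothetical helper names; a simplified model that ignores dependences and stalls):

```python
def multicycle_cycles(n_insts, n_stages=4):
    # Each instruction occupies the datapath for all stages
    # before the next one starts.
    return n_insts * n_stages

def pipelined_cycles(n_insts, n_stages=4):
    # The first instruction takes n_stages cycles to fill the pipe;
    # every later instruction completes one cycle after the previous one.
    return n_stages + (n_insts - 1)

print(multicycle_cycles(4))  # 16 cycles for 4 ADDs
print(pipelined_cycles(4))   # 7 cycles total; 1 instruction/cycle in steady state
```

In steady state the pipeline completes one instruction per cycle, which is where the "4 cycles per 4 instructions" figure comes from.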
The Laundry Analogy

[Figure: loads A-D processed sequentially from 6 PM to 2 AM; each load completes all four steps before the next load starts]

Steps per load:
1. "place one dirty load of clothes in the washer"
2. "when the washer is finished, place the wet load in the dryer"
3. "when the dryer is finished, take out the dry load and fold"
4. "when folding is finished, ask your roommate (??) to put the clothes away"

- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry

[Figure: sequential execution (6 PM to 2 AM) vs. pipelined execution of loads A-D, where each load starts as soon as the washer is free]

- 4 loads of laundry in parallel
- no additional resources
- throughput increased by 4
- latency per load is the same

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry: In Practice

[Figure: pipelined laundry where the dryer takes longer than the washer; loads queue up waiting for the dryer]

The slowest step decides throughput.

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry: In Practice (cont.)

[Figure: same scenario with a second dryer added; loads alternate between the two dryers]

Throughput restored (2 loads per hour) using 2 dryers.

Pipelining is all about overlapping latencies.

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
An Ideal Pipeline

Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)

- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)

Fitting examples: automobile assembly line, doing laundry
What about the instruction processing "cycle"?
Ideal Pipelining

  Combinational logic (F,D,E,M,W), delay T psec:          BW = ~(1/T)
  2 stages, T/2 ps each (F,D,E | M,W):                    BW = ~(2/T)
  3 stages, T/3 ps each (F,D | E,M | M,W):                BW = ~(3/T)
More Realistic Pipeline: Throughput

Nonpipelined version with delay T:
  BW = 1/(T+S)   where S = latch delay

k-stage pipelined version:
  BW_k-stage = 1/(T/k + S)
  BW_max = 1/(1 gate delay + S)
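The throughput formulas above can be evaluated directly; a quick sketch (example numbers, not from the slides) shows that bandwidth grows with k but saturates at 1/S once T/k shrinks below the latch delay:

```python
def bw_nonpipelined(T, S):
    # BW = 1/(T+S), where S is the latch delay
    return 1.0 / (T + S)

def bw_k_stage(T, S, k):
    # BW_k-stage = 1/(T/k + S)
    return 1.0 / (T / k + S)

T, S = 1000.0, 50.0   # picoseconds (illustrative values)
for k in (1, 2, 5, 10):
    print(k, bw_k_stage(T, S, k))
# Doubling k does not double BW: the latch delay S is not divided by k.
```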
More Realistic Pipeline: Cost

Nonpipelined version with combinational cost G:
  Cost = G + L   where L = latch cost

k-stage pipelined version (G/k gates per stage):
  Cost_k-stage = G + L*k
Pipelining Instruction Processing
Remember: The Instruction Processing Cycle

1. Instruction fetch (IF)
2. Instruction decode and register operand fetch (ID/RF)
3. Execute/Evaluate memory address (EX/AG)
4. Memory operand fetch (MEM)
5. Store/writeback result (WB)
Remember the Single-Cycle Uarch

[Figure: single-cycle MIPS datapath with PC, instruction memory, register file, ALU, data memory, sign extend, and the control signals RegDst, Jump, Branch, MemRead, MemtoReg, ALUOp, MemWrite, ALUSrc, RegWrite, PCSrc1=Jump, PCSrc2=Br Taken]

Delay of the whole datapath: T    BW = ~(1/T)

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Dividing Into Stages

[Figure: single-cycle datapath divided into five stages]

  IF:  Instruction fetch                        200ps
  ID:  Instruction decode / register file read  100ps
  EX:  Execute / address calculation            200ps
  MEM: Memory access                            200ps
  WB:  Write back (RF write)                    100ps

(ignore the write-back path into ID for now)

Is this the correct partitioning?
- Not balanced (balancing is difficult)
- Why not 4 or 6 stages? Why not different boundaries?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Instruction Pipeline Throughput

[Figure: execution of three lw instructions, nonpipelined vs. pipelined]

Nonpipelined: each lw takes 800ps (Instruction fetch | Reg | ALU | Data access | Reg), so a new instruction starts every 800ps:

  lw $1, 100($0)    starts at 0ps
  lw $2, 200($0)    starts at 800ps
  lw $3, 300($0)    starts at 1600ps

Pipelined: with a 200ps clock cycle, a new instruction starts every 200ps:

  lw $1, 100($0)    starts at 0ps
  lw $2, 200($0)    starts at 200ps
  lw $3, 300($0)    starts at 400ps

5-stage speedup is 4, not 5 as predicted by the ideal model. Why? Raw latency has been increased for every instruction (1000ps vs. 800ps): the downside of not balancing the stages.
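Using the stage latencies from the slides (200/100/200/200/100 ps), the "speedup of 4, not 5" falls out of a few lines of arithmetic:

```python
stage_latencies = [200, 100, 200, 200, 100]   # IF, ID, EX, MEM, WB in ps

nonpipelined_period = sum(stage_latencies)    # one instruction every 800ps
clock = max(stage_latencies)                  # slowest stage sets the cycle: 200ps

speedup = nonpipelined_period / clock
print(speedup)                    # 4.0, not 5: stages are not balanced
print(len(stage_latencies) * clock)  # 1000ps latency per instruction, up from 800ps
```

With perfectly balanced stages (160ps each), the speedup would be the ideal factor of 5.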
Enabling Pipelined Processing: Pipeline Registers

[Figure: five-stage datapath with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB between the stages (IF: instruction fetch; ID: instruction decode/register file read; EX: execute/address calculation; MEM: memory access; WB: write back). The registers hold values such as PCF, IRD, PCD+4, PCE+4, AE, BE, ImmE, nPCM, AoutM, BM, MDRW, AoutW]

No resource is used by more than 1 stage!

Each stage takes T/k ps.

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelined Operation Example

[Figure: an lw instruction flowing through the five stages: Instruction fetch, Instruction decode, Execution, Memory, Write back]

All instruction classes must follow the same path and timing through the pipeline stages. Any performance impact?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelined Operation Example

[Figure: clock cycles 1-6 of "lw $10, 20($1)" followed by "sub $11, $2, $3" flowing through the pipeline; while lw occupies a later stage, sub occupies the stage behind it]

Is life always this beautiful?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Illustrating Pipeline Operation: Operation View

         t0   t1   t2   t3   t4   t5   ...
  Inst0  IF   ID   EX   MEM  WB
  Inst1       IF   ID   EX   MEM  WB
  Inst2            IF   ID   EX   MEM  ...
  Inst3                 IF   ID   EX   ...
  Inst4                      IF   ID   ...
Illustrating Pipeline Operation: Resource View

        t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
  IF    I0   I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
  ID         I0   I1   I2   I3   I4   I5   I6   I7   I8   I9
  EX              I0   I1   I2   I3   I4   I5   I6   I7   I8
  MEM                  I0   I1   I2   I3   I4   I5   I6   I7
  WB                        I0   I1   I2   I3   I4   I5   I6
Control Points in a Pipeline

[Figure: pipelined datapath with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB and the control points PCSrc, Branch, RegWrite, MemWrite, MemRead, MemtoReg, ALUSrc, ALUOp, RegDst]

Identical set of control points as the single-cycle datapath!!

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Control Signals in a Pipeline
For a given instruction, the control signals are the same as in the single-cycle design, but they are required in different cycles, depending on the stage.

⇒ Option 1: Decode once using the same logic as single-cycle and buffer the control signals in the pipeline registers until they are consumed

[Figure: Control block in ID produces the WB, M, and EX signal groups, which are carried through the ID/EX, EX/MEM, and MEM/WB registers]

⇒ Option 2: Carry the relevant "instruction word/field" down the pipeline and decode locally within each stage, or in a previous stage

Which one is better?
Pipelined Control Signals

[Figure: pipelined datapath with buffered control signals; the ID/EX, EX/MEM, and MEM/WB registers carry the EX (ALUSrc, ALUOp, RegDst), M (Branch, MemWrite, MemRead, PCSrc), and WB (RegWrite, MemtoReg) signal groups to the stages that consume them]

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
An Ideal Pipeline

Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)

- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)

Fitting examples: automobile assembly line, doing laundry
What about the instruction processing "cycle"?
Instruction Pipeline: Not An Ideal Pipeline
Identical operations ... NOT!
⇒ different instructions do not need all stages
- Forcing different instructions to go through the same multi-function pipe
⇒ external fragmentation (some pipe stages idle for some instructions)

Uniform suboperations ... NOT!
⇒ difficult to balance the different pipeline stages
- Not all pipeline stages do the same amount of work
⇒ internal fragmentation (some pipe stages are too fast but all take the same clock cycle time)

Independent operations ... NOT!
⇒ instructions are not independent of each other
- Need to detect and resolve inter-instruction dependences to ensure the pipeline operates correctly
⇒ the pipeline is not always moving (it stalls)
Issues in Pipeline Design
- Balancing work in pipeline stages
  - How many stages, and what is done in each stage
- Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow
  - Handling dependences
    - Data
    - Control
  - Handling resource contention
  - Handling long-latency (multi-cycle) operations
- Handling exceptions, interrupts
- Advanced: Improving pipeline throughput
  - Minimizing stalls
Causes of Pipeline Stalls
- Resource contention
- Dependences (between instructions)
  - Data
  - Control
- Long-latency (multi-cycle) operations
Dependences and Their Types
Also called "dependency" or, less desirably, "hazard"

Dependences dictate ordering requirements between instructions

Two types:
- Data dependence
- Control dependence

Resource contention is sometimes called resource dependence
- However, it is not fundamental to (dictated by) program semantics, so we will treat it separately
Handling Resource Contention
Happens when instructions in two pipeline stages need the same resource

Solution 1: Eliminate the cause of contention
- Duplicate the resource or increase its throughput
  - E.g., use separate instruction and data memories (caches)
  - E.g., use multiple ports for memory structures

Solution 2: Detect the resource contention and stall one of the contending stages
- Which stage do you stall?
- Example: What if you had a single read and write port for the register file?
Data Dependences
Types of data dependences:
- Flow dependence (true data dependence: read after write)
- Output dependence (write after write)
- Anti dependence (write after read)

Which ones cause stalls in a pipelined machine?
- For all of them, we need to ensure the semantics of the program are correct
- Flow dependences always need to be obeyed because they constitute true dependence on a value
- Anti and output dependences exist due to the limited number of architectural registers
  - They are dependences on a name, not a value
  - We will later see what we can do about them
Data Dependence Types

Flow dependence (Read-after-Write, RAW):
  r3 ← r1 op r2
  r5 ← r3 op r4

Anti dependence (Write-after-Read, WAR):
  r3 ← r1 op r2
  r1 ← r4 op r5

Output dependence (Write-after-Write, WAW):
  r3 ← r1 op r2
  r5 ← r3 op r4
  r3 ← r6 op r7
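The three dependence types can be mechanically detected from the destination and source registers of an instruction pair. A small sketch (not from the slides; instructions represented as a (dest, sources) pair) illustrates this:

```python
def dependences(older, younger):
    """Classify register dependences from the older to the younger instruction."""
    d_old, s_old = older
    d_new, s_new = younger
    found = []
    if d_old is not None and d_old in s_new:
        found.append("RAW")   # flow: younger reads the older's result
    if d_new is not None and d_new in s_old:
        found.append("WAR")   # anti: younger overwrites a source of the older
    if d_old is not None and d_old == d_new:
        found.append("WAW")   # output: both write the same register
    return found

# r3 <- r1 op r2 followed by r5 <- r3 op r4: flow dependence on r3
print(dependences(("r3", ["r1", "r2"]), ("r5", ["r3", "r4"])))  # ['RAW']
# r3 <- r1 op r2 followed by r1 <- r4 op r5: anti dependence on r1
print(dependences(("r3", ["r1", "r2"]), ("r1", ["r4", "r5"])))  # ['WAR']
```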
Pipelined Operation Example (Revisited)

[Figure: clock cycles 1-6 of "lw $10, 20($1)" followed by "sub $11, $2, $3" flowing through the pipeline]

What if the SUB were dependent on LW?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Data Dependence Handling
Readings for Next Few Lectures
- P&H Chapter 4.9-4.11
- Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995
  - More advanced pipelining
  - Interrupt and exception handling
  - Out-of-order and superscalar execution concepts
How to Handle Data Dependences
Anti and output dependences are easier to handle
- Write to the destination in only one stage and in program order
- No problem unless writes are reordered

Flow dependences are more interesting

Five fundamental ways of handling flow dependences:
1. Detect and wait until the value is available in the register file
2. Detect and forward/bypass the data to the dependent instruction
3. Detect and eliminate the dependence at the software level
   - No need for the hardware to detect the dependence
4. Predict the needed value(s), execute "speculatively", and verify
   - Dependent instruction can progress until it needs the value
   - E.g., loading from an array initialized to 0 [hardware prediction table]
5. Do something else (fine-grained multithreading)
   - Every cycle, fetch from a different thread [multiple PCs, register files...]
   - Fetch stage has multiple PCs and a MUX; no two instructions of the same thread are in the pipeline concurrently
   - No need to detect the dependence
Interlocking
Detection of dependences between instructions in a pipelined processor to guarantee correct execution

Software based interlocking vs. hardware based interlocking

MIPS acronym? (Microprocessor without Interlocked Pipeline Stages)
Approaches to Dependence Detection (I)
Scoreboarding
- Each register in the register file has a Valid bit associated with it
- An instruction that is writing to a register resets the register's Valid bit
- An instruction in the Decode stage checks if all its source and destination registers are Valid
  - Yes: No need to stall... no dependence
  - No: Stall the instruction

Advantage:
- Simple: 1 bit per register

Disadvantage:
- Need to stall for all types of dependences, not only flow dependences
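The Valid-bit mechanism described above can be sketched in a few lines (hypothetical class and method names; a model of the bits only, not of the pipeline timing):

```python
class Scoreboard:
    def __init__(self, n_regs=32):
        self.valid = [True] * n_regs   # 1 bit per register

    def can_issue(self, srcs, dest):
        # Decode checks that all source AND destination registers are Valid.
        return all(self.valid[r] for r in srcs + [dest])

    def issue(self, dest):
        self.valid[dest] = False       # writer now in flight: reset Valid

    def writeback(self, dest):
        self.valid[dest] = True        # value has reached the RF

sb = Scoreboard()
sb.issue(3)                        # an instruction writing r3 is in flight
print(sb.can_issue([3, 4], 5))     # False: stall, r3 not valid yet (flow dep.)
print(sb.can_issue([1, 2], 3))     # False: stall even for an output dep. on r3
sb.writeback(3)
print(sb.can_issue([3, 4], 5))     # True
```

Note how the destination check makes the scoreboard stall on output (and, indirectly, anti) dependences as well, which is exactly the disadvantage listed above.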
Not Stalling on Anti and Output Dependences
What changes would you make to the scoreboard to enable this?
Approaches to Dependence Detection (II)
Combinational dependence check logic
- Special logic that checks if any instruction in a later stage is supposed to write to any source register of the instruction that is being decoded
  - Yes: stall the instruction/pipeline
  - No: no need to stall... no flow dependence

Advantage:
- No need to stall on anti and output dependences

Disadvantage:
- Logic is more complex than a scoreboard
- Logic becomes more complex as we make the pipeline deeper and wider (flash-forward: think superscalar execution)
Once You Detect the Dependence in Hardware
What do you do afterwards?

Observation: the dependence between two instructions is detected before the communicated data value becomes available

Option 1: Stall the dependent instruction right away
Option 2: Stall the dependent instruction only when necessary ⇒ data forwarding/bypassing
Option 3: ...
Data Forwarding/Bypassing
Problem: A consumer (dependent) instruction has to wait in the decode stage until the producer instruction writes its value in the register file

Goal: We do not want to stall the pipeline unnecessarily

Observation: The data value needed by the consumer instruction can be supplied directly from a later stage in the pipeline (instead of only from the register file)

Idea: Add additional dependence check logic and data forwarding paths (buses) to supply the producer's value to the consumer right after the value is available

Benefit: The consumer can move through the pipeline until the point the value can be supplied ⇒ less stalling
A Special Case of Data Dependence
Control dependence
Data dependence on the Instruction Pointer / Program Counter
Control Dependence
Question: What should the fetch PC be in the next cycle?
Answer: The address of the next instruction
- All instructions are control dependent on previous ones. Why?

If the fetched instruction is a non-control-flow instruction:
- Next Fetch PC is the address of the next-sequential instruction
- Easy to determine if we know the size of the fetched instruction

If the instruction that is fetched is a control-flow instruction:
- How do we determine the next Fetch PC?
- In fact, how do we know whether or not the fetched instruction is a control-flow instruction? [One option: a pre-decoded I-cache]
Data and Control Dependence Handling
Readings for Next Few Lectures
- P&H Chapter 4.9-4.11
- Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995
  - More advanced pipelining
  - Interrupt and exception handling
  - Out-of-order and superscalar execution concepts
- McFarling, "Combining Branch Predictors," DEC WRL Technical Report, 1993
- Kessler, "The Alpha 21264 Microprocessor," IEEE Micro, 1999
Data Dependence Handling: More Depth & Implementation
Remember: Data Dependence Types

Flow dependence (Read-after-Write, RAW):
  r3 ← r1 op r2
  r5 ← r3 op r4

Anti dependence (Write-after-Read, WAR):
  r3 ← r1 op r2
  r1 ← r4 op r5

Output dependence (Write-after-Write, WAW):
  r3 ← r1 op r2
  r5 ← r3 op r4
  r3 ← r6 op r7
How to Handle Data Dependences
Anti and output dependences are easier to handle
- Write to the destination in only one stage and in program order

Flow dependences are more interesting

Five fundamental ways of handling flow dependences:
1. Detect and wait until the value is available in the register file
2. Detect and forward/bypass the data to the dependent instruction
3. Detect and eliminate the dependence at the software level
   - No need for the hardware to detect the dependence
4. Predict the needed value(s), execute "speculatively", and verify
5. Do something else (fine-grained multithreading)
   - No need to detect the dependence
RAW Dependence Handling
The following flow dependences lead to conflicts in the 5-stage pipeline:

  addi ra, r-, -     IF  ID  EX  MEM  WB
  addi r-, ra, -         IF  ID  EX   MEM  WB
  addi r-, ra, -             IF  ID   EX   MEM
  addi r-, ra, -                 IF   ID   EX
  addi r-, ra, -                      IF   ?ID
  addi r-, ra, -                           IF

Readers that reach ID while the writer is still in EX, MEM or WB would get a stale value; whether a read in the same cycle as the write works (the "?ID" case) depends on whether the register file forwards internally.
Register Data Dependence Analysis

            IF    ID        EX    MEM    WB
  R/I-Type        read RF                write RF
  LW              read RF                write RF
  SW              read RF
  Br              read RF
  J
  Jr              read RF

For a given pipeline, when is there a potential conflict between 2 data dependent instructions?
- dependence type: RAW, WAR, WAW?
- instruction types involved?
- distance between the two instructions?
Safe and Unsafe Movement of Pipeline

Suppose instruction i is followed by instruction j, and j accesses rk in stage X while i accesses rk in stage Y:

  RAW dependence:  i: rk ← _  (Reg Write in stage Y)
                   j: _ ← rk  (Reg Read in stage X)

  WAR dependence:  i: _ ← rk  (Reg Read in stage Y)
                   j: rk ← _  (Reg Write in stage X)

  WAW dependence:  i: rk ← _  (Reg Write in stage Y)
                   j: rk ← _  (Reg Write in stage X)

  dist(i,j) ≤ dist(X,Y)  ⇒  Unsafe to keep j moving
  dist(i,j) > dist(X,Y)  ⇒  Safe
RAW Dependence Analysis Example

            IF    ID        EX    MEM    WB
  R/I-Type        read RF                write RF
  LW              read RF                write RF
  SW              read RF
  Br              read RF
  J
  Jr              read RF

Instructions IA and IB (where IA comes before IB) have a RAW dependence iff
- IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
- dist(IA, IB) ≤ dist(ID, WB) = 3

What about WAW and WAR dependences? What about memory data dependences?
Pipeline Stall: Resolving Data Dependence

             t0   t1   t2   t3   t4   t5   t6   t7   t8
  i: rx ← _  IF   ID   EX   MEM  WB
  j: _ ← rx       IF   ID*  ID*  ID*  ID   EX   MEM  WB

  (ID* = stall bubble; dist(i,j)=1 ⇒ 3 bubbles, dist=2 ⇒ 2, dist=3 ⇒ 1, dist=4 ⇒ none)

Stall == make the dependent instruction wait until its source data value is available
1. stop all up-stream stages
2. drain all down-stream stages
How to Implement Stalling

[Figure: pipelined datapath with a Stall signal that gates the write enables of the PC and the IF/ID register]

- Disable PC and IR latching; ensure the stalled instruction stays in its stage
- Insert "invalid" instructions/nops into the stage following the stalled one
  - Valid bit in the pipeline register gated with the subsequent stages (all logic that updates the state), or the control logic issues a nop instruction

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Stall Conditions
Instructions IA and IB (where IA comes before IB) have a RAW dependence iff
- IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
- dist(IA, IB) ≤ dist(ID, WB) = 3

In other words, we must stall when IB in the ID stage wants to read a register to be written by IA in the EX, MEM or WB stage.
Stall Conditions
Helper functions:
- rs(I) returns the rs field of I
- use_rs(I) returns true if I requires RF[rs] and rs != r0
- (rt and use_rt are analogous)

Stall when:
     (rs(IR_ID)==dest_EX)  && use_rs(IR_ID) && RegWrite_EX
  or (rs(IR_ID)==dest_MEM) && use_rs(IR_ID) && RegWrite_MEM
  or (rs(IR_ID)==dest_WB)  && use_rs(IR_ID) && RegWrite_WB
  or (rt(IR_ID)==dest_EX)  && use_rt(IR_ID) && RegWrite_EX
  or (rt(IR_ID)==dest_MEM) && use_rt(IR_ID) && RegWrite_MEM
  or (rt(IR_ID)==dest_WB)  && use_rt(IR_ID) && RegWrite_WB

It is crucial that the EX, MEM and WB stages continue to advance normally during stall cycles.
Impact of Stall on Performance
Each stall cycle corresponds to one lost cycle in which no instruction can be completed

For a program with N instructions and S stall cycles:
  Average CPI = (N + S) / N

S depends on:
- frequency of RAW dependences
- exact distance between the dependent instructions
- distance between dependences
  - Suppose i1, i2 and i3 all depend on i0. Once i1's dependence is resolved, i2 and i3 must be okay too.
Sample Assembly (P&H)
for (j=i-1; j>=0 && v[j] > v[j+1]; j-=1) { ...... }

          addi $s1, $s0, -1
for2tst:  slti $t0, $s1, 0          # 3 stalls ($s1 from addi)
          bne  $t0, $zero, exit2    # 3 stalls ($t0 from slti)
          sll  $t1, $s1, 2
          add  $t2, $a0, $t1        # 3 stalls ($t1 from sll)
          lw   $t3, 0($t2)          # 3 stalls ($t2 from add)
          lw   $t4, 4($t2)
          slt  $t0, $t4, $t3        # 3 stalls ($t4 from lw)
          beq  $t0, $zero, exit2    # 3 stalls ($t0 from slt)
          .........
          addi $s1, $s1, -1
          j    for2tst
exit2:
Data Forwarding (or Data Bypassing)
It is intuitive to think of the RF as state
- "add rx ry rz" literally means: get values from RF[ry] and RF[rz] respectively and put the result in RF[rx]

But the RF is just part of a communication abstraction
- "add rx ry rz" means: 1. get the results of the last instructions to define the values of RF[ry] and RF[rz], respectively, and 2. until another instruction redefines RF[rx], younger instructions that refer to RF[rx] should use this instruction's result

What matters is to maintain the correct "dataflow" between operations, thus:

  add  ra, r-, r-    IF  ID  EX  MEM  WB
  addi r-, ra, r-        IF  ID  EX   MEM  WB
                                 ↑ ra supplied from add's EX output, not the RF
Resolving RAW Dependence with Forwarding
Instructions IA and IB (where IA comes before IB) have a RAW dependence iff
- IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
- dist(IA, IB) ≤ dist(ID, WB) = 3

In other words, if IB in the ID stage reads a register written by IA in the EX, MEM or WB stage, then the operand required by IB is not yet in the RF
⇒ retrieve the operand from the datapath instead of the RF
⇒ retrieve the operand from the youngest definition if multiple definitions are outstanding
Data Forwarding Paths (v1)

[Figure: EX-stage ALU input muxes (ForwardA, ForwardB) select among the ID/EX register values, the EX/MEM result (dist(i,j)=1), and the MEM/WB result (dist(i,j)=2); a separate path handles dist(i,j)=3 via forwarding inside the register file ("internal forward?"); a forwarding unit compares Rs/Rt against EX/MEM.RegisterRd and MEM/WB.RegisterRd]

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Data Forwarding Paths (v2)

[Figure: same as v1, but without the explicit dist(i,j)=3 path; assumes the RF forwards internally]

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Data Forwarding Logic (for v2)

  if (rs_EX != 0) && (rs_EX == dest_MEM) && RegWrite_MEM then
      forward operand from MEM stage          // dist = 1
  else if (rs_EX != 0) && (rs_EX == dest_WB) && RegWrite_WB then
      forward operand from WB stage           // dist = 2
  else
      use A_EX (operand from register file)   // dist >= 3

Ordering matters!! Must check the youngest match first.

Why doesn't use_rs( ) appear in the forwarding logic?
What does the above not take into account?
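The priority encoding in this logic can be exercised with a small sketch (hypothetical function name; it returns which mux input the ForwardA control would select):

```python
def forward_a(rs_ex, dest_mem, regwrite_mem, dest_wb, regwrite_wb):
    # Youngest definition wins, so the MEM-stage match is checked first.
    if rs_ex != 0 and rs_ex == dest_mem and regwrite_mem:
        return "MEM"    # dist = 1: take the EX/MEM ALU result
    if rs_ex != 0 and rs_ex == dest_wb and regwrite_wb:
        return "WB"     # dist = 2: take the MEM/WB value
    return "RF"         # dist >= 3: register file value is already correct

# Both MEM and WB are about to write r5: the younger (MEM) definition wins.
print(forward_a(5, 5, True, 5, True))    # 'MEM'
print(forward_a(5, 7, True, 5, True))    # 'WB'
print(forward_a(0, 0, True, 0, True))    # 'RF' (r0 is never forwarded)
```

Swapping the two checks would forward a stale value whenever two in-flight instructions write the same register, which is why the ordering matters.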
Data Forwarding (Dependence Analysis)

            IF    ID     EX           MEM       WB
  R/I-Type         use   produce
  LW               use                produce
  SW               use   (use)
  Br               use
  J
  Jr         use

Even with data forwarding, a RAW dependence on an immediately preceding LW instruction requires a stall.
Sample Assembly, No Forwarding (P&H)
for (j=i-1; j>=0 && v[j] > v[j+1]; j-=1) { ...... }

          addi $s1, $s0, -1
for2tst:  slti $t0, $s1, 0          # 3 stalls ($s1 from addi)
          bne  $t0, $zero, exit2    # 3 stalls ($t0 from slti)
          sll  $t1, $s1, 2
          add  $t2, $a0, $t1        # 3 stalls ($t1 from sll)
          lw   $t3, 0($t2)          # 3 stalls ($t2 from add)
          lw   $t4, 4($t2)
          slt  $t0, $t4, $t3        # 3 stalls ($t4 from lw)
          beq  $t0, $zero, exit2    # 3 stalls ($t0 from slt)
          .........
          addi $s1, $s1, -1
          j    for2tst
exit2:
Sample Assembly, Revisited (P&H)
for (j=i-1; j>=0 && v[j] > v[j+1]; j-=1) { ...... }

          addi $s1, $s0, -1
for2tst:  slti $t0, $s1, 0
          bne  $t0, $zero, exit2
          sll  $t1, $s1, 2
          add  $t2, $a0, $t1
          lw   $t3, 0($t2)
          lw   $t4, 4($t2)
          nop                       # load-use: slt needs $t4 one cycle later
          slt  $t0, $t4, $t3
          beq  $t0, $zero, exit2
          .........
          addi $s1, $s1, -1
          j    for2tst
exit2:
Pipelining the LC-3b
Pipelining the LC-3b
Let's remember the single-bus datapath

We'll divide it into 5 stages:
- Fetch
- Decode/RF Access
- Address Generation/Execute
- Memory
- Store Result

Conservative handling of data and control dependences:
- Stall on branch
- Stall on flow dependence
An Example LC-3b Pipeline
Control of the LC-3b Pipeline
Three types of control signals:

- Datapath Control Signals
  - Control signals that control the operation of the datapath
- Control Store Signals
  - Control signals (microinstructions) stored in the control store to be used in the pipelined datapath (can be propagated to stages later than decode)
- Stall Signals
  - Ensure the pipeline operates correctly in the presence of dependences
Control Store in a Pipelined Machine
Stall Signals
Pipeline stall: the pipeline does not move because an operation in a stage cannot complete

Stall signals: ensure the pipeline operates correctly in the presence of such an operation

Why could an operation in a stage not complete?
Pipelined LC-3b
http://www.ece.cmu.edu/~ece447/s14/lib/exe/fetch.php?media=18447-lc3b-pipelining.pdf
End of Pipelining the LC-3b
Questions to Ponder
What is the role of the hardware vs. the software in data dependence handling?
- Software based interlocking
- Hardware based interlocking
- Who inserts/manages the pipeline bubbles?
- Who finds the independent instructions to fill "empty" pipeline slots?
- What are the advantages/disadvantages of each?
Questions to Ponder
What is the role of the hardware vs. the software in the order in which instructions are executed in the pipeline?
- Software based instruction scheduling ⇒ static scheduling
- Hardware based instruction scheduling ⇒ dynamic scheduling
More on Software vs. Hardware
Software based scheduling of instructions static scheduling
Compiler orders the instructions, hardware executes them in that order Contrast this with dynamic scheduling (in which hardware will execute instructions out of the compiler-specified order) How does the compiler know the latency of each instruction?
What information does the compiler not know that makes static scheduling difficult?
Answer: Anything that is determined at run time
Variable-length operation latency, memory address, branch direction
How can the compiler alleviate this (i.e., estimate the unknown)?
Answer: Profiling
87
Control Dependence Handling
88
Review: Control Dependence
Question: What should the fetch PC be in the next cycle?
Answer: The address of the next instruction
If the fetched instruction is a non-control-flow instruction:
Next fetch PC is the address of the next sequential instruction
Easy to determine if we know the size of the fetched instruction
If the instruction that is fetched is a control-flow instruction:
All instructions are control dependent on previous ones. Why?
How do we determine the next Fetch PC?
In fact, how do we even know whether or not the fetched instruction is a control-flow instruction?
89
Branch Types

Type           Direction at    Number of possible     When is next fetch
               fetch time      next fetch addresses?  address resolved?
Conditional    Unknown         2                      Execution (register dependent)
Unconditional  Always taken    1                      Decode (PC + offset)
Call           Always taken    1                      Decode (PC + offset)
Return         Always taken    Many                   Execution (register dependent)
Indirect       Always taken    Many                   Execution (register dependent)
Different branch types can be handled differently
90
How to Handle Control Dependences
Critical to keep the pipeline full with the correct sequence of dynamic instructions.
Potential solutions if the instruction is a control-flow instruction:
Stall the pipeline until we know the next fetch address
Guess the next fetch address (branch prediction)
Employ delayed branching (branch delay slot)
Do something else (fine-grained multithreading)
Eliminate control-flow instructions (predicated execution)
Fetch from both possible paths (if you know the addresses of both possible paths) (multipath execution)
91
Stall Fetch Until Next PC is Available: Good Idea?
        t0   t1   t2   t3   t4   t5
Insth   IF   ID   ALU  MEM  WB
Insti        IF   IF   ID   ALU  MEM
Instj                  IF   IF   ID
Instk                            IF   ...
Instl                                 ...

This is the case with non-control-flow and unconditional br instructions!
92
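The cost of stalling fetch until the next PC is known can be quantified with a simple model, assuming a 5-stage pipeline and one bubble cycle per fetch (both are illustrative parameters, not fixed LC-3b properties):

```python
def total_cycles(n, depth=5, bubble=1):
    """Cycles to finish n instructions when each new fetch waits
    `bubble` extra cycle(s) for the previous instruction's next PC.
    depth = pipeline depth (assumed 5 stages here)."""
    return depth + (n - 1) * (1 + bubble)

print(total_cycles(4))            # 11 cycles with one bubble per fetch
print(total_cycles(4, bubble=0))  # 8 cycles fully pipelined
```

Even a single bubble per instruction pushes steady-state CPI toward 2, which is why stalling fetch on every instruction is not a good idea.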
Doing Better than Stalling Fetch …
Rather than waiting for the true dependence on PC to resolve, just guess nextPC = PC + 4 to keep fetching every cycle
Is this a good guess? What do you lose if you guessed incorrectly?
~20% of the instruction mix is control flow
~50% of “forward” control flow (i.e., if-then-else) is taken
~90% of “backward” control flow (i.e., loop back) is taken
Overall, typically ~70% taken and ~30% not taken [Lee and Smith, 1984]
Expect “nextPC = PC + 4” to be correct ~86% of the time, but what about the remaining 14%?
93
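The arithmetic behind the ~86% figure follows directly from the quoted instruction mix:

```python
# Rough check of the quoted numbers [Lee and Smith, 1984].
control_flow_frac = 0.20   # fraction of dynamic instructions that are branches
taken_frac = 0.70          # fraction of those branches that are taken

# "nextPC = PC + 4" is wrong exactly when a taken branch is fetched:
wrong = control_flow_frac * taken_frac
print(f"PC+4 correct {1 - wrong:.0%} of the time")  # -> 86%
```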
Guessing NextPC = PC + 4
Always predict that the next sequential instruction is the next instruction to be executed
This is a form of next fetch address prediction and branch prediction
How can you make this more effective?
Idea: Maximize the chances that the next sequential instruction is the next instruction to be executed
Software: Lay out the control flow graph such that the “likely next instruction” is on the not-taken path of a branch
Hardware: ??? (how can you do this in hardware…)
94
Guessing NextPC = PC + 4
How else can you make this more effective?
Idea: Get rid of control flow instructions (or minimize their occurrence)
How?
1. Get rid of unnecessary control flow instructions → combine predicates (predicate combining)
2. Convert control dependences into data dependences → predicated execution
95
Predicate Combining (not Predicated Execution)
Complex predicates are converted into multiple branches
if ((a == b) && (c < d) && (a > 5000)) { … } → 3 conditional branches
Problem: This increases the number of control dependences
Idea: Combine predicate operations to feed a single branch instruction instead of having one branch for each
Predicates stored and operated on using condition registers
A single branch checks the value of the combined predicate
+ Fewer branches in code → fewer mispredictions/stalls
-- Possibly unnecessary work
-- If the first predicate is false, no need to compute the other predicates
Condition registers exist in the IBM RS/6000 and the POWER architecture
96
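The idea can be sketched in a high-level language: each comparison writes a predicate (a condition-register value), the predicates are combined with a non-short-circuiting AND, and a single branch tests the combined result. The function and the constants are illustrative:

```python
def combined_predicate(a, b, c, d):
    # Each comparison writes a predicate value; no branches yet.
    p1 = a == b
    p2 = c < d        # computed even when p1 is False: possibly useless work
    p3 = a > 5000
    p = p1 & p2 & p3  # predicate combining: one combined condition value
    if p:             # a single conditional branch instead of three
        return "then-block"
    return "else-block"

print(combined_predicate(6000, 6000, 1, 2))  # then-block
print(combined_predicate(1, 2, 1, 2))        # else-block
```

Note the use of `&` rather than `and`: the bitwise form evaluates all three predicates unconditionally, which mirrors the hardware behavior (and its "possibly unnecessary work" downside).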
Predicated Execution
Idea: Convert control dependence to data dependence
Suppose we had a Conditional Move instruction…
CMOV condition, R1 ← R2
R1 = (condition == true) ? R2 : R1
Employed in most modern ISAs (x86, Alpha)
Code example with branches vs. CMOVs:
if (a == 5) {b = 4;} else {b = 3;}
CMPEQ condition, a, 5;
CMOV condition, b ← 4;
CMOV !condition, b ← 3;
97
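The same example, written as executable pseudocode for the CMOV semantics (the helper function and the initial value of b are illustrative, not part of any ISA):

```python
def cmov(condition, r1, r2):
    """CMOV condition, R1 <- R2 : R1 gets R2 only if condition holds."""
    return r2 if condition else r1

def branchless(a):
    b = 0                          # arbitrary initial value of b
    condition = (a == 5)           # CMPEQ condition, a, 5
    b = cmov(condition, b, 4)      # CMOV  condition, b <- 4
    b = cmov(not condition, b, 3)  # CMOV !condition, b <- 3
    return b

print(branchless(5), branchless(7))  # 4 3
```

Both CMOVs are always executed; which one actually updates b is a pure data question, so the control dependence on `a == 5` has become a data dependence.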
Conditional Execution in ARM
Same as predicated execution
Every instruction is conditionally executed
98
Predicated Execution
Eliminates branches → enables straight-line code (i.e., larger basic blocks in code)
Advantages
+ Always-not-taken prediction works better (no branches)
+ Compiler has more freedom to optimize code (no branches)
  control flow does not hinder instruction reordering optimizations
  code optimizations hindered only by data dependencies
Disadvantages
-- Useless work: some instructions fetched/executed but discarded (especially bad for easy-to-predict branches)
-- Requires additional ISA support
Can we eliminate all branches this way?
99
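A back-of-envelope model of the branch-vs-predication trade-off (all numbers below are illustrative assumptions, not measurements):

```python
# Illustrative parameters, not measured values.
mispred_rate = 0.10      # branch misprediction rate
mispred_penalty = 20     # pipeline flush cost in cycles
then_len, else_len = 8, 8  # instructions on each path of the hammock

# Branch: execute one path, pay the flush cost on a misprediction.
branch_cost = then_len + mispred_rate * mispred_penalty

# Predication: fetch/execute both paths, but no mispredictions possible.
pred_cost = then_len + else_len

print(branch_cost, pred_cost)  # 10.0 16 -> the branch wins here
```

Raising mispred_rate to 0.40 makes branch_cost equal 16, the break-even point for these path lengths, which is why predication pays off mainly for hard-to-predict branches with short paths.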
Predicated Execution
We will get back to this…
Some readings (optional):
Allen et al., “Conversion of control dependence to data dependence,” POPL 1983.
Kim et al., “Wish Branches: Combining Conditional Branching and Predication for Adaptive Predicated Execution,” MICRO 2005.
100