
Digital Design & Computer Arch.

Lecture 4: Sequential Logic Design

Prof. Onur Mutlu

ETH Zürich
Spring 2023
3 March 2023
First, We Will Complete
Combinational Logic

2
We Covered Combinational Logic Blocks
◼ Basic logic gates (AND, OR, NOT, NAND, NOR, XOR)
◼ Decoder
◼ Multiplexer
◼ Full Adder
◼ Programmable Logic Array (PLA)
◼ Comparator
◼ Arithmetic Logic Unit (ALU)
◼ Tri-State Buffer

◼ Standard form representations: SOP & POS


◼ Logical completeness
◼ Logic simplification via Boolean Algebra
3
Recall: Implementing a Full Adder Using a PLA
[Figure: PLA with inputs A, B, C and outputs X, Y, Z, programmed to implement a full adder (inputs ai, bi, ci; outputs ci+1, si). One connection point is marked “this input should not be connected to any outputs” and one PLA output is marked “we do not need this output.”]

Truth table of a full adder

ai bi carryi | carryi+1 si
 0  0   0   |    0      0
 0  0   1   |    0      1
 0  1   0   |    0      1
 0  1   1   |    1      0
 1  0   0   |    0      1
 1  0   1   |    1      0
 1  1   0   |    1      0
 1  1   1   |    1      1

4
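As a small added sketch (not part of the original slides), the two full-adder outputs can be computed directly from the Boolean view of this truth table; the snippet below simply reprints the same table.

#include <stdio.h>

// Sketch: reproduce the full-adder truth table above from its Boolean equations
int main(void) {
    printf("ai bi ci | ci+1 si\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++) {
                int s    = a ^ b ^ c;                    // 3-input XOR
                int cout = (a & b) | (a & c) | (b & c);  // 3-input majority
                printf("%d  %d  %d |  %d    %d\n", a, b, c, cout, s);
            }
    return 0;
}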
Logical Completeness
Logical (Functional) Completeness
◼ Any logic function we wish to implement could be
accomplished with a PLA
❑ PLA consists of only AND gates, OR gates, and inverters
❑ We just have to program connections based on SOP of the
intended logic function

◼ The set of gates {AND, OR, NOT} is logically complete


because we can build a circuit to carry out the specification
of any truth table we wish, without using any other kind of
gate

◼ NAND is also logically complete. So is NOR.


❑ Your task: Prove this.

6
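One hedged sketch of the “prove this” task for NAND (added here as an illustration): express NOT, AND, and OR using only 2-input NAND and check all input combinations exhaustively.

#include <assert.h>

static int nand(int a, int b) { return !(a && b); }
static int not_(int a)        { return nand(a, a); }                  // NOT from NAND
static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); } // AND from NAND
static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); } // OR from NAND

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            assert(not_(a)    == !a);
            assert(and_(a, b) == (a && b));
            assert(or_(a, b)  == (a || b));
        }
    return 0;   // all checks pass: {NAND} can realize any truth table
}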
More Combinational Blocks

7
More Combinational Building Blocks
◼ H&H Chapter 2 in full
❑ Required Reading
❑ E.g., see Tri-state Buffer and Z values in Section 2.6

◼ H&H Chapter 5
❑ Will be required reading soon.

◼ You will benefit greatly by reading the “combinational”


parts of Chapter 5 soon.
❑ Sections 5.1 and 5.2
❑ E.g., Adder, Subtractor, Comparator, Shifter/Rotator,
Multiplier, Divider

8
Comparator

9
Equality Checker (Compare if Equal)
◼ Checks if two N-bit input values are exactly the same
◼ Example: 4-bit Comparator
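An added illustrative sketch (function name and style are just for illustration): the gate-level view of the 4-bit equality checker XNORs each bit pair and ANDs the results. For example, equal4(0xA, 0xA) returns 1 and equal4(0xA, 0xB) returns 0.

// Sketch of a 4-bit equality checker: XNOR each bit pair, then AND the results
static int equal4(int a, int b) {
    int eq = 1;
    for (int i = 0; i < 4; i++) {
        int ai = (a >> i) & 1, bi = (b >> i) & 1;
        eq &= !(ai ^ bi);   // XNOR of bit i
    }
    return eq;              // 1 only if all four bit pairs match
}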
ALU (Arithmetic Logic Unit)

11
ALU (Arithmetic Logic Unit)
◼ Combines a variety of arithmetic and logical operations into
a single unit (that performs only one function at a time)
◼ Usually denoted with this symbol:
Example ALU (Arithmetic Logic Unit)

13
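As an added sketch (the actual control encoding of the example ALU in H&H may differ), an ALU performs exactly one of several operations at a time, selected by a control input:

#include <stdint.h>

// Illustrative 4-function ALU: a 2-bit control input selects the operation
uint32_t alu(uint32_t a, uint32_t b, unsigned ctrl) {
    switch (ctrl & 3) {
        case 0:  return a & b;   // AND
        case 1:  return a | b;   // OR
        case 2:  return a + b;   // ADD
        default: return a - b;   // SUB (2's complement)
    }
}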
More Combinational Building Blocks
◼ See H&H Chapter 5.2 for
❑ Subtractor (using 2’s Complement Representation)
❑ Shifter and Rotator
❑ Multiplier
❑ Divider
❑ …

14
More Combinational Building Blocks
◼ H&H Chapter 2 in full
❑ Required Reading
❑ E.g., see Tri-state Buffer and Z values in Section 2.6

◼ H&H Chapter 5
❑ Will be required reading soon.

◼ You will benefit greatly by reading the “combinational”


parts of Chapter 5 soon.
❑ Sections 5.1 and 5.2
❑ E.g., Adder, Subtractor, Comparator, Shifter/Rotator,
Multiplier, Divider

15
Tri-State Buffer

16
Tri-State Buffer
◼ A tri-state buffer enables gating of different signals onto a
wire

A tri-state buffer
acts like a switch

◼ Floating signal (Z): Signal that is not driven by any circuit


❑ Open circuit, floating wire

17
Example: Use of Tri-State Buffers
◼ Imagine a wire connecting the CPU and memory

❑ At any time, only the CPU or the memory can place a value on
the wire, but not both

❑ You can have two tri-state buffers: one driven by the CPU, the
other by the memory; ensure at most one is enabled at any time

18
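A toy software model of this arrangement (added illustration; the enum values and function name are made up): two tri-state buffers drive a shared bus, and the bus floats (Z) when neither is enabled.

typedef enum { LOW = 0, HIGH = 1, Z = 2, X = 3 } wire_t;  // Z = floating, X = contention

wire_t shared_bus(int gate_cpu, int cpu_val, int gate_mem, int mem_val) {
    if (gate_cpu && gate_mem) return X;          // both enabled: contention -- must be prevented
    if (gate_cpu) return cpu_val ? HIGH : LOW;   // CPU drives the bus
    if (gate_mem) return mem_val ? HIGH : LOW;   // memory drives the bus
    return Z;                                    // no driver: the wire floats
}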
Example Design with Tri-State Buffers

GateCPU

CPU

GateMem

Memory
Shared Bus

19
Another Example

20
Multiplexer Using Tri-State Buffers

21
Recall: A 4-to-1 Multiplexer

22
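For reference, an added behavioral sketch of the 4-to-1 multiplexer: a 2-bit select input chooses which of the four data inputs reaches the output.

int mux4(int d0, int d1, int d2, int d3, unsigned sel) {
    switch (sel & 3) {       // 2-bit select
        case 0:  return d0;
        case 1:  return d1;
        case 2:  return d2;
        default: return d3;
    }
}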
Digging Deeper: Tri-State Buffer in CMOS
◼ How do you implement Tri-State Buffers using transistors?

http://people.ee.duke.edu/~krish/teaching/Lectures/CMOScircuits_2011.pdf 23
We Covered Combinational Logic Blocks
◼ Basic logic gates (AND, OR, NOT, NAND, NOR, XOR)
◼ Decoder
◼ Multiplexer
◼ Full Adder
◼ Programmable Logic Array (PLA)
◼ Comparator
◼ Arithmetic Logic Unit (ALU)
◼ Tri-State Buffer

◼ Standard form representations: SOP & POS


◼ Logical completeness
◼ Logic simplification via Boolean Algebra
24
Logic Simplification using
Boolean Algebra Rules

25
Recall: Full Adder in SOP Form Logic

Full Adder
ai

bi
ci+1 ai bi carryi carryi+1 Si
ci
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
si 0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1

26
Goal: Simplified Full Adder

3-input XOR
3-input majority

How do we simplify Boolean logic?

How do we automate simplification?

27
Quick Recap on Logic Simplification
◼ The original Boolean expression (i.e., logic circuit) may not
be optimal

F = ~A(A + B) + (B + AA)(A + ~B)

◼ Can we reduce a given Boolean expression to an equivalent


expression with fewer terms?

F=A+B

◼ The goal of logic simplification:


❑ Reduce the number of gates/inputs
❑ Reduce implementation cost (and potentially latency & power)

A basis for what the automated design tools are doing today
28
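One added sanity-check sketch: enumerate both truth tables to confirm that the unsimplified expression above really is equivalent to A + B.

#include <assert.h>

int main(void) {
    for (int A = 0; A <= 1; A++)
        for (int B = 0; B <= 1; B++) {
            int F = (!A & (A | B)) | ((B | (A & A)) & (A | !B));  // F = ~A(A+B) + (B+AA)(A+~B)
            assert(F == (A | B));                                 // equivalent to A + B
        }
    return 0;
}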
Logic Simplification
◼ Systematic techniques for simplifications
❑ amenable to automation
Key Tool: The Uniting Theorem — F = AB' + AB

F = AB' + AB = A(B' + B) = A(1) = A

B's value changes within the rows where F==1 (“ON set”)
A's value does NOT change within the ON-set rows
If an input (B) can change without changing the output, that input
value is not needed
➙ B is eliminated, A remains

G = A'B' + AB' = (A' + A)B' = B'

B's value stays the same within the ON-set rows

A's value changes within the ON-set rows


➙ A is eliminated, B remains
29
Logic Simplification
◼ Systematic techniques for simplifications
❑ amenable to automation
Key Tool: The Uniting Theorem — F = AB' + AB

F = AB' + AB = A(B' + B) = A(1) = A

B's value changes within the rows where F==1 (“ON set”)
A's value does NOT change within the ON-set rows
If an input (B) can change without changing the output, that input
value is not needed
➙ B is eliminated, A remains

Essence of Simplification:
Find two-element subsets of the ON-set where only one variable
changes its value. This single varying variable can be eliminated!

G = A'B' + AB' = (A' + A)B' = B'

B's value stays the same within the ON-set rows

A's value changes within the ON-set rows

➙ A is eliminated, B remains
30
Logic Simplification Example: Priority Circuit
◼ Priority Circuit
❑ Inputs: “Requestors” with priority levels
❑ Outputs: “Grant” signal for each requestor
❑ Example 4-bit priority circuit
❑ Real life example: Imagine a bus requested by 4 processors

31
Simplified Priority Circuit
◼ Priority Circuit
❑ Inputs: “Requestors” with priority levels
❑ Outputs: “Grant” signal for each requestor
❑ Example 4-bit priority circuit

X (Don’t Care) means I don’t care what the value of this input is

32
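An added sketch of the 4-bit priority circuit, assuming (as in the H&H example) that requestor 3 has the highest priority; the grant output is one-hot, or zero when nobody requests.

#include <stdint.h>

uint8_t priority4(uint8_t req /* requestors, bits 3..0 */) {
    if (req & 0x8) return 0x8;  // requestor 3 wins
    if (req & 0x4) return 0x4;  // else requestor 2
    if (req & 0x2) return 0x2;  // else requestor 1
    if (req & 0x1) return 0x1;  // else requestor 0
    return 0x0;                 // no requests: no grant
}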
Logic Simplification:
Karnaugh Maps (K-Maps)

33
Karnaugh Maps are Fun…
◼ A pictorial way of minimizing circuits by visualizing
opportunities for simplification
◼ They are for you to study on your own…
❑ We may cover them later if time permits

◼ See backup slides


◼ Read H&H Section 2.7
◼ Watch videos of Lectures 5 and 6 from 2019 DDCA course:
❑ https://youtu.be/0ks0PeaOUjE?list=PL5Q2soXY2Zi8J58xLKBNF
QFHRO3GrXxA9&t=4570
❑ https://youtu.be/ozs18ARNG6s?list=PL5Q2soXY2Zi8J58xLKBN
FQFHRO3GrXxA9&t=220

34
We Are Done with Combinational Logic
◼ Building blocks of modern computers
❑ Transistors
❑ Logic gates

◼ Combinational circuits

◼ Boolean algebra

◼ Using Boolean algebra to represent combinational circuits

◼ Basic combinational logic blocks

◼ Simplifying combinational logic circuits


35
Agenda for Today and Next Week
◼ Today

❑ Start (and finish) Sequential Logic

◼ Next week

❑ Hardware Description Languages and Verilog


◼ Combinational Logic
◼ Sequential Logic

❑ Timing and Verification

36
Extra Credit Assignment 1: Talk Analysis
◼ Intelligent Architectures for Intelligent Machines
◼ Watch and analyze this short lecture (33 minutes)
❑ https://www.youtube.com/watch?v=WxHribseelw (Oct 2022)

◼ Assignment – for 1% extra credit


❑ Write a good 1-page summary (following our guidelines)
◼ What are your key takeaways?
◼ What did you learn?
◼ What did you like or dislike?
◼ Submit your summary to Moodle – deadline April 1
37
Extra Credit Assignment 2: Moore’s Law
◼ Paper review
◼ G.E. Moore. "Cramming more components onto integrated
circuits," Electronics magazine, 1965

◼ Optional Assignment – for 1% extra credit


❑ Write a 1-page review
❑ Upload PDF file to Moodle – Deadline: April 1

◼ I strongly recommend that you follow my guidelines for


(paper) review (see next slide)

38
Extra Credit Assignment 2: Moore’s Law
◼ Guidelines on how to review papers critically

❑ Guideline slides: pdf ppt


❑ Video: https://www.youtube.com/watch?v=tOL6FANAJ8c

❑ Example reviews on “Main Memory Scaling: Challenges and


Solution Directions” (link to the paper)
◼ Review 1
◼ Review 2

❑ Example review on “Staged memory scheduling: Achieving


high performance and scalability in heterogeneous
systems” (link to the paper)
◼ Review 1
39
A 5-Minute Video on Transistor Innovation

https://www.youtube.com/watch?v=Z7M8etXUEUU 40
A 5-Minute Video on Transistor Innovation

https://www.youtube.com/watch?v=Z7M8etXUEUU 41
Enabling Manufacturing Tech: EUV

https://www.youtube.com/watch?v=Jv40Viz-KTc 42
Assignment: Readings
◼ Combinational Logic
❑ P&P Chapter 3 until 3.3 + H&H Chapter 2
◼ Sequential Logic
❑ P&P Chapter 3.4 until end + H&H Chapter 3 in full
◼ Hardware Description Languages and Verilog
❑ H&H Chapter 4 in full
◼ Timing and Verification
❑ H&H Chapters 2.9 and 3.5 + (start Chapter 5)

◼ By the end of next week, make sure you are done with
❑ P&P Chapters 1-3 + H&H Chapters 1-4

43
Readings (for Next Week)
◼ Hardware Description Languages and Verilog
❑ H&H Chapter 4 in full

◼ Timing and Verification


❑ H&H Chapters 2.9 and 3.5 + (start Chapter 5)

◼ By tomorrow, make sure you are done with


❑ P&P Chapters 1-3 + H&H Chapters 1-4

44
Readings (for Next Next Week)
◼ Von Neumann Model, LC-3, and MIPS
❑ P&P, Chapters 4, 5
❑ H&H, Chapter 6
❑ P&P, Appendices A and C (ISA and microarchitecture of LC-3)
❑ H&H, Appendix B (MIPS instructions)

◼ Programming
❑ P&P, Chapter 6

◼ Recommended: Digital Building Blocks


❑ H&H, Chapter 5

45
Sequential Logic Circuits
and Design

46
What We Will Learn Today
◼ Circuits that can store information
❑ Cross-coupled inverter
❑ R-S Latch
❑ Gated D Latch
❑ D Flip-Flop
❑ Register

◼ Finite State Machines (FSM)


❑ State & Clock
❑ Asynchronous vs. Synchronous
❑ How to design FSMs

47
No Real Computer Can Work w/o Memory

Apple M1,
2021

Source: https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested 48
A Large Fraction of Modern Systems is Memory

A lot of
Storage DRAM SRAM DRAM Storage

Apple M1 Ultra System (2022)


https://www.gsmarena.com/apple_announces_m1_ultra_with_20core_cpu_and_64core_gpu-news-53481.php 49
A Large Fraction of Modern Systems is Memory

Processor chip Level 2 cache chip

Multi-chip module package

Intel Pentium Pro, 1995


By Moshen - http://en.wikipedia.org/wiki/Image:Pentiumpro_moshen.jpg, CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=2262471
50
A Large Fraction of Modern Systems is Memory
L2 Cache

51
https://download.intel.com/newsroom/kits/40thanniversary/gallery/images/Pentium_4_6xx-die.jpg
Intel Pentium 4, 2000
A Large Fraction of Modern Systems is Memory

Core Count:
8 cores/16 threads

L1 Caches:
32 KB per core

L2 Caches:
512 KB per core

L3 Cache:
32 MB shared

AMD Ryzen 5000, 2020


https://wccftech.com/amd-ryzen-5000-zen-3-vermeer-undressed-high-res-die-shots-close-ups-pictured-detailed/ 52
Adding Even More Memory in 3D (2021)
AMD increases the L3 size of their 8-core Zen 3
processors from 32 MB to 96 MB

Additional 64 MB L3 cache die stacked on top of the processor die
- Connected using Through Silicon Vias (TSVs)
- Total of 96 MB L3 cache

https://community.microcenter.com/discussion/5134/comparing-zen-3-to-zen-2

https://youtu.be/gqAYMx34euU 53
https://www.tech-critter.com/amd-keynote-computex-2021/
A Large Fraction of Modern Systems is Memory
IBM POWER10,
2020

Cores:
15-16 cores,
8 threads/core

L2 Caches:
2 MB per core

L3 Cache:
120 MB shared

https://www.it-techblog.de/ibm-power10-prozessor-mehr-speicher-mehr-tempo-mehr-sicherheit/09/2020/ 54
A Large Fraction of Modern Systems is Memory

Cores:
128 Streaming Multiprocessors

L1 Cache or
Scratchpad:
192KB per SM
Can be used as L1 Cache
and/or Scratchpad

L2 Cache:
40 MB shared

Nvidia Ampere, 2020


https://www.tomshardware.com/news/infrared-photographer-photos-nvidia-ga102-ampere-silicon 55
Cerebras’s Wafer Scale Engine-2 (2021)
◼ The largest ML accelerator chip

◼ 850,000 cores

◼ 40 GB of on-chip memory

◼ 20 PB/s memory bandwidth

Cerebras WSE-2 Largest GPU


2.6 Trillion transistors 54.2 Billion transistors
46,225 mm2 826 mm2
NVIDIA Ampere GA100
https://cerebras.net/product/#overview
56
Circuits that Can
Store Information

57
Introduction
◼ Combinational circuit output depends only on current input
◼ We want circuits that produce output depending on
current and past input values – circuits with memory
◼ How can we design a circuit that stores information?

[Figure: Sequential Circuit — a Combinational Circuit together with a Storage Element; the inputs and the stored values feed the Combinational Circuit, which produces the outputs.]

58
Capturing Data

59
Basic Element: Cross-Coupled Inverters

◼ Has two stable states: Q=1 or Q=0.


◼ Has a third possible “metastable” state with both outputs
oscillating between 0 and 1 (we will see this later)
◼ Not useful without a control mechanism for setting Q
Image source: Harris and Harris, Digital Design and Computer Architecture, 2nd Ed., p.110. 60
More Realistic Storage Elements
◼ Have a control mechanism for setting Q
❑ We will see the R-S latch soon
❑ Let’s look at an SRAM (static random access memory) cell first

bitline bitline
wordline

SRAM cell
◼ We will get back to SRAM (and DRAM) later
61
The Big Picture: Storage Elements
◼ Latches and Flip-Flops
❑ Very fast, parallel access
❑ Very expensive (one bit costs tens of transistors)

◼ Static RAM (SRAM)


❑ Relatively fast
❑ Expensive (one bit costs 6+ transistors)

◼ Dynamic RAM (DRAM)


❑ Slower, reading destroys content (refresh), needs special process
for manufacturing
❑ Cheap (one bit costs only one transistor plus one capacitor)

◼ Other storage technology (flash memory, hard disk, tape)


❑ Much slower, access takes a long time, non-volatile
❑ Very cheap
Basic Storage Element:
The R-S Latch

63
The R-S (Reset-Set) Latch
◼ Cross-coupled NAND gates
❑ Data is stored at Q (inverse at Q’)
❑ S and R are control inputs
◼ In quiescent (idle) state, both S and R are held at 1

◼ S (set): drive S to 0 (keeping R at 1) to change Q to 1


◼ R (reset): drive R to 0 (keeping S at 1) to change Q to 0
◼ S and R should never both be 0 at the same time

[Figure: R-S latch — two cross-coupled NAND gates with inputs S and R and outputs Q and Q’.]

Input   | Output
R   S   | Q
1   1   | Qprev
1   0   | 1
0   1   | 0
0   0   | Forbidden

64
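A rough behavioral sketch (added here) of the cross-coupled-NAND R-S latch: iterate the two NAND equations until the feedback loop settles. This simple model cannot capture metastability.

void rs_latch(int S, int R, int *Q, int *Qn) {
    for (int i = 0; i < 4; i++) {   // a few iterations are enough for the loop to settle
        *Q  = !(S && *Qn);          // Q  = NAND(S, Q')
        *Qn = !(R && *Q);           // Q' = NAND(R, Q)
    }
}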
Why not R=S=0?
[Figure: R-S latch with both S and R driven from 1 to 0, and then back to 1 at the same time.]

Input   | Output
R   S   | Q
1   1   | Qprev
1   0   | 1
0   1   | 0
0   0   | Forbidden
1. If R=S=0, Q and Q’ will both settle to 1, which breaks
our invariant that Q = !Q’
2. If S and R transition back to 1 at the same time, Q and Q’
begin to oscillate between 1 and 0 because their final
values depend on each other (metastability)
❑ This eventually settles depending on variation in the
circuits (more on this in the Timing Lecture)
65
The Gated D Latch

66
The Gated D Latch
◼ How do we guarantee correct operation of an R-S Latch?

S Q

Q’
R

67
The Gated D Latch
◼ How do we guarantee correct operation of an R-S Latch?
❑ Add two more NAND gates!

D S Q

Write
Enable

Q’
R

❑ Q takes the value of D, when write enable (WE) is set to 1


❑ S and R can never be 0 at the same time!

68
The Gated D Latch

[Figure: Gated D latch — D and Write Enable feed two NAND gates that drive the S and R inputs of an R-S latch, producing Q and Q’.]

Input    | Output
WE   D   | Q
0    0   | Qprev
0    1   | Qprev
1    0   | 0
1    1   | 1

69
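Continuing the added sketch, the gated D latch wraps the rs_latch model from before with the two extra NAND gates: WE=1 makes Q follow D, WE=0 holds the previous Q.

void gated_d_latch(int WE, int D, int *Q, int *Qn) {
    int S = !(D && WE);       // S = NAND(D, WE)
    int R = !(!D && WE);      // R = NAND(D', WE)
    rs_latch(S, R, Q, Qn);    // S and R can never be 0 at the same time
}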
The Register

70
The Register
How can we use D latches to store more data?
• Use more D latches!
• A single WE signal for all latches for simultaneous writes

[Figure: Four gated D latches with data inputs D3 D2 D1 D0, a shared Write Enable, and outputs Q3 Q2 Q1 Q0.]

Here we have a register, or a structure that stores more than one
bit and can be read from and written to

This register holds 4 bits, and its data is referenced as Q[3:0]
71
The Register
How can we use D latches to store more data?
• Use more D latches!
• A single WE signal for all latches for simultaneous writes

[Figure: The same structure drawn as a single block, “Register x (Rx)”, with a 4-bit input D3:0, a WE input, and a 4-bit output Q3:0.]

Here we have a register, or a structure that stores more than one
bit and can be read from and written to

This register holds 4 bits, and its data is referenced as Q[3:0]
72
Memory

73
Memory
◼ Memory is comprised of locations that can be written to or
read from. An example memory array with 4 locations:

Addr(00): 0100 1001 Addr(01): 0100 1011


Addr(10): 0010 0010 Addr(11): 1100 1001

◼ Every unique location in memory is indexed with a unique


address. 4 locations require 2 address bits
(log2[#locations]).
◼ Addressability: the number of bits of information stored
in each location. This example: addressability is 8 bits.
◼ The entire set of unique locations in memory is referred
to as the address space.
◼ Typical memory is MUCH larger (e.g., billions of locations)
74
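A tiny added sketch of this example memory: 4 locations of 8 bits each, indexed by a 2-bit address; the initial contents follow the slide.

#include <stdint.h>

static uint8_t mem[4] = { 0x49, 0x4B, 0x22, 0xC9 };  // Addr 00, 01, 10, 11 from the slide

uint8_t mem_read(unsigned addr)             { return mem[addr & 3]; }  // 2 address bits
void    mem_write(unsigned addr, uint8_t d) { mem[addr & 3] = d;    }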
Addressing Memory
Let’s implement a simple memory array with:
• 3-bit addressability & address space size of 2 (total of 6 bits)
1 Bit
D Q
WE

6-Bit Memory Array

Addr(0) Bit2 Bit1 Bit0

Addr(1) Bit2 Bit1 Bit0

75
Reading from Memory
How can we select an address to read?
• Because there are 2 addresses, address size is log2(2) = 1 bit

76
Reading from Memory
How can we select an address to read?
• Because there are 2 addresses, address size is log2(2) = 1 bit
Addr[0]

Wordline

D[2] D[1] D[0]

77
Reading from Memory
How can we select an address to read?
• Because there are 2 addresses, address size is log2(2) = 1 bit
Addr[0]

Wordline

Address Decoder

D[2] D[1] D[0]

78
Reading from Memory
How can we select an address to read?
• Because there are 2 addresses, address size is log2(2) = 1 bit
Addr[0]

Wordline

Address Decoder

D[2] D[1] D[0]

Multiplexer
(together w/ decoder)

79
Recall: Multiplexer (MUX), or Selector
◼ Selects one of the N inputs to connect it to the output
❑ based on the value of a log2N-bit control input called select

◼ Example: 2-to-1 MUX


A B A B

S S=0

a b A 0

A
C C
Writing to Memory
How can we select an address and write to it?

81
Writing to Memory
How can we select an address and write to it?
• Input is indicated with Di
Addr[0]
Di[2] Di[1] Di[0]
WE

82
Putting it all Together
Let’s enable reading from and writing to a memory array

Addr[0]
Di[2] Di[1] Di[0]
WE

D[2] D[1] D[0]

83
A Bigger Memory Array (4 locations X 3 bits)
Addr[1:0]
Di[2] Di[1] Di[0]
WE

D[2] D[1] D[0]


84
A Bigger Memory Array (4 locations X 3 bits)
Addr[1:0]
Di[2] Di[1] Di[0]
WE

Address Decoder

Multiplexer D[2] D[1] D[0]


(together w/ decoder) 85
Example: Reading Location 3

Image source: Patt and Patel, “Introduction to Computing Systems”, 3rd ed., page 78. 86
Recall: Decoder (II)
◼ The decoder is useful in determining how to interpret a bit
pattern

❑ It could be the A=1


address of a location 0
B=0
in memory, that the
processor intends to
read from 0

❑ It could be an
instruction in the 1
program and the
processor needs to
decide what action to 0
take (based on
instruction opcode)
87
Recall: A 4-to-1 Multiplexer

88
Aside: Implementing Logic Functions
Using Memory

89
Recall: A Bigger Memory Array (4 locations X 3 bits)
Addr[1:0]
Di[2] Di[1] Di[0]
WE

Address Decoder

Multiplexer D[2] D[1] D[0]


(together w/ decoder) 90
Memory-Based Lookup Table Example
◼ Memory arrays can also perform Boolean Logic functions
❑ 2^N-location M-bit memory can perform any N-input, M-output function
❑ Lookup Table (LUT): Memory array used to perform logic functions
❑ Each address: row in truth table; each data bit: corresponding output value

91
Lookup Tables (LUTs)
◼ LUTs are commonly used in FPGAs
❑ To enable programmable/reconfigurable logic functions
❑ To enable easy integration of combinational and sequential
logic

Read H&H Chapter 5.6.2 92


Recall: A Multiplexer-Based LUT
◼ Let’s implement a function that outputs ‘1’ when there are at least two ‘1’s in a
3-bit input
In C:

// Count the 1s in the 3-bit input; output 1 if there are at least two
int count = 0;
for (int i = 0; i < 3; i++) {
    count += input & 1;
    input = input >> 1;
}
if (count > 1) return 1;
return 0;

// Equivalent: enumerate the inputs that have fewer than two 1s
switch (input) {
    case 0:
    case 1:
    case 2:
    case 4:
        return 0;
    default:
        return 1;
}

In Hardware (e.g., FPGA): the 3-bit input drives the select lines of a
multiplexer whose data inputs come from an 8-bit configuration memory,
producing the 1-bit output:
000→0, 001→0, 010→0, 011→1, 100→0, 101→1, 110→1, 111→1

93
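An added sketch of the same idea as a lookup table in software: the 3-bit input indexes an 8-entry, 1-bit “configuration memory” whose contents match the truth table above.

static const int lut[8] = { 0, 0, 0, 1, 0, 1, 1, 1 };  // one bit per truth-table row

int at_least_two_ones(unsigned input) {
    return lut[input & 7];   // the input is the address; the stored bit is the output
}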
Sequential Logic Circuits

94
Sequential Logic Circuits
◼ We have examined designs of circuit elements that can
store information
◼ Now, we will use these elements to build circuits that
remember past inputs

Combinational lock: opening depends only on current inputs
Sequential lock: opening depends on past inputs as well
https://www.easykeys.com/228_ESP_Combination_Lock.aspx
https://www.fosmon.com/product/tsa-approved-lock-4-dial-combo 95
State
◼ In order for this lock to work, it has to keep track
(remember) of the past events!
◼ If passcode is R13-L22-R3, sequence of states to unlock:
A. The lock is not open (locked), and no relevant operations have
been performed
B. Locked but user has completed R13
C. Locked but user has completed R13-L22
D. Unlocked: user has completed R13-L22-R3

◼ The state of a system is a snapshot of all relevant


elements of the system at the moment of the snapshot
❑ To open the lock, states A-D must be completed in order
❑ If anything else happens (e.g., L5), lock returns to state A

96
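As an added illustration (state and function names are made up), the lock's behavior can be written as a next-state function over the four states A–D; any incorrect dial input returns the lock to state A.

#include <string.h>

typedef enum { A_LOCKED, B_R13_DONE, C_L22_DONE, D_UNLOCKED } lock_state_t;

lock_state_t lock_next_state(lock_state_t s, const char *dial /* e.g., "R13" */) {
    switch (s) {
        case A_LOCKED:   return strcmp(dial, "R13") == 0 ? B_R13_DONE : A_LOCKED;
        case B_R13_DONE: return strcmp(dial, "L22") == 0 ? C_L22_DONE : A_LOCKED;
        case C_L22_DONE: return strcmp(dial, "R3")  == 0 ? D_UNLOCKED : A_LOCKED;
        default:         return D_UNLOCKED;  // assumption: stays unlocked until re-locked
    }
}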
State Diagram of Our Sequential Lock
◼ Completely describes the operation of the sequential lock

◼ We will understand “state diagrams” fully later today


Image source: Patt and Patel, “Introduction to Computing Systems”, 2nd ed., page 76. 97
Asynchronous vs. Synchronous State Changes
◼ Sequential lock we saw is an asynchronous “machine”
❑ State transitions occur when they occur
❑ There is nothing that synchronizes when each state transition
must occur

◼ Most modern computers are synchronous “machines”


❑ State transitions take place after fixed units of time
❑ Controlled in part by a clock, as we will see soon

◼ These are two different design paradigms, with tradeoffs

98
Another Simple Example of State
◼ A standard Swiss traffic light has 4 states
A. Green
B. Yellow
C. Red
D. Red and Yellow

◼ The sequence of these states are always as follows

A B C D

99
Changing State: The Notion of Clock (I)

A B C D

◼ When should the light change from one state to another?


◼ We need a clock to dictate when to change state
❑ Clock signal alternates between 0 & 1
1
CLK:
0

100
Changing State: The Notion of Clock (I)

A B C D

◼ When should the light change from one state to another?


◼ We need a clock to dictate when to change state
❑ Clock signal alternates between 0 & 1
1
CLK:
0

◼ At the start of a clock cycle ( ), system state changes


❑ During a clock cycle, the state stays constant
❑ In this traffic light example, we are assuming the traffic light stays in
each state an equal amount of time
101
Changing State: The Notion of Clock (II)
◼ Clock is a general mechanism that triggers transition from
one state to another in a (synchronous) sequential circuit

◼ Clock synchronizes state changes across many sequential


circuit elements

◼ Combinational logic evaluates for the length of the clock


cycle

◼ Clock cycle should be chosen to accommodate maximum


combinational circuit delay
❑ More on this later, when we discuss timing

102
Asynchronous vs. Synchronous State Changes
◼ Sequential lock we saw is an asynchronous “machine”
❑ State transitions occur when they occur
❑ There is nothing that synchronizes when each state transition
must occur

◼ Most modern computers are synchronous “machines”


❑ State transitions take place after fixed units of time
❑ Controlled in part by a clock, as we will see soon

◼ These are two different design paradigms, with tradeoffs


❑ Synchronous control can be easier to get correct when the
system consists of many components and many states
❑ Asynchronous control can be more efficient (no clock overheads)

We will assume synchronous systems in most of this course 103


Finite State Machines

104
Finite State Machines
◼ What is a Finite State Machine (FSM)?
❑ A discrete-time model of a stateful system
❑ Each state represents a snapshot of the system at a given time

◼ An FSM pictorially shows


1. the set of all possible states that a system can be in
2. how the system transitions from one state to another

◼ An FSM can model


❑ A traffic light, an elevator, fan speed, a microprocessor, etc.

◼ An FSM enables us to pictorially think of a stateful


system using simple diagrams
105
Finite State Machines (FSMs) Consist of:
◼ Five elements:
1. A finite number of states
◼ State: snapshot of all relevant elements of the
system at the time of the snapshot
2. A finite number of external inputs
3. A finite number of external outputs
4. An explicit specification of all state transitions
◼ How to get from one state to another
5. An explicit specification of what determines
each external output value

106
Finite State Machines (FSMs)
◼ Each FSM consists of three separate parts:
❑ next state logic
❑ state register
❑ output logic

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

state register

At the beginning of the clock cycle, next state is latched into the state register

107
Finite State Machines (FSMs) Consist of:
◼ Sequential Circuits CLK

❑ State register(s)
S’
◼ Store the current state and S
Next Current
◼ Load the next state at the clock edge State State

◼ Combinational Circuits Next State


Logic
❑ Next state logic
◼ Determines what the next state will be CL Next
State

Output
❑ Output logic Logic
◼ Generates the outputs
CL Outputs

108
Finite State Machines (FSMs) Consist of:
◼ Sequential Circuits CLK

❑ State register(s)
S’
◼ Store the current state and S
Next Current
◼ Provide the next state at the clock edge State State

◼ Combinational Circuits Next State


Logic
❑ Next state logic
◼ Determines what the next state will be CL Next
State

Output
❑ Output logic Logic
◼ Generates the outputs
CL Outputs

109
State Register Implementation
◼ How can we implement a state register? Two properties:
1. We need to store data at the beginning of every clock cycle

2. The data must be available during the entire clock cycle


1
CLK:
0

Register
Input:

Register Desired behavior


Output:

110
The Problem with Latches
Recall the D Q
Gated D Latch
CLK = WE

◼ Currently, we cannot simply wire a clock to WE of a latch


❑ Whenever the clock is high, the latch propagates D to Q
❑ The latch is transparent

1
CLK:
0

Register
Input:

Register
Output:

111
The Problem with Latches
Recall the D Q
Gated D Latch
CLK = WE

◼ Currently, we cannot simply wire a clock to WE of a latch


❑ Whenever the clock is high, the latch propagates D to Q
❑ The latch is transparent

1
CLK:
0

Register
Input:

Register
Output:
Undesirable!
112
The Problem with Latches
Recall the D Q
Gated D Latch
CLK = WE

◼ Currently, we cannot simply wire a clock to WE of a latch


❑ Whenever the clock is high, the latch propagates D to Q
❑ The latch is transparent

How can we change the latch, so that

1) D (input) is observable at Q (output)
only at the beginning of the next clock cycle?

AND

2) Q is available for the full clock cycle

113
The Need for a New Storage Element
◼ To design viable FSMs

◼ We need storage elements that allow us to:

❑ read the current state throughout the entire current clock


cycle

AND

❑ not write the next state values into the storage elements
until the beginning of the next clock cycle

114
The D Flip-Flop
◼ 1) state change on clock edge, 2) data available for full cycle

D Latch (First)
D D Latch (Second)

CLK
1
CLK:
0
◼ When the clock is low, 1st latch propagates D to the input of the 2nd (Q unchanged)
◼ Only when the clock is high, 2nd latch latches D (Q stores D)
❑ At the rising edge of clock (clock going from 0->1), Q gets assigned D
115
The D Flip-Flop
◼ 1) state change on clock edge, 2) data available for full cycle

D CLK Q

__
Q
D Flip-Flop

1
CLK:
0

◼ At the rising edge of clock (clock going from 0->1), Q gets assigned D
◼ At all other times, Q is unchanged

116
The D Flip-Flop
◼ 1) state change on clock edge, 2) data available for full cycle

D CLK Q

__
Q
We can use D Flip-Flops
D Flip-Flop
to implement the state register
1
CLK:
0

◼ At the rising edge of clock (clock going from 0->1), Q gets assigned D
◼ At all other times, Q is unchanged

117
Rising-Clock-Edge Triggered Flip-Flop
◼ Two inputs: CLK, D

D Flip-Flop
◼ Function CLKSymbols
❑ The flip-flop “samples” D on the rising edge
of CLK (positive edge)
D Q
❑ When CLK rises from 0 to 1, D passes
Q
through to Q
❑ Otherwise, Q holds its previous value

❑ Q changes only on the rising edge of CLK

◼ A flip-flop is called an edge-triggered state element


because it captures data on the clock edge
❑ A latch is a level-triggered state element
118
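An added behavioral sketch of a positive-edge-triggered D flip-flop: Q samples D only when CLK rises from 0 to 1 and holds its value at all other times.

typedef struct { int q; int prev_clk; } dff_t;

void dff_tick(dff_t *ff, int clk, int d) {
    if (clk && !ff->prev_clk)   // rising edge of CLK detected
        ff->q = d;              // sample D into Q
    ff->prev_clk = clk;         // otherwise Q holds its previous value
}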
We Covered
Until This Point
in Lecture

119
Digital Design & Computer Arch.
Lecture 4: Sequential Logic Design

Prof. Onur Mutlu

ETH Zürich
Spring 2023
3 March 2023
Slides for Future Lectures

121
D Flip-Flop Based Register
◼ Multiple parallel D flip-flops, each of which storing 1 bit
CLK

D0 D Q Q0

CLK
D1 D Q Q1
4 4
D3:0 Q3:0
D2 D Q Q2

This line represents 4 wires


D3 D Q Q3
This register stores 4 bits

122
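Building on the flip-flop sketch above (an added illustration), a 4-bit register is simply four D flip-flops that share the same clock.

typedef struct { dff_t bit[4]; } reg4_t;

void reg4_tick(reg4_t *r, int clk, unsigned d) {
    for (int i = 0; i < 4; i++)
        dff_tick(&r->bit[i], clk, (d >> i) & 1);  // every bit samples on the same clock edge
}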
A 4-Bit D-Flip-Flop-Based Register (Internally)

Image source: Patt and Patel, “Introduction to Computing Systems”, 3rd ed., tentative page 95. 123
Finite State Machines (FSMs)
◼ Next state is determined by the current state and the inputs
◼ Two types of finite state machines differ in the output
logic:
❑ Moore FSM: outputs depend only on the current state

Moore FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

Mealy FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

124
Finite State Machines (FSMs)
◼ Next state is determined by the current state and the inputs
◼ Two types of finite state machines differ in the output
logic:
❑ Moore FSM: outputs depend only on the current state
❑ Mealy FSM: outputs depend on the current state and the
inputs Moore FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

Mealy FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

125
Finite State Machine Example
◼ “Smart” traffic light controller
❑ 2 inputs:
◼ Traffic sensors: TA , TB (TRUE when there’s traffic)
❑ 2 outputs:
◼ Lights: LA , LB (Red, Yellow, Green)

Bravado
❑ State can change every 5 seconds Dining
Hall
◼ Except if green and traffic, stay green
LB

LA TB
LA

Academic TA TA Ave.

Labs TB LB Dorms

Blvd.
From H&H Section 3.4.1 Fields
126
Finite State Machine Black Box
◼ Inputs: CLK, Reset, TA , TB
◼ Outputs: LA , LB
CLK

TA Traffic LA
Light
TB Controller LB

Reset

127
Finite State Machine Transition Diagram
◼ Moore FSM: outputs labeled in each state
❑ States: Circles
❑ Transitions: Arcs

Reset
Bravado

Dining S0
Hall LA: green
LB LB: red

LA TB
LA

Academic TA TA Ave.

Labs TB LB Dorms
Blvd.

Fields
128
Finite State Machine Transition Diagram
◼ Moore FSM: outputs labeled in each state
❑ States: Circles
❑ Transitions: Arcs

TA
Reset
Bravado

Dining S0 TA S1
Hall LA: green LA: yellow
LB LB: red LB: red

LA TB
LA

Academic TA TA Ave.

Labs TB LB Dorms S3 S2
LA: red LA: red
LB: yellow LB: green
Blvd.

TB
Fields TB

129
Finite State Machine Transition Diagram
◼ Moore FSM: outputs labeled in each state
❑ States: Circles
❑ Transitions: Arcs

TA
Reset
Bravado

Dining S0 TA S1
Hall LA: green LA: yellow
LB LB: red LB: red

LA TB
LA

Academic TA TA Ave.

Labs TB LB Dorms S3 S2
LA: red LA: red
LB: yellow LB: green
Blvd.

TB
Fields TB

130
Finite State Machine Transition Diagram
◼ Moore FSM: outputs labeled in each state
❑ States: Circles
❑ Transitions: Arcs

TA
Reset
Bravado

Dining S0 TA S1
Hall LA: green LA: yellow
LB LB: red LB: red

LA TB
LA

Academic TA TA Ave.

Labs TB LB Dorms S3 S2
LA: red LA: red
LB: yellow LB: green
Blvd.

TB
Fields TB

131
Finite State Machine Transition Diagram
◼ Moore FSM: outputs labeled in each state
❑ States: Circles
❑ Transitions: Arcs

TA
Reset
Bravado

Dining S0 TA S1
Hall LA: green LA: yellow
LB LB: red LB: red

LA TB
LA

Academic TA TA Ave.

Labs TB LB Dorms S3 S2
LA: red LA: red
LB: yellow LB: green
Blvd.

TB
Fields TB

132
Finite State Machine:
State Transition Table

133
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S TA TB S'
LA: green LA: yellow
LB: red LB: red S0 0 X
S0 1 X
S1 X X
S2 X 0
S3 S2 S2 X 1
LA: red LA: red
LB: yellow LB: green S3 X X
TB
TB
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S TA TB S'
LA: green LA: yellow
LB: red LB: red S0 0 X S1
S0 1 X S0
S1 X X S2
S2 X 0 S3
S3 S2 S2 X 1 S2
LA: red LA: red
LB: yellow LB: green S3 X X S0
TB
TB
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S TA TB S'
LA: green LA: yellow
LB: red LB: red S0 0 X S1
S0 1 X S0
S1 X X S2
S2 X 0 S3
S3 S2 S2 X 1 S2
LA: red LA: red
LB: yellow LB: green S3 X X S0
TB
TB State Encoding
S0 00
S1 01
S2 10
S3 11
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S1 S0 TA TB S’1 S’0
LA: green LA: yellow
LB: red LB: red 0 0 0 X 0 1
0 0 1 X 0 0
0 1 X X 1 0
1 0 X 0 1 1
S3 S2 1 0 X 1 1 0
LA: red LA: red
LB: yellow LB: green 1 1 X X 0 0
TB
TB State Encoding
S0 00
S1 01
S2 10
S3 11
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S1 S0 TA TB S’1 S’0
LA: green LA: yellow
LB: red LB: red 0 0 0 X 0 1
0 0 1 X 0 0
0 1 X X 1 0
1 0 X 0 1 1
S3 S2 1 0 X 1 1 0
LA: red LA: red
LB: yellow LB: green 1 1 X X 0 0
TB
TB State Encoding
S0 00
S1 01
S’1 = ?
S2 10
S3 11
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S1 S0 TA TB S’1 S’0
LA: green LA: yellow
LB: red LB: red 0 0 0 X 0 1
0 0 1 X 0 0
0 1 X X 1 0
1 0 X 0 1 1
S3 S2 1 0 X 1 1 0
LA: red LA: red
LB: yellow LB: green 1 1 X X 0 0
TB
TB State Encoding
S0 00
S1 01
S’1 = (~S1 ∙ S0) + (S1 ∙ ~S0 ∙ ~TB) + (S1 ∙ ~S0 ∙ TB)
S2 10
S3 11
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S1 S0 TA TB S’1 S’0
LA: green LA: yellow
LB: red LB: red 0 0 0 X 0 1
0 0 1 X 0 0
0 1 X X 1 0
1 0 X 0 1 1
S3 S2 1 0 X 1 1 0
LA: red LA: red
LB: yellow LB: green 1 1 X X 0 0
TB
TB State Encoding
S0 00
S1 01
S’1 = (~S1 ∙ S0) + (S1 ∙ ~S0 ∙ ~TB) + (S1 ∙ ~S0 ∙ TB)
S2 10
S’0 = ? S3 11
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S1 S0 TA TB S’1 S’0
LA: green LA: yellow
LB: red LB: red 0 0 0 X 0 1
0 0 1 X 0 0
0 1 X X 1 0
1 0 X 0 1 1
S3 S2 1 0 X 1 1 0
LA: red LA: red
LB: yellow LB: green 1 1 X X 0 0
TB
TB State Encoding
S0 00
S1 01
S’1 = (~S1 ∙ S0) + (S1 ∙ ~S0 ∙ ~TB) + (S1 ∙ ~S0 ∙ TB)
S2 10
S’0 = (~S1 ∙ ~S0 ∙ ~TA) + (S1 ∙ ~S0 ∙ ~TB) S3 11
FSM State Transition Table
Reset
TA Current State Inputs Next State
S0 TA S1 S1 S0 TA TB S’1 S’0
LA: green LA: yellow
LB: red LB: red 0 0 0 X 0 1
0 0 1 X 0 0
0 1 X X 1 0
1 0 X 0 1 1
S3 S2 1 0 X 1 1 0
LA: red LA: red
LB: yellow LB: green 1 1 X X 0 0
TB
TB State Encoding
S0 00
S1 01
S’1 = S1 xor S0 (Simplified)
S2 10
S’0 = (~S1 ∙ ~S0 ∙ ~TA) + (S1 ∙ ~S0 ∙ ~TB) S3 11
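An added sketch of the derived next-state logic in software form (S1, S0 are the current state bits; TA, TB are the traffic sensors):

void next_state(int S1, int S0, int TA, int TB, int *S1n, int *S0n) {
    *S1n = S1 ^ S0;                                     // S'1 = S1 xor S0
    *S0n = (!S1 && !S0 && !TA) || (S1 && !S0 && !TB);   // S'0 from the table above
}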
Finite State Machine:
Output Table

143
FSM Output Table
Reset
TA Current State Outputs
S0 TA S1 S1 S0 LA LB
LA: green LA: yellow
LB: red LB: red 0 0 green red
0 1 yellow red
1 0 red green
1 1 red yellow
S3 S2
LA: red LA: red
LB: yellow LB: green
TB
TB
FSM Output Table
Reset
TA Current State Outputs
S0 TA S1 S1 S0 LA LB
LA: green LA: yellow
LB: red LB: red 0 0 green red
0 1 yellow red
1 0 red green
1 1 red yellow
S3 S2
LA: red LA: red
LB: yellow LB: green
TB
TB Output Encoding
green 00
yellow 01
red 10
FSM Output Table
Reset
TA Current State Outputs
S0 TA S1 S1 S0 LA1 LA0 LB1 LB0
LA: green LA: yellow
LB: red LB: red 0 0 0 0 1 0
0 1 0 1 1 0
1 0 1 0 0 0
1 1 1 0 0 1
S3 S2
LA: red LA: red
LB: yellow LB: green
TB
TB Output Encoding
green 00
LA1 = S1
yellow 01
red 10
FSM Output Table
Reset
TA Current State Outputs
S0 TA S1 S1 S0 LA1 LA0 LB1 LB0
LA: green LA: yellow
LB: red LB: red 0 0 0 0 1 0
0 1 0 1 1 0
1 0 1 0 0 0
1 1 1 0 0 1
S3 S2
LA: red LA: red
LB: yellow LB: green
TB
TB Output Encoding
green 00
LA1 = S1
yellow 01
LA0 = ~S1 ∙ S0
red 10
FSM Output Table
Reset
TA Current State Outputs
S0 TA S1 S1 S0 LA1 LA0 LB1 LB0
LA: green LA: yellow
LB: red LB: red 0 0 0 0 1 0
0 1 0 1 1 0
1 0 1 0 0 0
1 1 1 0 0 1
S3 S2
LA: red LA: red
LB: yellow LB: green
TB
TB Output Encoding
green 00
LA1 = S1
yellow 01
LA0 = ~S1 ∙ S0
LB1 = ~S1 red 10
FSM Output Table
Reset
TA Current State Outputs
S0 TA S1 S1 S0 LA1 LA0 LB1 LB0
LA: green LA: yellow
LB: red LB: red 0 0 0 0 1 0
0 1 0 1 1 0
1 0 1 0 0 0
1 1 1 0 0 1
S3 S2
LA: red LA: red
LB: yellow LB: green
TB
TB Output Encoding
green 00
LA1 = S1
yellow 01
LA0 = ~S1 ∙ S0
LB1 = ~S1 red 10
LB0 = S1 ∙ S0
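An added sketch of the Moore output logic above; the outputs depend only on the current state bits (output encoding: green=00, yellow=01, red=10).

void output_logic(int S1, int S0, int *LA1, int *LA0, int *LB1, int *LB0) {
    *LA1 = S1;          // LA1
    *LA0 = !S1 && S0;   // LA0
    *LB1 = !S1;         // LB1
    *LB0 = S1 && S0;    // LB0
}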
Finite State Machine:
Schematic
FSM Schematic: State Register

151
FSM Schematic: State Register
CLK
S'1 S1

S'0 S0
r
Reset
state register

152
FSM Schematic: Next State Logic
CLK
S'1 S1

TA S'0 S0
r
TB Reset
S1 S0

inputs next state logic state register

S’1 = S1 xor S0

S’0 = (~S1 ∙ ~S0 ∙ ~TA) + (S1 ∙ ~S0 ∙ ~TB)

153
FSM Schematic: Output Logic
CLK LA1
S'1 S1
LA0

TA S'0 S0
LB1
r
TB Reset
S1 S0 LB0

inputs next state logic state register output logic outputs

LA1 = S1
LA0 = ~S1 ∙ S0
LB1 = ~S1
LB0 = S1 ∙ S0
154
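Putting the pieces together (an added illustration combining the next-state and output sketches above with a state register): one call models one clock edge of the traffic-light controller.

typedef struct { int S1, S0; } tl_state_t;

void traffic_light_tick(tl_state_t *st, int reset, int TA, int TB,
                        int *LA1, int *LA0, int *LB1, int *LB0) {
    int S1 = st->S1, S0 = st->S0;
    // output logic (Moore: depends only on the current state)
    *LA1 = S1;  *LA0 = !S1 && S0;  *LB1 = !S1;  *LB0 = S1 && S0;
    // next-state logic + state register update at the clock edge
    if (reset) { st->S1 = 0; st->S0 = 0; }              // Reset forces state S0
    else       { st->S1 = S1 ^ S0;
                 st->S0 = (!S1 && !S0 && !TA) || (S1 && !S0 && !TB); }
}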
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

155
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

156
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

157
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

158
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

159
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

160
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

161
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
This is from H&H Section 3.4.1 TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

162
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

S3 S2
LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

163
FSM Timing Diagram Reset
S0
TA__
TA S1
LA: yellow LA: yellow
LB: red LB: red

See H&H Chapter 3.4 S3 S2


LA: red LA: red
LB: yellow __ LB: green
TB
TB

Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 Cycle 10

CLK

Reset

TA

TB

S'1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00) S1 (01)

S1:0 ?? S0 (00) S1 (01) S2 (10) S3 (11) S0 (00)

LA1:0 ?? Green (00) Yellow (01) Red (10) Green (00)

LB1:0 ?? Red (10) Green (00) Yellow (01) Red (10)

0 5 10 15 20 25 30 35 40 45 t (sec)

164
Finite State Machine:
State Encoding

165
FSM State Encoding
◼ How do we encode the state bits?
❑ Three common state binary encodings with different tradeoffs
1. Fully Encoded
2. 1-Hot Encoded
3. Output Encoded

◼ Let’s see an example Swiss traffic light with 4 states


❑ Green, Yellow, Red, Yellow+Red

166
FSM State Encoding (II)
1. Binary Encoding (Full Encoding):
❑ Use the minimum possible number of bits

◼ Use log2(num_states) bits to represent the states

❑ Example state encodings: 00, 01, 10, 11

❑ Minimizes # flip-flops, but not necessarily output logic or


next state logic

2. One-Hot Encoding:
❑ Each bit encodes a different state

◼ Uses num_states bits to represent the states

◼ Exactly 1 bit is “hot” for a given state


❑ Example state encodings: 0001, 0010, 0100, 1000
❑ Simplest design process – very automatable
❑ Maximizes # flip-flops, minimizes next state logic
167
FSM State Encoding (III)
3. Output Encoding:
❑ Outputs are directly accessible in the state encoding

❑ For example, since we have 3 outputs (light color),


encode state with 3 bits, where each bit represents a
color
❑ Example states: 001, 010, 100, 110
◼ Bit0 encodes green light output,
◼ Bit1 encodes yellow light output
◼ Bit2 encodes red light output

❑ Minimizes output logic


❑ Only works for Moore Machines (output function of state)
168
FSM State Encoding (III)
3. Output Encoding:
❑ Outputs are directly accessible in the state encoding

❑ For example, since we have 3 outputs (light color),
encode state with 3 bits, where each bit represents a
color
❑ Example states: 001, 010, 100, 110
◼ Bit0 encodes green light output,
◼ Bit1 encodes yellow light output
◼ Bit2 encodes red light output

❑ Minimizes output logic

❑ Only works for Moore Machines (output function of state)

The designer must carefully choose an encoding scheme
to optimize the design under given constraints
169
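An added illustration of the three schemes for the four traffic-light states (Green, Yellow, Red, Red+Yellow); the exact assignment of code words to states is an assumption, except for the output encoding, which follows the bit meanings above.

enum binary_enc { B_GREEN = 0x0, B_YELLOW = 0x1, B_RED = 0x2, B_REDYEL = 0x3 };  // 2 bits (full encoding)
enum onehot_enc { H_GREEN = 0x1, H_YELLOW = 0x2, H_RED = 0x4, H_REDYEL = 0x8 };  // 4 bits, exactly one "hot"
enum output_enc { O_GREEN = 0x1, O_YELLOW = 0x2, O_RED = 0x4, O_REDYEL = 0x6 };  // bit0=green, bit1=yellow, bit2=red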
Moore vs. Mealy Machines

170
Recall: Moore vs. Mealy FSMs
◼ Next state is determined by the current state and the inputs
◼ Two types of FSMs differ in the output logic:
❑ Moore FSM: outputs depend only on the current state
❑ Mealy FSM: outputs depend on the current state and the
inputs
Moore FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

Mealy FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

171
Moore vs. Mealy FSM Examples
◼ Alyssa P. Hacker has a snail that crawls down a paper tape with
1’s and 0’s on it.
◼ The snail smiles whenever the last four digits it has crawled over
are 1101.
◼ Design Moore and Mealy FSMs of the snail’s brain.
Moore FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

Mealy FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

172
Moore vs. Mealy FSM Examples
◼ Alyssa P. Hacker has a snail that crawls down a paper tape with
1’s and 0’s on it.
◼ The snail smiles whenever the last four digits it has crawled over
are 1101.
◼ Design Moore and Mealy FSMs of the snail’s brain.
Moore FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

Mealy FSM

CLK
M next
next k state k state output N
inputs state outputs
logic
logic

173
State Transition Diagrams
Moore FSM 1
reset
1 1 0 1

S0 S1 S2 S3 S4
0 0 0 0 1
0
1 0 0
0

What are the tradeoffs?


Mealy FSM
1/1
reset
1/0 1/0 0/0

S0 S1 S2 S3
0/0 1/0 0/0
0/0

174
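An added software rendering of the Moore FSM above: the output is 1 (the snail smiles) only in state S4, reached after the snail has crawled over ...1101.

typedef enum { S0, S1, S2, S3, S4 } snail_state_t;

snail_state_t snail_next(snail_state_t s, int bit) {
    switch (s) {
        case S0: return bit ? S1 : S0;
        case S1: return bit ? S2 : S0;
        case S2: return bit ? S2 : S3;  // extra 1s keep the "11" progress
        case S3: return bit ? S4 : S0;
        case S4: return bit ? S2 : S0;  // "...11011" already ends in "11"
    }
    return S0;
}

int snail_output(snail_state_t s) { return s == S4; }  // Moore: output depends only on the state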
FSM Design Procedure
◼ Determine all possible states of your machine

◼ Develop a state transition diagram


❑ Generally this is done from a textual description
❑ You need to 1) determine the inputs and outputs for each state and
2) figure out how to get from one state to another

◼ Approach
❑ Start by defining the reset state and what happens from it – this is
typically an easy point to start from
❑ Then continue to add transitions and states
❑ Picking good state names is very important
❑ Building an FSM is like programming (but it is not programming!)
◼ An FSM has a sequential “control-flow” like a program with conditionals and goto’s
◼ The if-then-else construct is controlled by one or more inputs
◼ The outputs are controlled by the state or the inputs
❑ In hardware, we typically have many concurrent FSMs
175
What is to Come: LC-3 Processor

176
What is to Come: LC-3 Datapath

177
Backup Slides:
Different Types of Flip Flops

178
Enabled Flip-Flops
◼ Inputs: CLK, D, EN
❑ The enable input (EN) controls when new data (D) is stored
◼ Function:
❑ EN = 1: D passes through to Q on the clock edge
❑ EN = 0: the flip-flop retains its previous state

Internal
Circuit Symbol

EN CLK

0
D Q Q D Q
D 1
EN

179
Resettable Flip-Flop
◼ Inputs: CLK, D, Reset
❑ The Reset is used to set the output to 0.
◼ Function:
❑ Reset = 1: Q is forced to 0

❑ Reset = 0: the flip-flop behaves like an ordinary D flip-flop

Symbols

D Q
r
Reset

180
Resettable Flip-Flops
◼ Two types:
❑ Synchronous: resets at the clock edge only
❑ Asynchronous: resets immediately when Reset = 1
◼ Asynchronously resettable flip-flop requires changing the
internal circuitry of the flip-flop (see Exercise 3.10)
◼ Synchronously resettable flip-flop?

Internal
Circuit
CLK

D
D Q Q
Reset

181
Settable Flip-Flop
◼ Inputs: CLK, D, Set
◼ Function:
❑ Set = 1: Q is set to 1
❑ Set = 0: the flip-flop behaves like an ordinary D flip-flop

Symbols

D Q
s
Set

182
Backup Slides on
Karnaugh Maps (K-Maps)

183
Complex Cases
◼ One example
Cout = A'BC + AB'C + ABC' + ABC
◼ Problem
❑ Easy to see how to apply Uniting Theorem…
❑ Hard to know if you applied it in all the right places…
❑ …especially in a function of many more variables

◼ Question
❑ Is there an easier way to find potential simplifications?
❑ i.e., potential applications of Uniting Theorem…?

◼ Answer
❑ Need an intrinsically geometric representation for Boolean f( )
❑ Something we can draw, see…

184
Karnaugh Map
◼ Karnaugh Map (K-map) method
❑ K-map is an alternative method of representing the truth table
that helps visualize adjacencies in up to 6 dimensions
❑ Physical adjacency ↔ Logical adjacency

2-variable K-map 3-variable K-map 4-variable K-map


𝑩 0 1 𝑩𝑪 𝑪𝑫
𝑨
𝑨 00 01 11 10 𝑨𝑩 00 01 11 10
0 00 01
0 000 001 011 010 00 0000 0001 0011 0010

1 10 11
1 100 101 111 110 01 0100 0101 0111 0110

11 1100 1101 1111 1110

10 1000 1001 1011 1010

Numbering Scheme: 00, 01, 11, 10 is called a


“Gray Code” — only a single bit (variable) changes
from one code word and the next code word

185
Karnaugh Map Methods
Adjacent

𝑩𝑪
𝑨 00 01 11 10
000 100
010 110
001 101
0 000 001 011 010 011 111

1 100 101 111 110 000 010 110 100

001 011 111 101


Adjacent

K-map adjacencies go “around the edges”


Wrap around from first to last column
Wrap around from top row to bottom row

186
K-map Cover - 4 Input Variables

F(A, B, C, D) = Σ m(0, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15)

      CD
AB     00 01 11 10
  00    1  0  0  1
  01    0  1  0  0
  11    1  1  1  1
  10    1  1  1  1

F = A + B'D' + BC'D

Strategy for “circling” rectangles on K-map:
As big as possible
Biggest “oops!” that people forget:
Wrap-arounds

187
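An added sanity check: enumerate all 16 input combinations and confirm that the circled cover above matches the minterm list (minterm number = ABCD with A as the most significant bit).

#include <assert.h>

int main(void) {
    int minterms[] = { 0, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15 };
    for (int m = 0; m < 16; m++) {
        int A = (m >> 3) & 1, B = (m >> 2) & 1, C = (m >> 1) & 1, D = m & 1;
        int F = A | (!B & !D) | (B & !C & D);   // F = A + B'D' + BC'D
        int expected = 0;
        for (int i = 0; i < 11; i++) if (minterms[i] == m) expected = 1;
        assert(F == expected);
    }
    return 0;
}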
Logic Minimization Using K-Maps
◼ Very simple guideline:
❑ Circle all the rectangular blocks of 1’s in the map, using the
fewest possible number of circles
◼ Each circle should be as large as possible
❑ Read off the implicants that were circled

◼ More formally:
❑ A Boolean equation is minimized when it is written as a sum of
the fewest number of prime implicants
❑ Each circle on the K-map represents an implicant
❑ The largest possible circles are prime implicants

189
K-map Rules
◼ What can be legally combined (circled) in the K-map?
❑ Rectangular groups of size 2k for any integer k
❑ Each cell has the same value (1, for now)
❑ All values must be adjacent
◼ Wrap-around edge is okay

◼ How does a group become a term in an expression?


❑ Determine which literals are constant, and which vary across group
❑ Eliminate varying literals, then AND the constant literals
◼ constant 1 ➙ use X, constant 0 ➙ use X'

◼ What is a good solution?


❑ Biggest groupings ➙ eliminate more variables (literals) in each term
❑ Fewest groupings ➙ fewer terms (gates) all together
❑ OR together all AND terms you create from individual groups
190
K-map Example: Two-bit Comparator
A B C D F1 F2 F3
0 0 0 0 1 0 0
0 0 0 1 0 1 0
A AB = CD 0 0 1 0 0 1 0
F1
0 0 1 1 0 1 0
B AB < CD 0 1 0 0 0 0 1
F2
0 1 0 1 1 0 0
C AB > CD 0 1 1 0 0 1 0
F3
0 1 1 1 0 1 0
D 1 0 0 0 0 0 1
1 0 0 1 0 0 1
1 0 1 0 1 0 0
1 0 1 1 0 1 0
Design Approach: 1 1 0 0 0 0 1

Write a 4-Variable K-map 1 1 0 1 0 0 1


for each of the 3 1 1 1 0 0 0 1
output functions 1 1 1 1 1 0 0

191
K-map Example: Two-bit Comparator (2)
K-map for F1 A B C D F1 F2 F3
0 0 0 0 1 0 0
𝑪
0 0 0 1 0 1 0
𝑪𝑫 00 01 11 10 0 0 1 0 0 1 0
𝑨𝑩
00 0 0 1 1 0 1 0
1 0 1 0 0 0 0 1

01 0 1 0 1 1 0 0
1 0 1 1 0 0 1 0

11 𝑩 0 1 1 1 0 1 0
1 1 0 0 0 0 0 1
𝑨
1 0 0 1 0 0 1
10
1 1 0 1 0 1 0 0
1 0 1 1 0 1 0
1 1 0 0 0 0 1
𝑫 1 1 0 1 0 0 1
1 1 1 0 0 0 1
F1 = A'B'C'D' + A'BC'D + ABCD + AB'CD'
1 1 1 1 1 0 0

192
K-map Example: Two-bit Comparator (3)
K-map for F2 A B C D F1 F2 F3
𝑪 0 0 0 0 1 0 0
𝑪𝑫 00 01 11 10 0 0 0 1 0 1 0
𝑨𝑩 0 0 1 0 0 1 0
00
1 1 1 0 0 1 1 0 1 0
0 1 0 0 0 0 1
01
1 1 0 1 0 1 1 0 0
0 1 1 0 0 1 0
11 𝑩
0 1 1 1 0 1 0
𝑨 1 0 0 0 0 0 1
10
1 1 0 0 1 0 0 1
1 0 1 0 1 0 0
1 0 1 1 0 1 0
𝑫 1 1 0 0 0 0 1

F2 = A'C + A'B'D + B'CD 1 1 0 1 0 0 1


1 1 1 0 0 0 1
F3 = ? (Exercise for you) 1 1 1 1 1 0 0

193
K-maps with “Don’t Care”
◼ Don’t Care really means I don’t care what my circuit outputs if this
appears as input
❑ You have an engineering choice to use DON’T CARE patterns
intelligently as 1 or 0 to better simplify the circuit

A B C D F G

•••
I can pick 00, 01, 10, 11
0 1 1 0 X X independently of below
0 1 1 1
1 0 0 0 X X
1 0 0 1 I can pick 00, 01, 10, 11
independently of above
•••

194
Example: BCD Increment Function
◼ BCD (Binary Coded Decimal) digits
❑ Encode decimal digits 0 - 9 with bit patterns 00002 — 10012
❑ When incremented, the decimal sequence is 0, 1, …, 8, 9, 0, 1

A B C D W X Y Z
0 0 0 0 0 0 0 1
0 0 0 1 0 0 1 0
0 0 1 0 0 0 1 1
0 0 1 1 0 1 0 0
0 1 0 0 0 1 0 1
0 1 0 1 0 1 1 0
0 1 1 0 0 1 1 1
0 1 1 1 1 0 0 0
1 0 0 0 1 0 0 1
1 0 0 1 0 0 0 0
1 0 1 0 X X X X
1 0 1 1 X X X X
1 1 0 0 X X X X
These input patterns should
1 1 0 1 X X X X
never be encountered in practice
1 1 1 0 X X X X (hey -- it’s a BCD number!)
1 1 1 1 X X X X So, associated output values are
“Don’t Cares”
195
K-map for BCD Increment Function
[Figure: Four 4-variable K-maps, one each for the outputs W, X, Y, and Z of the BCD increment function of inputs ABCD, with the don’t-care cells marked X.]

Z (without don’t cares) = A'D' + B'C'D'
Z (with don’t cares) = D'

196
K-map Summary

◼ Karnaugh maps as a formal systematic approach


for logic simplification

◼ 2-, 3-, 4-variable K-maps

◼ K-maps with “Don’t Care” outputs

◼ H&H Section 2.7


197
