All Slides DT Only 2017
Discrete-Time
ELEC 301
Richard G. Baraniuk
Rice University
Copyright © 2016 Richard G. Baraniuk.
Written by Richard G. Baraniuk.
Digitized by JP Slavinsky, Daniel Williamson, Jared Adler, Abhijit Navlekar, Rachel Green,
Caroline Huang, Joy Dai, Isabella Gonzalez, & Matthew Moravec.
Examples:
• Speech signals transmit language via acoustic waves
• Radar signals transmit the position and velocity of targets via electromagnetic waves
• Electrophysiology signals transmit information about processes inside the body
• Financial signals transmit information about events in the economy
A-2
Welcome to Elec301 – Signals, Systems, and Learning
A-3
Welcome to Elec301 – Signals, Systems, and Learning
Broadly speaking, there are two ways to design systems (and we will study both)
A-4
Welcome to Elec301 – Signals, Systems, and Learning
The field of machine learning is often associated with “artificial intelligence” (AI) and
“data science”
A-5
Signal Processing
Signal processing has traditionally been a part of electrical and computer engineering
But now expands into applied mathematics, statistics, computer science, and a host of
application disciplines
Initially analog signals and systems implemented using resistors, capacitors, inductors, and
transistors (Elec241/242)
Since the 1940s increasingly digital signals and systems implemented using computers and
computer code (Matlab, Python, C, . . . )
• Advantages of digital include stability and programmability
• As computers have shrunk, digital signal processing has become ubiquitous
• Digital processing can be tuned using training data (machine learning)
A-6
Signal Processing Applications
A-7
Rice ELEC301
Goals: Develop intuition into and learn how to reason analytically about signal processing and
machine learning problems
Video lectures (first half of course), primary sources, supplemental materials, practice exercises,
homework, coding exercises, group competition, final exam
A-8
Before We Start
• Linear algebra (vectors, matrices, dot products, eigenvectors and eigenvalues, singular value
decomposition (SVD), bases . . . )
• Matlab or Python
To test your readiness or refresh your knowledge, visit the “Pre-class Mathematics Refresher”
A-9
Administrata
Piazza: https://piazza.com/rice/fall2016/elec301/home
A-10
Welcome to Elec301 – Signals, Systems, and Learning
A-11
Discrete Time Signals
Signals
DEFINITION
Examples:
• Speech signals transmit language via acoustic waves
• Radar signals transmit the position and velocity of targets via electromagnetic waves
• Electrophysiology signals transmit information about processes inside the body
• Financial signals transmit information about events in the economy
A-13
Signals are Functions
DEFINITION
A discrete-time signal x[n] is a function of an integer independent variable n ∈ Z whose value x[n] is a real or complex number
[stem plot of a discrete-time signal x[n], −1 ≤ n ≤ 7]
A-14
Plotting Real Signals
When x[n] ∈ R (ex: temperature in a room at noon on Monday), we use one signal plot
[stem plot of x[n], −15 ≤ n ≤ 15]
We will typically place the dependent variable label to the top of the plot.
[same stem plot with the label x[n] placed at the top]
A-15
A Menagerie of Signals
Google Share daily share price for 5 months
[stem plot: daily share price x[n], 0 ≤ n ≤ 150]
[stem plot: a second example signal, 0 ≤ n ≤ 350]
To plot a discrete-time signal in a program like Matlab, you should use the stem or similar
command and not the plot command
Correct:
[stem plot of x[n]: samples shown as discrete stems]
Incorrect:
[line plot of x[n]: samples incorrectly connected by a continuous curve]
A-17
Plotting Complex Signals
Here j = √−1 (engineering notation; mathematicians use i = √−1)
When x[n] ∈ C (ex: magnitude and phase of an electromagnetic wave), we use two signal plots
A-18
Plotting Complex Signals (Rectangular Form)
[stem plot of Re{x[n]}]
[stem plot of Im{x[n]}]
A-19
Plotting Complex Signals (Polar Form)
[stem plot of |x[n]|]
[stem plot of ∠x[n]]
A-20
Summary
Discrete-time signals
• Independent variable is an integer: n ∈ Z (we will refer to it as time)
• Dependent variable is a real or complex number: x[n] ∈ R or C
A-21
Signal Properties
Signal Properties
Infinite/finite-length signals
Periodic signals
Causal signals
Even/odd signals
Digital signals
A-23
Finite/Infinite-Length Signals
An infinite-length discrete-time signal x[n] is defined for all n ∈ Z, i.e., −∞ < n < ∞
[stem plot of an infinite-length signal, continuing in both directions]
A finite-length discrete-time signal x[n] is defined only for a finite range of N1 ≤ n ≤ N2
[stem plot of a finite-length signal defined for N1 ≤ n ≤ N2]
Important: a finite-length signal is undefined for n < N1 and n > N2
A-24
Periodic Signals
DEFINITION
x[n + mN] = x[n] ∀ m ∈ Z
[stem plot of a periodic signal]
Notes:
The period N must be an integer
A periodic signal is infinite in length
DEFINITION
A-25
Converting between Finite and Infinite-Length Signals
• periodization
A-26
Windowing
Converts a longer signal into a shorter one: y[n] = x[n] for N1 ≤ n ≤ N2, and y[n] = 0 otherwise
[stem plots: original x[n], and windowed y[n], which is zeroed outside N1 ≤ n ≤ N2]
A-27
Zero Padding
Converts a shorter signal into a longer one
[stem plots: original x[n], and zero-padded y[n], extended with zeros outside the original support]
A-28
Periodization
Converts a finite-length signal into an infinite-length, periodic signal
[stem plot of x[n], 0 ≤ n ≤ 7]
y[n] with period N = 8
[stem plot of the periodization y[n]]
A-29
Useful Aside – Modular Arithmetic
Modular arithmetic with modulus N (mod-N ) takes place on a wheel with N time locations
• Ex: (12)_8 (“twelve mod eight”) = 4
A-30
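Matlab's mod and Python's % operator both implement this wheel arithmetic; a quick Python sketch of the example above (a stand-in for the course's Matlab demos):

```python
# Mod-N arithmetic: indices wrap around a wheel with N locations.
N = 8
print(12 % N)    # (12)_8 = 4: twelve wraps once around the 8-location wheel
print(-3 % N)    # negative indices wrap too: (-3)_8 = 5
```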
Periodization via Modular Arithmetic
Consider a length-N signal x[n] defined for 0 ≤ n ≤ N − 1
[stem plots: length-N signal x[n] and its periodization]
Important interpretation
• Infinite-length signals live on
the (infinite) number line
• Periodic signals live on a circle
– a wheel with N time locations
A-31
Finite-Length and Periodic Signals are Equivalent
[stem plots: finite-length x[n], 0 ≤ n ≤ 7, and periodic y[n] with period N = 8]
All of the information in a periodic signal is contained in one period (of finite length)
Conclusion: We can and will think of finite-length signals and periodic signals interchangeably
We can choose the most convenient viewpoint for solving any given problem
• Application: Shifting finite length signals
A-32
Causal Signals
DEFINITION
A signal x[n] is causal if x[n] = 0 for all n < 0
[stem plots of two example signals]
A signal x[n] is acausal if it is not causal
A-33
Even Signals
DEFINITION
A signal x[n] is even if x[−n] = x[n]
[stem plot of an even signal]
A-34
Odd Signals
DEFINITION
A signal x[n] is odd if x[−n] = −x[n]
[stem plot of an odd signal]
A-35
Even+Odd Signal Decomposition
Useful fact: Every signal x[n] can be decomposed into the sum of its even part + its odd part
Even part: e[n] = (1/2)(x[n] + x[−n]) (easy to verify that e[n] is even)
Odd part: o[n] = (1/2)(x[n] − x[−n]) (easy to verify that o[n] is odd)
A-36
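The decomposition is easy to check numerically. A short Python sketch (the pulse signal here is a made-up example, not one from the slides):

```python
# Even/odd decomposition check. Treating the signal as a function of n
# makes x[-n] easy to evaluate.
def x(n):
    return 1.0 if 0 <= n <= 4 else 0.0   # a causal pulse: neither even nor odd

def e(n):  # even part: e[n] = (1/2)(x[n] + x[-n])
    return 0.5 * (x(n) + x(-n))

def o(n):  # odd part: o[n] = (1/2)(x[n] - x[-n])
    return 0.5 * (x(n) - x(-n))

for n in range(-6, 7):
    assert e(n) == e(-n)            # even symmetry
    assert o(n) == -o(-n)           # odd antisymmetry
    assert e(n) + o(n) == x(n)      # the two parts reconstruct x
```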
Even+Odd Signal Decomposition in Pictures
[stem plot of x[n]]
A-37
Even+Odd Signal Decomposition in Pictures
[plots: the even part e[n] = (1/2)(x[n] + x[−n]) and the odd part o[n] = (1/2)(x[n] − x[−n]), which sum to recover x[n]]
A-38
Digital Signals
Digital signals are a special sub-class of discrete-time signals
• Independent variable is still an integer: n ∈ Z
• Dependent variable x[n] takes only one of D discrete values
• Typically, choose D = 2^q and represent each possible level of x[n] as a digital code with q bits
[stem plot of a quantized signal Q(x[n]) taking levels 0, 1, 2, 3]
• Ex: Compact discs use q = 16 bits ⇒ D = 2^16 = 65536 levels
A-39
Summary
A-40
Shifting Signals
Shifting Infinite-Length Signals
Given an infinite-length signal x[n], we can shift it back and forth in time via x[n − m], m ∈ Z
[stem plot of x[n], 0 ≤ n ≤ 60]
When m > 0, x[n − m] shifts to the right (forward in time, delay)
[stem plot of x[n − 10], shifted 10 samples to the right]
When m < 0, x[n − m] shifts to the left (back in time, advance)
[stem plot of x[n + 10], shifted 10 samples to the left]
A-42
Periodic Signals and Modular Arithmetic
A convenient way to express a signal y[n] that is periodic with period N is
y[n] = x[(n)N ], n∈Z
where x[n], defined for 0 ≤ n ≤ N − 1, comprises one period
[stem plots: one period x[n], 0 ≤ n ≤ 7, and periodic y[n] with period N = 8]
Important interpretation
• Infinite-length signals live on
the (infinite) number line
• Periodic signals live on a circle
– a wheel with N time places
A-43
Shifting Periodic Signals
Periodic signals can also be shifted; consider y[n] = x[(n)N ]
[stem plots: periodic y[n] and a shifted version of it]
A-44
Shifting Finite-Length Signals
Consider finite-length signals x and v defined for 0 ≤ n ≤ N − 1 and suppose “v[n] = x[n − 1]”
v[0] = ??
v[1] = x[0]
v[2] = x[1]
v[3] = x[2]
⋮
v[N − 1] = x[N − 2]
?? = x[N − 1]
What to put in v[0]? What to do with x[N − 1]? We don’t want to invent/lose information
Elegant solution: Assume x and v are both periodic with period N ; then v[n] = x[(n − 1)N ]
This is called a periodic or circular shift (see circshift and mod in Matlab)
A-45
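A minimal Python version of the circular shift, mirroring the circshift/mod recipe mentioned above (a sketch; the course demos use Matlab):

```python
def cshift(x, m):
    """Circular shift of a length-N signal: v[n] = x[(n - m) mod N]."""
    N = len(x)
    return [x[(n - m) % N] for n in range(N)]

x = [0, 1, 2, 3, 4, 5, 6, 7]
print(cshift(x, 3))   # → [5, 6, 7, 0, 1, 2, 3, 4]
```

Note that v[0] = x[5], v[1] = x[6], …, exactly as in the worked example that follows.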
Circular Shift Example
Elegant formula for circular shift of x[n] by m time steps: x[(n − m)N ]
Ex: x and v defined for 0 ≤ n ≤ 7, that is, N = 8. Find v[n] = x[(n − 3)8 ]
A-46
Circular Shift Example
Elegant formula for circular shift of x[n] by m time steps: x[(n − m)N ]
Ex: x and v defined for 0 ≤ n ≤ 7, that is, N = 8. Find v[n] = x[(n − 3)_8]
v[0] = x[5]
v[1] = x[6]
v[2] = x[7]
v[3] = x[0]
v[4] = x[1]
v[5] = x[2]
v[6] = x[3]
v[7] = x[4]
A-47
Circular Time Reversal
For infinite-length signals, the transformation of reversing the time axis x[−n] is obvious
Not so obvious for periodic/finite-length signals
Elegant formula for reversing the time axis of a periodic/finite-length signal: x[(−n)N ]
Ex: x and v defined for 0 ≤ n ≤ 7, that is, N = 8. Find v[n] = x[(−n)N ]
v[0] = x[0]
v[1] = x[7]
v[2] = x[6]
v[3] = x[5]
v[4] = x[4]
v[5] = x[3]
v[6] = x[2]
v[7] = x[1]
A-48
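The same modular trick gives circular time reversal; a Python sketch reproducing the table above:

```python
def creverse(x):
    """Circular time reversal: v[n] = x[(-n) mod N]."""
    N = len(x)
    return [x[(-n) % N] for n in range(N)]

x = [0, 1, 2, 3, 4, 5, 6, 7]
print(creverse(x))    # → [0, 7, 6, 5, 4, 3, 2, 1]
```

Sample 0 stays fixed and the rest of the wheel reverses, matching v[0] = x[0], v[1] = x[7], …, v[7] = x[1].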
Summary
A-49
Key Test Signals
A Toolbox of Test Signals
Delta function
Unit step
Unit pulse
Real exponential
A-51
Delta Function
DEFINITION
The delta function (aka unit impulse): δ[n] = 1 for n = 0, and δ[n] = 0 otherwise
[stem plot of δ[n]]
The shifted delta function δ[n − m] peaks up at n = m; here m = 9
[stem plot of δ[n − 9]]
A-52
Delta Functions Sample
Multiplying a signal by a shifted delta function picks out one sample of the signal and sets all
other samples to zero
y[n] = x[n] δ[n − m] = x[m] δ[n − m]
Important: m is a fixed constant, and so x[m] is a constant (and not a signal)
[stem plots: x[n], δ[n − 9], and y[n] = x[9] δ[n − 9]]
A-53
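The sampling property is one line of code. A Python sketch with a hypothetical test signal (the ramp x[n] = 0.5n is an arbitrary choice):

```python
def delta(n, m=0):
    """Shifted delta function: 1 at n = m, 0 elsewhere."""
    return 1.0 if n == m else 0.0

x = {n: 0.5 * n for n in range(-3, 4)}   # hypothetical test signal
m = 2
# y[n] = x[n] δ[n - m] = x[m] δ[n - m]: only the sample at n = m survives
y = {n: x[n] * delta(n, m) for n in x}
assert all(y[n] == (x[m] if n == m else 0.0) for n in x)
```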
Unit Step
DEFINITION
The unit step: u[n] = 1 for n ≥ 0, and u[n] = 0 for n < 0
[stem plot of u[n]]
The shifted unit step u[n − m] jumps from 0 to 1 at n = m; here m = 5
[stem plot of u[n − 5]]
A-54
Unit Step Selects Part of a Signal
Multiplying a signal by a shifted unit step function zeros out its entries for n < m
y[n] = x[n] u[n − m]
(Note: For m = 0, this makes y[n] causal)
[stem plots: x[n], u[n − 5], and y[n] = x[n] u[n − 5]]
A-55
Unit Pulse
DEFINITION
The unit pulse (aka boxcar): p[n] = 0 for n < N1, p[n] = 1 for N1 ≤ n ≤ N2, and p[n] = 0 for n > N2
[stem plot of p[n]]
One of many different formulas for the unit pulse
p[n] = u[n − N1 ] − u[n − (N2 + 1)]
A-56
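The step-difference formula can be checked directly; a Python sketch with hypothetical endpoints N1 = −2, N2 = 3:

```python
def u(n):
    """Unit step."""
    return 1 if n >= 0 else 0

N1, N2 = -2, 3          # hypothetical pulse endpoints

def p(n):
    """Unit pulse, defined directly."""
    return 1 if N1 <= n <= N2 else 0

# Verify p[n] = u[n - N1] - u[n - (N2 + 1)] sample by sample
for n in range(-10, 11):
    assert p(n) == u(n - N1) - u(n - (N2 + 1))
```

The first step turns the pulse on at N1; subtracting a step delayed to N2 + 1 turns it back off.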
Real Exponential
DEFINITION
The real exponential: r[n] = a^n with a > 0
For a > 1, r[n] shrinks to the left and grows to the right; here a = 1.1
[stem plot of a growing real exponential]
For 0 < a < 1, r[n] grows to the left and shrinks to the right; here a = 0.9
[stem plot of a decaying real exponential]
A-57
Summary
We will use our test signals often, especially the delta function and unit step
A-58
Sinusoids
Sinusoids
We will introduce
• Real-valued sinusoids
• (Complex) sinusoid
• Complex exponential
A-60
Sinusoids
There are two natural real-valued sinusoids: cos(ωn + φ) and sin(ωn + φ)
Frequency: ω (units: radians/sample)
Phase: φ (units: radians)
cos(ωn) (even)
[stem plot of cos(ωn)]
sin(ωn) (odd)
[stem plot of sin(ωn)]
A-61
Sinusoid Examples
[stem plots of four examples: cos(0n), sin(0n), sin((π/4)n + 2π/6), cos(πn)]
A-62
Get Comfortable with Sinusoids!
It’s easy to play around in Matlab to get comfortable with the properties of sinusoids
N=36;
n=0:N-1;
omega=pi/6;
phi=pi/4;
x=cos(omega*n+phi);
stem(n,x)
[stem plot produced by the code above]
A-63
Complex Sinusoid
The complex-valued sinusoid combines both the cos and sin terms (via Euler’s identity): e^{j(ωn+φ)} = cos(ωn + φ) + j sin(ωn + φ)
[stem plots of the real and imaginary parts]
A-64
A Complex Sinusoid is a Helix
A-65
Complex Sinusoid is a Helix (Animation)
A-66
Negative Frequency
Negative frequency is nothing to be afraid of! Consider a sinusoid with a negative frequency −ω
Bottom line: negating the frequency is equivalent to complex conjugating a complex sinusoid,
which flips the sign of the imaginary, sin term
Re(e^{jωn}) = cos(ωn)    Re(e^{−jωn}) = cos(ωn)
Im(e^{jωn}) = sin(ωn)    Im(e^{−jωn}) = −sin(ωn)
[stem plots of these four signals]
A-67
Phase of a Sinusoid e^{j(ωn+φ)}
φ is a (frequency independent) shift that is referenced to one period of oscillation
[stem plots of four phase-shifted sinusoids:]
cos((π/6)n − 0)
cos((π/6)n − π/4)
cos((π/6)n − π/2) = sin((π/6)n)
cos((π/6)n − 2π) = cos((π/6)n)
A-68
Summary
Sinusoids play a starring role in both the theory and applications of signals and systems
A complex sinusoid is a helix in three-dimensional space and naturally induces the sine and cosine
Negative frequency is nothing to be scared of; it just means that the helix spins backwards
A-69
Discrete-Time Sinusoids Are Weird
Discrete-Time Sinusoids are Weird!
A-71
Weird Property #1: Aliasing of Sinusoids
Consider two sinusoids with two different frequencies
• ω ⇒ x1[n] = e^{j(ωn+φ)}
• ω + 2π ⇒ x2[n] = e^{j((ω+2π)n+φ)} = e^{j(ωn+φ)} e^{j2πn} = x1[n], since e^{j2πn} = 1 for all n ∈ Z
Note: Any integer multiple of 2π will do; try with x3[n] = e^{j((ω+2πm)n+φ)}, m ∈ Z
A-72
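Aliasing is easy to verify numerically; a Python sketch (ω = π/6 and φ = π/4 are arbitrary choices, not values from the slides):

```python
import cmath
import math

# x1[n] = e^{j(ωn+φ)} and x2[n] = e^{j((ω+2π)n+φ)} agree at every integer n,
# because e^{j2πn} = 1 when n is an integer.
omega, phi = math.pi / 6, math.pi / 4
for n in range(-20, 21):
    x1 = cmath.exp(1j * (omega * n + phi))
    x2 = cmath.exp(1j * ((omega + 2 * math.pi) * n + phi))
    assert abs(x1 - x2) < 1e-9   # identical up to floating-point error
```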
Aliasing of Sinusoids – Example
x1[n] = cos((π/6)n)
[stem plot]
x2[n] = cos((13π/6)n) = cos(((π/6) + 2π)n)
[stem plot, identical to the one above]
A-73
Alias-Free Frequencies
Since e^{j((ω+2πm)n+φ)} = e^{j(ωn+φ)} for any m ∈ Z,
the only frequencies that lead to unique (distinct) sinusoids lie in an interval of length 2π
• 0 ≤ ω < 2π
• −π < ω ≤ π
A-74
Low and High Frequencies
e^{j(ωn+φ)}
Low frequencies: ω close to 0 or 2π rad
Ex: cos((π/10)n)
[stem plot of a slowly oscillating sinusoid]
n
High frequencies: ω close to π or −π rad
Ex: cos((9π/10)n)
[stem plot of a rapidly oscillating sinusoid]
A-75
Weird Property #2: Periodicity of Sinusoids
Consider x1[n] = e^{j(ωn+φ)} with frequency ω = 2πk/N, k, N ∈ Z (harmonic frequency)
[stem plot: x1 repeats with period N]
Note: x1 is periodic with the (smaller) period N/k when N/k is an integer
A-76
Aperiodicity of Sinusoids
Consider x2[n] = e^{j(ωn+φ)} with frequency ω ≠ 2πk/N, k, N ∈ Z (not a harmonic frequency)
Is x2 periodic?
[stem plot: x2 oscillates but never repeats]
If its frequency ω is not harmonic, then a sinusoid oscillates but is not periodic!
A-77
Harmonic Sinusoids
Semi-amazing fact: The only periodic discrete-time sinusoids e^{j(ωn+φ)} are those with harmonic frequencies
ω = 2πk/N, k, N ∈ Z
• The harmonic sinusoids are somehow magical (they play a starring role later in the DFT)
A-78
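A quick numerical check of the semi-amazing fact, in Python (k = 3, N = 8 and the non-harmonic ω = 1 rad/sample are arbitrary choices):

```python
import cmath
import math

def x(n, omega):
    """Complex sinusoid e^{jωn} evaluated at integer n."""
    return cmath.exp(1j * omega * n)

k, N = 3, 8
w_harm = 2 * math.pi * k / N            # harmonic frequency ω = 2πk/N

# Harmonic case: x[n + N] = x[n] for every n
assert all(abs(x(n + N, w_harm) - x(n, w_harm)) < 1e-9 for n in range(50))

# Non-harmonic case (ω = 1): already fails at n = 0, since x[N] ≠ x[0]
assert abs(x(N, 1.0) - x(0, 1.0)) > 0.1
```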
Harmonic Sinusoids (Matlab)
A-79
Summary
The only sinusoids that are periodic: harmonic sinusoids e^{j((2πk/N)n+φ)}, k, N ∈ Z
A-80
Complex Exponentials
Complex Exponential
Complex sinusoid e^{j(ωn+φ)} is of the form e^{purely imaginary number}; more generally, consider z^n for a complex number z = |z| e^{jω}
• |z| = magnitude of z
• ω = ∠z, phase angle of z
• Can visualize z ∈ C as a point in the complex plane
Now we have
z^n = (|z| e^{jω})^n = |z|^n (e^{jω})^n = |z|^n e^{jωn}
A-82
Complex Exponential is a Spiral
z^n = (|z| e^{jω})^n = |z|^n e^{jωn}
A-83
Complex Exponential is a Spiral
z^n = (|z| e^{jω})^n = |z|^n e^{jωn}
[stem plots of Im(z^n): a decaying spiral envelope for |z| < 1 and a growing one for |z| > 1]
A-84
Complex Exponentials and z Plane (Matlab)
A-85
Summary
A-86
Signals are Vectors
Set B
Signals are Vectors
By interpreting signals as vectors in a vector space, we will be able to speak about the length of
a signal (its “strength,” more below), angles between signals (their similarity), and more
We will also be able to use matrices to better understand how signal processing systems work
B-2
Vector Space
If x, y ∈ V and α is a scalar, then
αx ∈ V and x + y ∈ V
In words:
• A rescaled vector stays in the vector space
• The sum of two vectors stays in the vector space
B-3
The Vector Space R^2 (1)
Vectors in R^2: x = [x[0], x[1]]ᵀ, y = [y[0], y[1]]ᵀ, with x[0], x[1], y[0], y[1] ∈ R
• Note: We will enumerate the entries of a vector starting from 0 rather than 1
(this is the convention in signal processing and programming languages like C, but not in Matlab)
• Note: We will not use the traditional boldface or underline notation for vectors
Scalars: α ∈ R
Scaling: α x = α [x[0], x[1]]ᵀ = [α x[0], α x[1]]ᵀ
B-4
The Vector Space R^2 (2)
Vectors in R^2: x = [x[0], x[1]]ᵀ, y = [y[0], y[1]]ᵀ, with x[0], x[1], y[0], y[1] ∈ R
Scalars: α ∈ R
Summing: x + y = [x[0], x[1]]ᵀ + [y[0], y[1]]ᵀ = [x[0] + y[0], x[1] + y[1]]ᵀ
B-5
The Vector Space R^N
Vectors in R^N: x = [x[0], x[1], …, x[N − 1]]ᵀ, x[n] ∈ R
[stem plot of a length-N signal]
This is exactly the same as a real-valued discrete-time signal; that is, signals are vectors
• Scaling αx amplifies/attenuates a signal by the factor α
• Summing x + y creates a new signal that mixes x and y
R^N is harder to visualize than R^2 and R^3, but the intuition gained from R^2 and R^3 generally
holds true with no surprises (at least in this course)
B-6
The Vector Space R^N – Scaling
Signal x[n]
[stem plot of x[n]]
Scaled signal 3 x[n]
[stem plot of 3 x[n], three times taller]
B-7
The Vector Space R^N – Summing
Signal x[n]
[stem plot of x[n]]
Signal y[n]
[stem plot of y[n]]
Sum x[n] + y[n]
[stem plot of x[n] + y[n]]
B-8
The Vector Space C^N (1)
Vectors in C^N: x = [x[0], x[1], …, x[N − 1]]ᵀ, x[n] ∈ C
Scalars α ∈ C
B-9
The Vector Space C^N (2)
Rectangular form
x = [Re{x[0]} + j Im{x[0]}, …, Re{x[N − 1]} + j Im{x[N − 1]}]ᵀ = Re{x} + j Im{x}
Polar form
x = [|x[0]| e^{j∠x[0]}, …, |x[N − 1]| e^{j∠x[N−1]}]ᵀ
B-10
Summary
B-11
Linear Combinations of Signals
Linear Combination of Signals
y = α_0 x_0 + α_1 x_1 + · · · + α_{M−1} x_{M−1} = Σ_{m=0}^{M−1} α_m x_m
B-13
Linear Combination Example
A recording studio uses a mixing board (or desk)
to create a linear combination of the signals from
the different instruments that make up a song
B-14
Linear Combination = Matrix Multiplication
Step 1: Stack the vectors x_m ∈ C^N as column vectors into an N × M matrix
X = [x_0 | x_1 | · · · | x_{M−1}]
B-15
Linear Combination = Matrix Multiplication (The Gory Details)
M vectors in C^N: x_m = [x_m[0], x_m[1], …, x_m[N − 1]]ᵀ, m = 0, 1, …, M − 1
N × M matrix: X = [x_0 | x_1 | · · · | x_{M−1}]
Note: The row-n, column-m element of the matrix [X]_{n,m} = x_m[n]
M scalars α_m, m = 0, 1, …, M − 1, stacked as a = [α_0, α_1, …, α_{M−1}]ᵀ
Linear combination y = Xa
B-16
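The column-stacking recipe can be sketched in a few lines of Python (the signals and weights are made-up examples, not from the slides):

```python
# Linear combination y = Xa two ways: as a sum of scaled columns, and as
# row-by-column matrix multiplication with [X]_{n,m} = x_m[n].
x0 = [1.0, 0.0, 2.0]          # hypothetical length-3 signals (columns of X)
x1 = [0.0, 1.0, -1.0]
a = [2.0, 3.0]                # weights α_0, α_1
N, M = 3, 2
X = [[x0[n], x1[n]] for n in range(N)]   # N x M matrix, signals as columns

y_combo = [a[0] * x0[n] + a[1] * x1[n] for n in range(N)]
y_matmul = [sum(X[n][m] * a[m] for m in range(M)) for n in range(N)]
assert y_combo == y_matmul
print(y_matmul)               # → [2.0, 3.0, 1.0]
```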
Linear Combination = Matrix Multiplication (Summary)
Linear combination y = Xa
[picture: y = Xa]
y[n] = Σ_{m=0}^{M−1} α_m x_m[n]
B-17
Linear Combination as Matrix Multiplication (Matlab)
Click here to view a MATLAB demo using sound to explore linear combinations.
B-18
Summary
We can combine several signals to form one new signal via a linear combination
B-19
Norm of a Signal
Strength of a Vector
How to quantify the “strength” of a vector?
Signal x: [stem plot]
Signal y: [stem plot]
B-21
Strength of a Vector: 2-Norm
‖x‖_2 = √( Σ_{n=0}^{N−1} |x[n]|^2 )
The norm takes as input a vector in C^N and produces a real number that is ≥ 0
When it is clear from context, we will suppress the subscript “2” in ‖x‖_2 and just write ‖x‖
B-22
2-Norm Example
Ex: x = [1, 2, 3]ᵀ
ℓ2 norm: ‖x‖_2 = √( Σ_{n=0}^{N−1} |x[n]|^2 ) = √(1^2 + 2^2 + 3^2) = √14
B-23
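The √14 example translates directly to code; a Python sketch that also previews normalization (covered a couple of slides later):

```python
import math

x = [1, 2, 3]
# 2-norm: square root of the sum of squared magnitudes
norm2 = math.sqrt(sum(abs(v) ** 2 for v in x))
assert math.isclose(norm2, math.sqrt(14))

# Normalizing: scale by 1/‖x‖_2 to get a unit-norm vector
x_unit = [v / norm2 for v in x]
assert math.isclose(math.sqrt(sum(v ** 2 for v in x_unit)), 1.0)
```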
Strength of a Vector: p-Norm
The Euclidean length is not the only measure of “strength” of a vector in C^N
‖x‖_p = ( Σ_{n=0}^{N−1} |x[n]|^p )^{1/p}
‖x‖_1 = Σ_{n=0}^{N−1} |x[n]|
B-24
Strength of a Vector: ∞-Norm
‖x‖_∞ is simply the largest entry of the vector x (in absolute value)
[stem plot of x[n]]
Physical Significance of Norms (1)
While ‖x‖_2^2 measures the energy in a signal, ‖x‖_∞ measures the peak value (of the magnitude); both are very useful in applications
If the energy ‖x‖_2^2 is too large, then the coil of wire will melt from excessive heating
If the peak value ‖x‖_∞ is too large, then the large back-and-forth excursion of the coil of wire will tear it off of the paper cone
B-26
Physical Significance of Norms (2)
Consider a robotic car that we wish to guide down a roadway
Minimizing ‖y − x‖_2^2, the energy in the error signal, will tolerate a few large deviations from the lane center (not very safe)
Minimizing ‖y − x‖_∞, the maximum of the error signal, will not tolerate any large deviations from the lane center (much safer)
B-27
Normalizing a Vector
DEFINITION
A vector x is normalized if ‖x‖_2 = 1
Normalizing a vector is easy; just scale it by 1/‖x‖_2
Ex: x = [1, 2, 3]ᵀ, ‖x‖_2 = √( Σ_{n=0}^{N−1} |x[n]|^2 ) = √(1^2 + 2^2 + 3^2) = √14
x′ = (1/√14) x = [1/√14, 2/√14, 3/√14]ᵀ, ‖x′‖_2 = 1
B-28
Summary
Norms measure the “strength” of a signal; we introduced the 2-, 1-, and ∞-norms
B-29
Inner Product
The Geometry of Signals
Up to this point, we have developed the viewpoint of “signals as vectors” in a vector space
B-31
Aside: Transpose of a Vector
Recall that the transpose operation ᵀ converts a column vector to a row vector (and vice versa):
xᵀ = [x[0] x[1] · · · x[N − 1]]
In addition to transposition, the conjugate transpose (aka Hermitian transpose) operation ᴴ takes the complex conjugate:
xᴴ = [x[0]* x[1]* · · · x[N − 1]*]
B-32
Inner Product
The inner product (or dot product) between two vectors x, y ∈ C^N is given by
DEFINITION
⟨x, y⟩ = yᴴ x = Σ_{n=0}^{N−1} x[n] y[n]*
The inner product takes two signals (vectors in C^N) and produces a single (complex) number
Consider two vectors in R^2: x = [1, 2]ᵀ, y = [3, 2]ᵀ
‖x‖_2^2 = 1^2 + 2^2 = 5, ‖y‖_2^2 = 3^2 + 2^2 = 13
θ_{x,y} = arccos( (1×3 + 2×2) / (√5 √13) ) = arccos( 7/√65 ) ≈ 0.519 rad ≈ 29.7°
B-34
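The angle computation above, as a Python sketch:

```python
import math

x, y = [1, 2], [3, 2]
inner = sum(a * b for a, b in zip(x, y))      # ⟨x, y⟩ = 1·3 + 2·2 = 7
nx = math.sqrt(sum(a * a for a in x))         # ‖x‖_2 = √5
ny = math.sqrt(sum(b * b for b in y))         # ‖y‖_2 = √13
theta = math.acos(inner / (nx * ny))
print(round(theta, 3), round(math.degrees(theta), 1))   # → 0.519 29.7
```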
Inner Product Example 2
Signal x: [stem plot]
Signal y: [stem plot]
B-35
2-Norm from Inner Product
The inner product of a signal with itself gives its squared 2-norm: ⟨x, x⟩ = Σ_{n=0}^{N−1} x[n] x[n]* = Σ_{n=0}^{N−1} |x[n]|^2 = ‖x‖_2^2
B-36
Orthogonal Vectors
⟨x, y⟩ = 0
⟨x, y⟩ = 0 ⇒ θ_{x,y} = π/2 rad = 90°
[stem plots of two pairs of orthogonal signals]
B-37
Harmonic Sinusoids are Orthogonal
s_k[n] = e^{j(2πk/N)n}, n, k, N ∈ Z, 0 ≤ n ≤ N − 1, 0 ≤ k ≤ N − 1
B-38
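A brute-force Python check of orthogonality (and of the squared norm ⟨s_k, s_k⟩ = N, which anticipates the norm claim two slides ahead), shown here for N = 8:

```python
import cmath
import math

def s(k, N):
    """Harmonic sinusoid s_k[n] = e^{j(2πk/N)n}, n = 0, ..., N-1."""
    return [cmath.exp(1j * 2 * math.pi * k * n / N) for n in range(N)]

def inner(x, y):
    """Inner product with the conjugate on the second argument."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

N = 8
for k in range(N):
    for l in range(N):
        ip = inner(s(k, N), s(l, N))
        if k == l:
            assert abs(ip - N) < 1e-9   # ⟨s_k, s_k⟩ = N
        else:
            assert abs(ip) < 1e-9       # distinct harmonics are orthogonal
```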
Harmonic Sinusoids are Orthogonal (Matlab)
Click here to view a MATLAB demo exploring the orthogonality of harmonic sinusoids.
B-39
Normalizing Harmonic Sinusoids
s_k[n] = e^{j(2πk/N)n}, n, k, N ∈ Z, 0 ≤ n ≤ N − 1, 0 ≤ k ≤ N − 1
Claim: ‖s_k‖_2 = √N
B-40
Summary
B-41
Matrix Multiplication
and Inner Product
Recall: Matrix Multiplication as a Linear Combination of Columns
Consider the (real- or complex-valued) matrix multiplication y = Xa
B-43
Visualizing Matrix Multiplication as a Linear Combination of Columns
[diagram: y = Xa visualized as a linear combination of the columns of X]
B-44
Matrix Multiplication as a Sequence of Inner Products of Rows
Consider the real-valued matrix multiplication y = Xa
We can compute each element y[n] in y as the inner product of the n-th row of X with the
vector a
y[n] = [x_0[n] x_1[n] · · · x_{M−1}[n]] a = Σ_{m=0}^{M−1} x_m[n] α_m, i.e., y = Xa
B-45
Visualizing Matrix Multiplication as a Sequence of Inner Products of Rows
We can compute each element y[n] in y as the inner product of the n-th row of X with the
vector a
[diagram: y[n] computed as the inner product of row n of X with a]
B-46
Matrix Multiplication as a Sequence of Inner Products of Rows
The same interpretation works, but we need to use the following “inner product”
y[n] = Σ_{m=0}^{M−1} α_m x_m[n] ≠ ⟨(row n of X)ᵀ, a⟩, 0 ≤ n ≤ N − 1
Note: This is nearly the inner product for complex signals, except that it lacks the
complex conjugation
B-47
Summary
Given the matrix/vector product y = Xa, we can compute each element y[n] in y as the
inner product of the n-th row of X with the vector a
Not strictly true for complex matrices/vectors, but the interpretation is useful nevertheless
B-48
Cauchy Schwarz Inequality
Comparing Signals
Inner product and angle between vectors enable us to compare signals
⟨x, y⟩ = yᴴ x = Σ_{n=0}^{N−1} x[n] y[n]*
cos θ_{x,y} = Re{⟨x, y⟩} / (‖x‖_2 ‖y‖_2)
B-50
Cauchy-Schwarz Inequality (1)
Recall that cos θ_{x,y} = ⟨x, y⟩ / (‖x‖_2 ‖y‖_2)
Since |cos θ| ≤ 1, we obtain the Cauchy-Schwarz inequality
0 ≤ |⟨x, y⟩| / (‖x‖_2 ‖y‖_2) ≤ 1
B-51
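A numerical sanity check of the bound, in Python (the random test vectors and the collinear example are made up, not from the slides):

```python
import math
import random

def norm2(v):
    return math.sqrt(sum(a * a for a in v))

# |⟨x, y⟩| ≤ ‖x‖_2 ‖y‖_2 holds for arbitrary real vectors
random.seed(0)
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(10)]
    y = [random.gauss(0, 1) for _ in range(10)]
    ip = abs(sum(a * b for a, b in zip(x, y)))
    assert ip <= norm2(x) * norm2(y) + 1e-12

# Equality is attained when the vectors are collinear: y = 2x
x = [1.0, -2.0, 3.0]
y = [2.0 * v for v in x]
ip = abs(sum(a * b for a, b in zip(x, y)))
assert math.isclose(ip, norm2(x) * norm2(y))
```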
Cauchy-Schwarz Inequality (2)
• Lower bound: ⟨x, y⟩ = 0 or θ_{x,y} = 90°: x and y are most different when they are orthogonal
• Upper bound: |⟨x, y⟩| = ‖x‖_2 ‖y‖_2 or θ_{x,y} = 0°: x and y are most similar when they are collinear
(aka linearly dependent, y = αx)
B-52
Cauchy-Schwarz Inequality Applications
How does a digital communication system decide whether the signal corresponding to a “0” was
transmitted or the signal corresponding to a “1”? (Hint: CSI)
How does a radar or sonar system find targets in the signal it receives after transmitting a pulse?
(Hint: CSI)
How do many computer vision systems find faces in images? (Hint: CSI)
B-53
Cauchy-Schwarz Inequality (Matlab)
B-54
Summary
0 ≤ |⟨x, y⟩| / (‖x‖_2 ‖y‖_2) ≤ 1
B-55
Infinite-Length Vectors (Signals)
From Finite to Infinite-Length Vectors
Up to this point, we have developed some useful tools for dealing with finite-length vectors
(signals) that live in R^N or C^N: norms, inner product, linear combination
It turns out that these tools can be generalized to infinite-length vectors (sequences) by letting
N → ∞ (infinite-dimensional vector space, aka Hilbert Space)
x[n], −∞ < n < ∞, x = […, x[−2], x[−1], x[0], x[1], x[2], …]ᵀ
[stem plot of an infinite-length signal]
Obviously such a signal cannot be loaded into Matlab; however this viewpoint is still useful in
many situations
We will spell out the generalizations with emphasis on what changes from the finite-length case
B-57
2-Norm of an Infinite-Length Vector
‖x‖_2 = √( Σ_{n=−∞}^{∞} |x[n]|^2 )
When it is clear from context, we will suppress the subscript “2” in ‖x‖_2 and just write ‖x‖
What changes from the finite-length case: Not every infinite-length vector has a finite 2-norm
B-58
`2 Norm of an Infinite-Length Vector – Example
Signal: x[n] = 1, −∞ < n < ∞  [constant stem plot]
2-norm:
‖x‖_2^2 = Σ_{n=−∞}^{∞} |x[n]|^2 = Σ_{n=−∞}^{∞} 1 = ∞
Infinite energy!
B-59
p- and 1-Norms of an Infinite-Length Vector
‖x‖_p = ( Σ_{n=−∞}^{∞} |x[n]|^p )^{1/p}
‖x‖_1 = Σ_{n=−∞}^{∞} |x[n]|
What changes from the finite-length case: Not every infinite-length vector has a finite p-norm
B-60
1- and 2-Norms of an Infinite-Length Vector – Example
Signal: x[n] = 0 for n ≤ 0, x[n] = 1/n for n ≥ 1  [stem plot]
1-norm:
‖x‖_1 = Σ_{n=−∞}^{∞} |x[n]| = Σ_{n=1}^{∞} 1/n = ∞
2-norm:
‖x‖_2^2 = Σ_{n=−∞}^{∞} |x[n]|^2 = Σ_{n=1}^{∞} (1/n)^2 = Σ_{n=1}^{∞} 1/n^2 = π^2/6 ≈ 1.64 < ∞
B-61
∞-Norm of an Infinite-Length Vector
‖x‖_∞ = sup_n |x[n]|
What changes from the finite-length case: “sup” is a generalization of max to infinite-length
signals that lies beyond the scope of this course
In both of the above examples, ‖x‖_∞ = 1  [stem plot]
B-62
Inner Product of Infinite-Length Signals
⟨x, y⟩ = Σ_{n=−∞}^{∞} x[n] y[n]*
The inner product takes two signals and produces a single (complex) number
B-63
Linear Combination of Infinite-Length Vectors
What changes from the finite-length case: We will be especially interested in linear combinations
of infinitely many infinite-length vectors
y = Σ_{m=−∞}^{∞} α_m x_m
B-64
Linear Combination = Infinite Matrix Multiplication
Step 1: Stack the vectors xm as column vectors into a “matrix” with infinitely many rows and
columns
X = [· · · | x_{−1} | x_0 | x_1 | · · ·]
Step 2: Stack the scalars α_m into an infinitely tall column vector a = […, α_{−1}, α_0, α_1, …]ᵀ
Linear combination y = Xa
B-66
Linear Combination = Infinite Matrix Multiplication (Summary)
Linear combination y = Xa
The row-n, column-m element of the infinitely large matrix [X]_{n,m} = x_m[n]
[picture: y = Xa]
y[n] = Σ_{m=−∞}^{∞} α_m x_m[n]
B-67
Summary
Linear algebra concepts like norm, inner product, and linear combination apply as well to
infinite-length signals as to finite-length signals
B-68
Systems
Set C
Systems
Systems are Transformations
y = H{x}
x → H → y
C-2
Signal Length and Systems
x → H → y
Recall that there are two kinds of signals: infinite-length and finite-length
For generality, we will assume that the input and output signals are complex valued
C-3
System Examples (1)
Identity
y[n] = x[n] ∀n
Scaling
y[n] = 2 x[n] ∀n
Offset
y[n] = x[n] + 2 ∀n
Square signal
y[n] = (x[n])^2 ∀n
Shift
y[n] = x[n + 2] ∀n
Decimate
y[n] = x[2n] ∀n
Square time
y[n] = x[n^2] ∀n
C-4
System Examples (2)
[stem plot of an example input x[n]]
Shift system (m ∈ Z fixed)
y[n] = x[n − m] ∀n
Recursive average
y[n] = x[n] + α y[n − 1] ∀n
C-5
Summary
C-6
Linear Systems
Linear Systems
DEFINITION
A system H is linear if it satisfies two properties:
1 Scaling
If y = H{x} then H{αx} = αy for every scalar α
(x → H → y implies αx → H → αy)
2 Additivity
If y1 = H{x1} and y2 = H{x2} then
H{x1 + x2} = y1 + y2
C-8
Linearity Notes
To prove that a system is linear, you must prove rigorously that it has both the scaling and
additivity properties for arbitrary input signals
C-9
Example: Moving Average is Linear (Scaling)
Scaling: (Strategy to prove – Scale input x by α ∈ C, compute output y via the formula at top,
and verify that it is scaled as well)
• Let
x′[n] = αx[n], α ∈ C
• Then
y′[n] = (1/2)(x′[n] + x′[n − 1]) = (1/2)(αx[n] + αx[n − 1]) = α (1/2)(x[n] + x[n − 1]) = αy[n] ✓
C-10
Example: Moving Average is Linear (Additivity)
x[n] → H → y[n] = (1/2)(x[n] + x[n − 1])
Additivity: (Strategy to prove – Input two signals into the system and verify that the output
equals the sum of the respective outputs)
• Let
x′[n] = x1[n] + x2[n]
• Let y′/y1/y2 denote the output when x′/x1/x2 is input
• Then
y′[n] = (1/2)(x′[n] + x′[n − 1]) = (1/2)({x1[n] + x2[n]} + {x1[n − 1] + x2[n − 1]})
= (1/2)(x1[n] + x1[n − 1]) + (1/2)(x2[n] + x2[n − 1]) = y1[n] + y2[n] ✓
C-11
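Both properties can also be spot-checked numerically; a Python sketch (random finite-length test signals, with the first output sample dropped to avoid the boundary):

```python
import random

def mov_avg(x):
    """Moving average y[n] = (1/2)(x[n] + x[n-1]), for n = 1, ..., len(x)-1."""
    return [0.5 * (x[n] + x[n - 1]) for n in range(1, len(x))]

random.seed(1)
x1 = [random.gauss(0, 1) for _ in range(20)]
x2 = [random.gauss(0, 1) for _ in range(20)]
alpha = 2.5

# Scaling: H{αx} = α H{x}
scaled = mov_avg([alpha * v for v in x1])
assert all(abs(a - alpha * b) < 1e-12 for a, b in zip(scaled, mov_avg(x1)))

# Additivity: H{x1 + x2} = H{x1} + H{x2}
summed = mov_avg([a + b for a, b in zip(x1, x2)])
assert all(abs(s - (a + b)) < 1e-12
           for s, a, b in zip(summed, mov_avg(x1), mov_avg(x2)))
```

A check like this can only falsify linearity, not prove it, which is exactly the point of the rigorous derivation above.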
Example: Squaring is Nonlinear
x[n] → H → y[n] = (x[n])^2
Additivity: Input two signals into the system and see what happens
• Let
y1[n] = (x1[n])^2, y2[n] = (x2[n])^2
• Set
x′[n] = x1[n] + x2[n]
• Then
y′[n] = (x′[n])^2 = (x1[n] + x2[n])^2 = (x1[n])^2 + 2 x1[n] x2[n] + (x2[n])^2 ≠ y1[n] + y2[n]
• Nonlinear!
C-12
Linear or Nonlinear? You Be the Judge! (1)
Identity
y[n] = x[n] ∀n
Scaling
y[n] = 2 x[n] ∀n
Offset
y[n] = x[n] + 2 ∀n
Square signal
y[n] = (x[n])^2 ∀n
Shift
y[n] = x[n + 2] ∀n
Decimate
y[n] = x[2n] ∀n
Square time
y[n] = x[n^2] ∀n
C-13
Linear or Nonlinear? You Be the Judge! (2)
Recursive average
y[n] = x[n] + α y[n − 1] ∀n
C-14
Matrix Multiplication and Linear Systems
Matrix multiplication (aka Linear Combination) is a fundamental signal processing system
Fact 1: Matrix multiplications are linear systems (easy to show at home, but do it!)
y = Hx
y[n] = Σ_m [H]_{n,m} x[m]
(Note: This formula applies for both infinite-length and finite-length signals)
As a result, we will use the matrix viewpoint of linear systems extensively in the sequel
Try at home: Express all of the linear systems in the examples above in matrix form
C-15
Matrix Multiplication and Linear Systems in Pictures
Linear system
y = Hx
y[n] = Σ_m [H]_{n,m} x[m] = Σ_m h_{n,m} x[m]
where h_{n,m} = [H]_{n,m} represents the row-n, column-m entry of the matrix H
[picture: y = Hx]
C-16
System Output as a Linear Combination of Columns
Linear system
y = Hx
y[n] = Σ_m [H]_{n,m} x[m] = Σ_m h_{n,m} x[m]
where h_{n,m} = [H]_{n,m} represents the row-n, column-m entry of the matrix H
[picture: y as a linear combination of the columns of H weighted by the entries of x]
C-17
System Output as a Sequence of Inner Products
Linear system
y = Hx
y[n] = Σ_m [H]_{n,m} x[m] = Σ_m h_{n,m} x[m]
where h_{n,m} = [H]_{n,m} represents the row-n, column-m entry of the matrix H
[picture: y[n] as the inner product of row n of H with x]
C-18
Summary
To show a system is linear, you have to prove it rigorously assuming arbitrary inputs (work!)
To show a system is nonlinear, you can just exhibit a counterexample (often easy!)
• The output signal y equals the linear combination of the columns of H weighted by the entries in x
• Alternatively, the output value y[n] equals the inner product between row n of H with x
C-19
Time-Invariant Systems
Time-Invariant Systems (Infinite-Length Signals)
x[n] H y[n]
x[n − q] H y[n − q]
Intuition: A time-invariant system behaves the same no matter when the input is applied
C-21
Example: Moving Average is Time-Invariant
Let
x′[n] = x[n − q],  q ∈ Z
Then
y′[n] = ½ (x′[n] + x′[n − 1]) = ½ (x[n − q] + x[n − q − 1]) = y[n − q] ✓
C-22
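The same check can be run numerically. A numpy sketch (numpy is an assumption; the course's examples use Matlab) that shifts the input by q, applies the moving average, and compares with the shifted output:

```python
import numpy as np

# Numerical spot-check (not a proof) that the moving average
# y[n] = (x[n] + x[n-1])/2 is time-invariant: shifting the input by q
# shifts the output by q. A zero tail keeps the circular roll harmless.
def moving_average(x):
    y = np.zeros_like(x)
    y[1:] = 0.5 * (x[1:] + x[:-1])
    y[0] = 0.5 * x[0]              # x[-1] taken as 0
    return y

rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(20), np.zeros(10)])
q = 3
x_shift = np.roll(x, q)            # delay by q (only zeros wrap around)
same = np.allclose(moving_average(x_shift), np.roll(moving_average(x), q))
```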
Example: Decimation is Time-Varying
Let
x′[n] = x[n − 1]
Then
y′[n] = x′[2n] = x[2n − 1] ≠ x[2(n − 1)] = y[n − 1]
C-23
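The counterexample is easy to see in code. A numpy sketch (numpy is an assumption) using an impulse as the test input:

```python
import numpy as np

# Counterexample in code: decimation y[n] = x[2n] is time-varying.
# Shift the input by 1, decimate, and compare with the shifted output.
x = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # impulse at n = 1

def decimate(x):
    return x[::2]                  # y[n] = x[2n]

x_shift = np.concatenate([[0.0], x[:-1]])    # x'[n] = x[n-1], impulse at n = 2
y_of_shifted_input = decimate(x_shift)       # system applied to shifted input
y = decimate(x)
y_shifted = np.concatenate([[0.0], y[:-1]])  # output shifted by 1
differs = not np.array_equal(y_of_shifted_input, y_shifted)
```

Shifting the impulse from an odd to an even index makes it visible to the decimator, so the two results disagree.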
Time-Invariant or Time-Varying? You Be the Judge! (1)
Identity
y[n] = x[n] ∀n
Scaling
y[n] = 2 x[n] ∀n
Offset
y[n] = x[n] + 2 ∀n
Square signal
y[n] = (x[n])2 ∀n
Shift
y[n] = x[n + 2] ∀n
Decimate
y[n] = x[2n] ∀n
Square time
y[n] = x[n2 ] ∀n
C-24
Time-Invariant or Time-Varying? You Be the Judge! (2)
Recursive average
y[n] = x[n] + α y[n − 1] ∀n
C-25
Time-Invariant Systems (Finite-Length Signals)
x[n] H y[n]
Intuition: A time-invariant system behaves the same no matter when the input is applied
C-26
Summary
Time-invariant systems behave the same no matter when the input is applied
To show a system is time-invariant, you have to prove it rigorously assuming arbitrary inputs
(work!)
To show a system is time-varying, you can just exhibit a counterexample (often easy!)
C-27
Linear Time-Invariant Systems
Linear Time Invariant (LTI) Systems
DEFINITION
LTI systems are the foundation of signal processing and the main subject of this course
C-29
LTI or Not? You Be the Judge! (1)
Identity
y[n] = x[n] ∀n
Scaling
y[n] = 2 x[n] ∀n
Offset
y[n] = x[n] + 2 ∀n
Square signal
y[n] = (x[n])2 ∀n
Shift
y[n] = x[n + 2] ∀n
Decimate
y[n] = x[2n] ∀n
Square time
y[n] = x[n2 ] ∀n
C-30
LTI or Not? You Be the Judge! (2)
Recursive average
y[n] = x[n] + α y[n − 1] ∀n
C-31
Matrix Multiplication and LTI Systems (Infinite-Length Signals)
Recall that all linear systems can be expressed as matrix multiplications
y = Hx
y[n] = Σ_m [H]n,m x[m]
Let hn,m = [H]n,m represent the row-n, column-m entry of the matrix H
y[n] = Σ_m hn,m x[m]
When the linear system is also shift invariant, H has a special structure
C-32
Matrix Structure of LTI Systems (Infinite-Length Signals)
A linear system for infinite-length signals can be expressed as
y[n] = H{x[n]} = Σ_{m=−∞}^{∞} hn,m x[m],  −∞ < n < ∞
Applying time invariance (a shifted input must produce a correspondingly shifted output), we see
that for an LTI system
hn,m = hn+q,m+q  ∀ q ∈ Z
C-33
LTI Systems are Toeplitz Matrices (Infinite-Length Signals) (1)
hn,m = hn+q,m+q  ∀ q ∈ Z

    [ ⋱                      ]   [ ⋱                     ]
    [ ⋯ h−1,−1 h−1,0 h−1,1 ⋯ ]   [ ⋯ h0,0  h−1,0 h−2,0 ⋯ ]
H = [ ⋯ h0,−1  h0,0  h0,1  ⋯ ] = [ ⋯ h1,0  h0,0  h−1,0 ⋯ ]
    [ ⋯ h1,−1  h1,0  h1,1  ⋯ ]   [ ⋯ h2,0  h1,0  h0,0  ⋯ ]
    [                      ⋱ ]   [                     ⋱ ]
C-34
LTI Systems are Toeplitz Matrices (Infinite-Length Signals) (2)
All of the entries in a Toeplitz matrix can be expressed in terms of the entries of its 0-th column

    [ ⋱                    ]   [ ⋱                     ]
    [ ⋯ h0,0  h−1,0 h−2,0 ⋯ ]   [ ⋯ h[0]  h[−1] h[−2] ⋯ ]
H = [ ⋯ h1,0  h0,0  h−1,0 ⋯ ] = [ ⋯ h[1]  h[0]  h[−1] ⋯ ]
    [ ⋯ h2,0  h1,0  h0,0  ⋯ ]   [ ⋯ h[2]  h[1]  h[0]  ⋯ ]
    [                    ⋱ ]   [                     ⋱ ]
C-35
LTI Systems are Toeplitz Matrices (Infinite-Length Signals) (3)
All of the entries in a Toeplitz matrix can be expressed in terms of the entries of the
• 0-th column: h[n] = hn,0 (this is an infinite-length signal/column vector; call it h)
• Time-reversed 0-th row: h[m] = h0,−m
[H]n,m = hn,m = h[n − m]
(figure: snippet of a Toeplitz matrix H and its 0-th column h)
C-36
Matrix Structure of LTI Systems (Finite-Length Signals)
A linear system for signals of length N can be expressed as
y[n] = H{x[n]} = Σ_{m=0}^{N−1} hn,m x[m],  0 ≤ n ≤ N − 1
Applying time invariance (here with circular shifts), we see that for an LTI system
hn,m = h(n+q)N,(m+q)N  ∀ q ∈ Z
C-37
LTI Systems are Circulant Matrices (Finite-Length Signals) (1)
For an LTI system with length-N signals
    [ h0,0    h0,1    h0,2    ⋯  h0,N−1   ]   [ h0,0    hN−1,0  hN−2,0  ⋯  h1,0 ]
    [ h1,0    h1,1    h1,2    ⋯  h1,N−1   ]   [ h1,0    h0,0    hN−1,0  ⋯  h2,0 ]
H = [ h2,0    h2,1    h2,2    ⋯  h2,N−1   ] = [ h2,0    h1,0    h0,0    ⋯  h3,0 ]
    [ ⋮       ⋮       ⋮           ⋮       ]   [ ⋮       ⋮       ⋮          ⋮   ]
    [ hN−1,0  hN−1,1  hN−1,2  ⋯  hN−1,N−1 ]   [ hN−1,0  hN−2,0  hN−3,0  ⋯  h0,0 ]
Entries on the matrix diagonals are the same + circular wraparound – circulant matrix
C-38
LTI Systems are Circulant Matrices (Finite-Length Signals) (2)
All of the entries in a circulant matrix can be expressed in terms of the entries of its 0-th column
C-39
LTI Systems are Circulant Matrices (Finite-Length Signals) (3)
All of the entries in a circulant matrix can be expressed in terms of the entries of the
• 0-th column: h[n] = hn,0 (this is a signal/column vector; call it h)
• Circularly time-reversed 0-th row: h[m] = h0,(−m)N
[H]n,m = hn,m = h[(n − m)N]
(figure: H and h – note the diagonals and the circular shifts!)
C-40
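The circulant structure [H]n,m = h[(n − m)N] is easy to build and inspect in code. A numpy sketch (numpy is an assumption; the course uses Matlab):

```python
import numpy as np

# Sketch: build the circulant matrix H of a finite-length LTI system from its
# 0-th column h, using [H]_{n,m} = h[(n - m) mod N].
N = 4
h = np.array([1.0, 2.0, 3.0, 4.0])           # 0-th column of H

n, m = np.indices((N, N))
H = h[(n - m) % N]                           # circulant structure

# Every column is a circular shift of h, and every diagonal is constant
col1_ok = np.array_equal(H[:, 1], np.roll(h, 1))
```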
Summary
Fundamental signal processing system (and our focus for the rest of the course)
C-41
Convolution
Set D
Impulse Response
Recall: LTI Systems are Toeplitz Matrices (Infinite-Length Signals)
x H y
[H]n,m = hn,m = h[n − m]
Columns/rows of H are shifted versions of the 0-th column/row
(figure: h and H)
D-2
Impulse Response (Infinite-Length Signals)
The 0-th column of the matrix H – the column vector h – has a special interpretation
Compute the output when the input is a delta function (impulse): δ[n] = 1 for n = 0, 0 otherwise
(figure: H δ = h)
D-3
Impulse Response from Formulas (Infinite-Length Signals)
General formula for LTI matrix multiplication
y[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
δ H h
x h y
D-4
Example: Impulse Response of the Scaling System
x H y
h[n] = 2 δ[n]
Consider the system for infinite-length signals
(plot: h[n] = 2δ[n])
Scaling system: y[n] = H{x[n]} = 2 x[n]
D-5
Example: Impulse Response of the Shift System
x H y
h[n] = δ[n − 2]
Consider the system for infinite-length signals
(plot: h[n] = δ[n − 2])
Shift system: y[n] = H{x[n]} = x[n − 2]
D-6
Example: Impulse Response of the Moving Average System
x H y
h[n] = ½ (δ[n] + δ[n − 1])
Consider the system for infinite-length signals
(plot: h[n])
Impulse response: h[n] = H{δ[n]} = ½ (δ[n] + δ[n − 1])
D-7
Example: Impulse Response of the Recursive Average System
x H y
(plot: impulse response of the recursive average system)
Recursive average system: y[n] = H{x[n]} = x[n] + α y[n − 1]
D-8
Recall: LTI Systems are Circulant Matrices (Finite-Length Signals)
x H y
[H]n,m = hn,m = h[(n − m)N]
Columns/rows of H are circularly shifted versions of the 0-th column/row
(figure: h and H)
D-9
Impulse Response (Finite-Length Signals)
The 0-th column of the matrix H – the column vector h – has a special interpretation
Compute the output when the input is a delta function (impulse): δ[n] = 1 for n = 0, 0 otherwise
(figure: H δ = h)
D-10
Impulse Response from Formulas (Finite-Length Signals)
General formula for LTI matrix multiplication
y[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m]
δ H h
x h y
D-11
Summary
LTI system = multiplication by infinite-sized Toeplitz or N × N circulant matrix H: y = Hx
The impulse response h of an LTI system = the response to an impulse δ
• The impulse response is the 0-th column of the matrix H
• The impulse response characterizes an LTI system
x h y
Formula for the output signal y in terms of the input signal x and the impulse response h
• Infinite-length signals
y[n] = Σ_{m=−∞}^{∞} h[n − m] x[m],  −∞ < n < ∞
• Length-N signals
y[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m],  0 ≤ n ≤ N − 1
D-12
Convolution, Part 1
(Infinite-Length Signals)
Three Ways to Compute the Output of an LTI System Given the Input
x H y
1 If H is defined in terms of a formula or algorithm, apply the input x and compute y[n] at each
time point n ∈ Z
• This is how systems are usually applied in computer code and hardware
2 Find the impulse response h (by inputting x[n] = δ[n]), form the Toeplitz system matrix H,
and multiply by the (infinite-length) input signal vector x to obtain y = H x
• This is not usually practical but is useful for conceptual purposes
3 Find the impulse response h and apply the formula for matrix/vector product for each n ∈ Z
y[n] = Σ_{m=−∞}^{∞} h[n − m] x[m] = x[n] ∗ h[n]
• This is called convolution and is both conceptually and practically useful (Matlab command: conv)
D-14
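Routes 2 and 3 can be compared directly for finite-duration signals. A numpy sketch (numpy is an assumption; `np.convolve` plays the role of Matlab's `conv`):

```python
import numpy as np

# The three routes give the same output. Here routes 2 and 3 are compared:
# multiplication by a Toeplitz H built from the impulse response, versus
# direct convolution.
x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, 0.5])                     # moving average impulse response

L = len(x) + len(h) - 1                      # duration of the convolution
H = np.zeros((L, len(x)))
for m in range(len(x)):                      # column m is h shifted down by m
    H[m:m + len(h), m] = h

y_matrix = H @ x                             # route 2: y = H x
y_conv = np.convolve(x, h)                   # route 3: convolution formula
```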
Convolution as a Sequence of Inner Products (1)
x h y
Convolution formula
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
To compute the entry y[n] in the output vector y: take the inner product between the input signal
x[m] and the shifted, time-reversed impulse response h[n − m] (row n of H)
D-16
A Seven-Step Program for Computing Convolution By Hand
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
Step 1: Decide which of x or h you will flip and shift; you have a choice since x ∗ h = h ∗ x
Step 4: To compute y at the time point n, plot the time-reversed impulse response after it has
been shifted to the right (delayed) by n time units: h[−(m − n)] = h[n − m]
Step 5: y[n] = the inner product between the signals x[m] and h[n − m]
(Note: for complex signals, do not complex conjugate the second signal in the inner product)
Step 7: Plot y[n] and perform a reality check to make sure your answer seems reasonable
D-17
First Convolution Example (1)
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
(plot: input signal x[n])
D-18
First Convolution Example (2)
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
(plots: x[m] and the time-reversed impulse response h[−m])
D-19
Convolution, Part 2
(Infinite-Length Signals)
A Seven-Step Program for Computing Convolution By Hand
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
Step 1: Decide which of x or h you will flip and shift; you have a choice since x ∗ h = h ∗ x
Step 4: To compute y at the time point n, plot the time-reversed impulse response after it has
been shifted to the right (delayed) by n time units: h[−(m − n)] = h[n − m]
Step 5: y[n] = the inner product between the signals x[m] and h[n − m]
(Note: for complex signals, do not complex conjugate the second signal in the inner product)
Step 7: Plot y[n] and perform a reality check to make sure your answer seems reasonable
D-21
Second Convolution Example (1)
Recall the recursive average system
y[n] = x[n] + ½ y[n − 1]
and its impulse response h[n] = (½)ⁿ u[n]
(plot: h[n] = (½)ⁿ u[n])
Compute the output y when the input is a unit step x[n] = u[n]
(plot: x[n] = u[n])
D-22
Second Convolution Example (2)
y[n] = h[n] ∗ x[n] = Σ_{m=−∞}^{∞} h[m] x[n − m]
(plots: h[m] and x[−m])
Recall the super useful formula for the finite geometric series
Σ_{k=N1}^{N2} aᵏ = (a^{N1} − a^{N2+1}) / (1 − a),  N1 ≤ N2
D-23
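This example can be verified numerically: by the geometric series formula with N1 = 0, N2 = n, a = ½, the step response should be y[n] = 2 − (½)ⁿ for n ≥ 0. A numpy sketch (numpy is an assumption) using truncated signals:

```python
import numpy as np

# Spot-check of this example: convolving h[n] = (1/2)^n u[n] with the unit step
# u[n] gives y[n] = sum_{m=0}^{n} (1/2)^m = 2 - (1/2)^n for n >= 0,
# by the finite geometric series formula.
n_max = 16
n = np.arange(n_max)
h = 0.5 ** n                                 # truncated impulse response
u = np.ones(n_max)                           # truncated unit step

y = np.convolve(h, u)[:n_max]                # valid where truncation has no effect
y_formula = 2.0 - 0.5 ** n
```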
Second Convolution Example (3)
y[n] = h[n] ∗ x[n] = Σ_{m=−∞}^{∞} h[m] x[n − m]
(plots: h[m] and x[−m])
D-24
Summary
Convolution formula for the output y of an LTI system given the input x and the impulse
response h (infinite-length signals)
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
Convolution is a sequence of inner products between the signal and the shifted, time-reversed
impulse response
Check your work and compute large convolutions using Matlab command conv
D-25
Circular Convolution
(Finite-Length Signals)
Circular Convolution as a Sequence of Inner Products (1)
x h y
Circular convolution formula
y[n] = x[n] ~ h[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m]
To compute the entry y[n] in the output vector y:
Step 1: Decide which of x or h you will flip and shift; you have a choice since x ~ h = h ~ x
Step 3: Plot the circularly time-reversed impulse response h[(−m)N ] on a clock with N time
locations
Step 4: To compute y at the time point n, plot the time-reversed impulse response after it has
been shifted counter-clockwise (delayed) by n time units: h[(−(m − n))N ] = h[(n − m)N ]
Step 5: y[n] = the inner product between the signals x[m] and h[(n − m)N ]
(Note: for complex signals, do not complex conjugate the second signal in the inner product)
Step 7: Plot y[n] and perform a reality check to make sure your answer seems reasonable
D-29
Circular Convolution Example
y[n] = x[n] ~ h[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m]
(plot: input signal x[n], N = 8)
D-30
Circular Convolution Example
y[n] = x[n] ~ h[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m]
D-31
Circular Convolution Example
y[n] = x[n] ~ h[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m]
(plot: output signal y[n], N = 8)
D-32
Summary
Circular convolution formula for the output y of an LTI system given the input x and the
impulse response h (length-N signals)
y[n] = x[n] ~ h[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m]
Circular convolution is a sequence of inner products between the signal and the circularly shifted,
time-reversed impulse response
Check your work and compute large circular convolutions using Matlab command cconv
D-33
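A numpy analogue of Matlab's `cconv` can be written directly from the formula (numpy is an assumption here; the loop form mirrors the inner-product interpretation):

```python
import numpy as np

# A minimal circular convolution, computed directly from the formula
# y[n] = sum_m h[(n - m) mod N] x[m].
def cconv(x, h):
    N = len(x)
    return np.array([sum(h[(n - m) % N] * x[m] for m in range(N))
                     for n in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])           # two-sample circular moving sum

y = cconv(x, h)                              # note the wraparound at the edges
```

The same result drops out of the DFT (a preview of Set E): circular convolution in time is multiplication of the DFTs.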
Properties of Convolution
Properties of Convolution
x h y
Input signal x, LTI system impulse response h, and output signal y are related by the convolution
• Infinite-length signals
y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m],  −∞ < n < ∞
• Length-N signals
y[n] = x[n] ~ h[n] = Σ_{m=0}^{N−1} h[(n − m)N] x[m],  0 ≤ n ≤ N − 1
Thanks to the Toeplitz/circulant structure of LTI systems, convolution has very special properties
We will emphasize infinite-length convolution, but similar arguments hold for circular convolution
except where noted
D-35
Convolution is Commutative
Fact: Convolution is commutative: x∗h = h∗x
Enables us to pick either h or x to flip and shift (or stack into a matrix) when convolving
D-36
Cascade Connection of LTI Systems
Impulse response of the cascade (aka series connection) of two LTI systems: y = H1 H2 x
x h1 h2 y
x h1 ∗ h2 y
Easy proof by picture; find impulse response the old school way
δ h1 h1 h2 h1 ∗ h2
D-37
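The cascade property is a one-line numerical check (a numpy sketch; numpy is an assumption):

```python
import numpy as np

# Numerical check of the cascade property: feeding x through h1 and then h2
# is the same as convolving x once with the combined impulse response h1 * h2.
rng = np.random.default_rng(1)
x = rng.standard_normal(10)
h1 = rng.standard_normal(4)
h2 = rng.standard_normal(5)

y_cascade = np.convolve(np.convolve(x, h1), h2)
y_combined = np.convolve(x, np.convolve(h1, h2))
same = np.allclose(y_cascade, y_combined)
```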
Parallel Connection of LTI Systems
h1
+ ≡ h1 + h2
h2
D-38
Example: Impulse Response of a Complicated Connection of LTI Systems
Compute the overall effective impulse response of the following system
x h y
D-39
Causal Systems
DEFINITION
A system H is causal if the output y[n] at time n depends only the input x[m] for
times m ≤ n. In words, causal systems do not look into the future
Fact: An LTI system is causal if its impulse response is causal: h[n] = 0 for n < 0
h[n] = αⁿ u[n], α = 0.8
(plot: a causal, decaying impulse response)
does not look into the future if h[n − m] = 0 when m > n; equivalently, h[n0 ] = 0 when n0 < 0
D-40
Causal System Matrix
Fact: An LTI system is causal if its impulse response is causal: h[n] = 0 for n < 0
h[n] = αⁿ u[n], α = 0.8
(plot: a causal, decaying impulse response)
D-41
Duration of Convolution
DEFINITION
The signal x has support interval [N1 , N2 ], N1 ≤ N2 , if x[n] = 0 for all n < N1
and n > N2 . The duration Dx of x equals N2 − N1 + 1
Fact: If x has duration Dx samples and h has duration Dh samples, then the convolution
y = x ∗ h has duration at most Dx + Dh − 1 samples (proof by picture is simple)
D-42
Duration of Impulse Response – FIR
DEFINITION
An LTI system has a finite impulse response (FIR) if the duration of its impulse
response h is finite
Example: Moving average system y[n] = H{x[n]} = ½ (x[n] + x[n − 1])
h[n] = ½ (δ[n] + δ[n − 1])
(plot: h[n])
D-43
Duration of Impulse Response – IIR
DEFINITION
An LTI system has an infinite impulse response (IIR) if the duration of its impulse
response h is infinite
(plot: an infinite-duration impulse response)
D-44
Duration of Convolution
Recall that, if x has duration Dx samples and h has duration Dh samples, then the infinite-length
convolution y = x ∗ h has duration at most Dx + Dh − 1 samples (proof by picture is simple)
(plots: x[n] with Dx = 24, h[n] with Dh = 14, and y[n] with Dy = 24 + 14 − 1 = 37)
D-45
Implementing Infinite-Length Convolution with Circular Convolution
Consider two infinite-length signals: x has duration Dx samples and h has duration Dh samples,
Dx , Dh < ∞
Armed with this fact, we can implement infinite-length convolution using circular convolution
1 Extract the Dx -sample support interval of x and zero pad so that the resulting signal x0 is of
length Dx + Dh − 1
2 Perform the same operations on h to obtain h0
3 Circularly convolve x0 ~ h0 to obtain y 0
Fact: The values of the signal y 0 will coincide with those of the infinite-length convolution
y = x ∗ h within its support interval
How does it work? The zero padding effectively converts circular shifts (finite-length signals) into
regular shifts (infinite-length signals) (Easy to try out in Matlab!)
D-46
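The three-step recipe above can be sketched in numpy (an assumption; Matlab's `cconv` would do the same job), using the DFT to perform the circular convolution:

```python
import numpy as np

# Zero-padding recipe: pad both finite-duration signals to length
# Dx + Dh - 1, circularly convolve (here via the DFT), and compare with
# ordinary (linear) convolution.
x = np.array([1.0, -2.0, 3.0])               # Dx = 3
h = np.array([0.5, 0.25])                    # Dh = 2
L = len(x) + len(h) - 1                      # 4 samples

x_pad = np.concatenate([x, np.zeros(L - len(x))])
h_pad = np.concatenate([h, np.zeros(L - len(h))])

y_circ = np.real(np.fft.ifft(np.fft.fft(x_pad) * np.fft.fft(h_pad)))
y_lin = np.convolve(x, h)                    # ordinary linear convolution
```

With enough zero padding, no sample ever wraps around, so the circular result matches the linear one exactly — the basis of FFT "fast convolution."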
Summary
Convolution has very special and beautiful properties
Convolution is commutative
Can implement infinite-length convolution using circular convolution when the signals have finite
duration (important later for “fast convolution” using the FFT)
D-47
Stable Systems
Stable Systems (1)
“well-behaved” x h “well-behaved” y
Stability is essential to ensuring the proper and safe operation of myriad systems
• Steering systems
• Braking systems
• Robotic navigation
• Modern aircraft
• International Space Station
• Internet IP packet communication (TCP) . . .
D-49
Stable Systems (2)
With a stable system, a “well-behaved” input always produces a “well-behaved” output
“well-behaved” x h “well-behaved” y
Example: Recall the recursive average system y[n] = H{x[n]} = x[n] + α y[n − 1]
Consider a step function input x[n] = u[n]
x[n] = u[n]
1
0.5
0
-4 -2 0 2 4 6 8
n
(plots: y[n] with α = ½, which settles at 2, and y[n] with α = 3/2, which grows without bound)
D-50
Well-Behaved Signals
“well-behaved” x h “well-behaved” y
How to measure how “well-behaved” a signal is? Different measures give different notions of
stability
One reasonable measure: A signal x is well behaved if it is bounded: ‖x‖∞ = sup_n |x[n]| < ∞ (recall that sup is like max)
D-51
Bounded-Input Bounded-Output (BIBO) Stability
D-52
BIBO Stability (1)
bounded x h bounded y
Bounded input and output means ‖x‖∞ < ∞ and ‖y‖∞ < ∞,
or that there exist constants A, C < ∞ such that |x[n]| < A and |y[n]| < C for all n
(plots: a bounded input x[n] and the corresponding bounded output y[n])
D-53
BIBO Stability (2)
bounded x h bounded y
Bounded input and output means ‖x‖∞ < ∞ and ‖y‖∞ < ∞
(plots: a bounded input x[n] and the corresponding bounded output y[n])
Fact: An LTI system with impulse response h is BIBO stable if and only if
‖h‖₁ = Σ_{n=−∞}^{∞} |h[n]| < ∞
D-54
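The absolute-summability test is easy to probe numerically. A numpy sketch (numpy is an assumption) with the two canonical one-sided exponentials:

```python
import numpy as np

# Numerical illustration of the BIBO test: h[n] = a^n u[n] has
# ||h||_1 = 1/(1 - |a|) for |a| < 1 (stable), while for |a| >= 1 the
# partial sums of |h[n]| grow without bound (unstable).
n = np.arange(200)
h_stable = 0.5 ** n
h_unstable = 1.5 ** np.arange(50)            # truncated; grows with length

norm_stable = np.sum(np.abs(h_stable))       # approx 1 / (1 - 0.5) = 2
norm_unstable = np.sum(np.abs(h_unstable))   # already enormous at 50 terms
```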
BIBO Stability – Sufficient Condition
Prove that if ‖h‖₁ < ∞ then the system is BIBO stable – for any input with ‖x‖∞ < ∞ the output
has ‖y‖∞ < ∞
Recall that ‖x‖∞ < ∞ means there exists a constant A such that |x[n]| < A < ∞ for all n
Let ‖h‖₁ = Σ_{n=−∞}^{∞} |h[n]| = B < ∞
Compute a bound on |y[n]| using the convolution of x and h and the bounds A and B
|y[n]| = |Σ_{m=−∞}^{∞} h[n − m] x[m]| ≤ Σ_{m=−∞}^{∞} |h[n − m]| |x[m]|
      < Σ_{m=−∞}^{∞} |h[n − m]| A = A Σ_{k=−∞}^{∞} |h[k]| = A B = C < ∞
Given an impulse response h with ‖h‖₁ = ∞, form the tricky special signal x[n] = sgn(h[−n])
• x[n] is the ± sign of the time-reversed impulse response h[−n]
• Note that x is bounded: |x[n]| ≤ 1 for all n
(plots: h[n], its time reversal h[−n], and the sign signal x[n] = sgn(h[−n]))
D-56
BIBO Stability – Necessary Condition (2)
We are proving that if ‖h‖₁ = ∞ then the system is not BIBO stable – there exists an input with
‖x‖∞ < ∞ such that the output has ‖y‖∞ = ∞
Armed with the tricky special signal x, compute the output y[n] at the time point n = 0
y[0] = Σ_{m=−∞}^{∞} h[0 − m] x[m] = Σ_{m=−∞}^{∞} h[−m] sgn(h[−m]) = Σ_{m=−∞}^{∞} |h[−m]| = Σ_{k=−∞}^{∞} |h[k]| = ∞
So, even though x was bounded, y is not bounded; so system is not BIBO stable
D-57
BIBO System Examples (1)
Absolute summability of the impulse response h determines whether an LTI system is BIBO
stable or not
Example: h[n] = 1/n for n ≥ 1, 0 otherwise  (plot: slowly decaying h[n])
‖h‖₁ = Σ_{n=1}^{∞} 1/n = ∞ ⇒ not BIBO

Example: h[n] = 1/n² for n ≥ 1, 0 otherwise  (plot: rapidly decaying h[n])
‖h‖₁ = Σ_{n=1}^{∞} 1/n² = π²/6 ⇒ BIBO

Example: h FIR ⇒ BIBO  (plot: a finite-duration h[n])
D-58
BIBO System Examples (2)
Example: Recall the recursive average system y[n] = H{x[n]} = x[n] + α y[n − 1]
(plots: impulse responses of the recursive average system for a stable and an unstable choice of α)
D-59
Summary
Signal processing applications typically dictate that the system be stable, meaning that
“well-behaved inputs” produce “well-behaved outputs”
BIBO stability: bounded inputs always produce bounded outputs iff the impulse response h is
such that ‖h‖₁ < ∞
When a system is not BIBO stable, all hope is not lost; unstable systems can often be stabilized
using feedback (more on this later)
D-60
Orthogonal Bases and the DFT
Set E
Orthogonal Bases
Transforms and Orthogonal Bases
We now turn back to linear algebra to understand transforms, which map signals between
different “domains”
Recall that signals can be interpreted as vectors in a vector space (linear algebra)
As we will see, different signal transforms (and “domains”) correspond to different bases
E-2
Basis
DEFINITION
A basis {bk } for a vector space V is a collection of vectors from V that are linearly
independent and span V
Span: All vectors in V can be represented as a linear combination of the basis vectors {bk}_{k=0}^{N−1}
x = Σ_{k=0}^{N−1} αk bk = α0 b0 + α1 b1 + ⋯ + αN−1 bN−1  ∀ x ∈ V
Linearly independent: None of the basis vectors can be represented as a linear combination of
the other basis vectors
Fact: The dimension of RN and CN equals N (we will focus on these spaces)
E-3
Basis Matrix
Stack the basis vectors bk as columns into the N × N basis matrix
B = [b0 | b1 | ⋯ | bN−1]
We can now write a linear combination of basis elements as the matrix/vector product
x = α0 b0 + α1 b1 + ⋯ + αN−1 bN−1 = Σ_{k=0}^{N−1} αk bk = [b0 | b1 | ⋯ | bN−1] [α0, α1, …, αN−1]ᵀ = B a
E-4
Orthogonal and Orthonormal Bases
DEFINITION
An orthogonal basis {bk}_{k=0}^{N−1} for a vector space V is a basis whose elements are mutually
orthogonal: ⟨bk, bl⟩ = 0 for k ≠ l
DEFINITION
An orthonormal basis {bk}_{k=0}^{N−1} for a vector space V is a basis whose elements
are mutually orthogonal and normalized (in the 2-norm):
⟨bk, bl⟩ = 0, k ≠ l
‖bk‖₂ = 1
E-5
Example: Orthogonal and Orthonormal Bases in R2
(example: orthonormal basis vectors b0, b1 in R² and the corresponding basis matrix B)
E-6
Inverse of a Matrix
AA−1 = A−1 A = I
E-7
Inverse of an Orthonormal Basis Matrix
When the basis matrix B contains an orthonormal basis {bk}_{k=0}^{N−1}, its inverse B⁻¹ is trivial to
calculate: B⁻¹ = Bᴴ
To prove, write out BᴴB and use the fact that the columns of B are orthonormal
BᴴB = [b0ᴴ; b1ᴴ; ⋯; bN−1ᴴ] [b0 | b1 | ⋯ | bN−1] = I
E-8
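The identity BᴴB = I can be checked numerically for a concrete orthonormal basis. A numpy sketch (numpy is an assumption) using the normalized harmonic-sinusoid basis that appears later in this set:

```python
import numpy as np

# Check that an orthonormal basis matrix satisfies B^H B = I, using the
# normalized harmonic-sinusoid (DFT) basis as the example.
N = 8
n = np.arange(N)
B = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # [B]_{n,k} = e^{j2πkn/N}/√N

gram = B.conj().T @ B                        # should be the identity matrix
is_identity = np.allclose(gram, np.eye(N))
```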
Signal Representation by Orthonormal Basis
Given an orthonormal basis {bk}_{k=0}^{N−1} and orthonormal basis matrix B, we have the following
signal representation for any signal x
x = Ba = Σ_{k=0}^{N−1} αk bk  (synthesis)
a = Bᴴx or αk = ⟨x, bk⟩  (analysis)
Synthesis: Build up the signal x as a linear combination of the basis elements bk weighted by
the weights αk
Analysis: Compute the weights αk such that the synthesis produces x; the weight αk measures
the similarity between x and the basis element bk
E-9
Summary
Orthonormal bases make life easy
Given an orthonormal basis {bk}_{k=0}^{N−1} and orthonormal basis matrix B, we have the following
signal representation for any signal x
x = Ba = Σ_{k=0}^{N−1} αk bk  (synthesis)
In signal processing, we say that the vector a is the transform of the signal x with respect to the
orthonormal basis {bk}_{k=0}^{N−1}
Clearly the transform a contains all of the information in the signal x (and vice versa)
E-10
Eigenanalysis
Eigenanalysis
We continue borrowing from linear algebra by recalling the eigenvectors and eigenvalues of a
matrix
Applying this point of view to circulant matrices (LTI systems for finite-length signals) will lead
to an amazing result that ties together many threads of thought
E-12
Eigenvectors and Eigenvalues
DEFINITION
A nonzero vector v is an eigenvector of the square matrix A, with eigenvalue λ, if Av = λv
Geometric intuition: Multiplying an eigenvector v by the matrix A does not change its direction;
it changes only its strength by the factor λ ∈ C
Example in R²:
A = [3 1; 1 3],  v = [1, −1]ᵀ,  λ = 2
Av = [3 1; 1 3] [1, −1]ᵀ = [2, −2]ᵀ = 2v
E-13
Eigendecomposition
An N × N matrix A has N eigenvectors and N eigenvalues (not necessarily distinct, though)
Stack the N eigenvectors {vm}_{m=0}^{N−1} as columns into an N × N matrix
V = [v0 | v1 | ⋯ | vN−1]
Place the N eigenvalues {λm}_{m=0}^{N−1} on the diagonal of an N × N diagonal matrix
Λ = diag(λ0, λ1, …, λN−1)
E-14
Diagonalization
Recall the eigendecomposition of a matrix A
AV = VΛ
When the eigenvector matrix V is invertible, we can multiply both sides of the
eigendecomposition on the left by V−1 to obtain
V−1 AV = V−1 VΛ = IΛ = Λ
V−1 AV = Λ
Much easier to multiply a vector by Λ than by A! (We simply scale each entry)
E-15
Aside: Diagonalization and Normal Matrices
Fact: A normal matrix A (one satisfying AAᴴ = AᴴA) has a full set of orthonormal eigenvectors, so
its eigenvector matrix V can be chosen unitary: V⁻¹ = Vᴴ
Recall that the system matrix H of a finite-length LTI system is circulant, and therefore also
normal
E-16
Summary
Multiplying an eigenvector v by the matrix A does not change its direction; it changes only its
strength by the factor λ
The eigenvectors/values contain all of the information in the matrix A (and vice versa)
Diagonalization by eigendecomposition
V−1 AV = Λ
or, equivalently,
A = VΛV−1
E-17
Eigenanalysis of
LTI Systems
(Finite-Length Signals)
LTI Systems for Finite-Length Signals
x H y
y = Hx
Eigenvectors v are input signals that emerge at the system output unchanged (except for a
scaling by the eigenvalue λ) and so are somehow “fundamental” to the system
E-19
Eigenvectors of LTI Systems
Fact: The eigenvectors of a circulant matrix (LTI system) are the complex harmonic sinusoids
sk[n] = e^{j(2π/N)kn}/√N = (1/√N) (cos((2π/N)kn) + j sin((2π/N)kn)),  0 ≤ n, k ≤ N − 1
sk H λk sk
(plots: for N = 16, k = 2, the cos and sin components of sk and of λk sk)
E-20
Harmonic Sinusoids are Eigenvectors of LTI Systems
sk H λk sk
Prove that harmonic sinusoids are the eigenvectors of LTI systems simply by computing the
circular convolution with input sk and applying the periodicity of the harmonic sinusoids
sk[n] ~ h[n] = Σ_{m=0}^{N−1} sk[(n − m)N] h[m] = Σ_{m=0}^{N−1} (e^{j(2π/N)k(n−m)N}/√N) h[m]
= Σ_{m=0}^{N−1} (e^{j(2π/N)k(n−m)}/√N) h[m] = (e^{j(2π/N)kn}/√N) Σ_{m=0}^{N−1} e^{−j(2π/N)km} h[m]
= (Σ_{m=0}^{N−1} e^{−j(2π/N)km} h[m]) sk[n] = λk sk[n]
E-21
Eigenvalues of LTI Systems
The eigenvalue λk ∈ C corresponding to the sinusoid eigenvector sk is called the
frequency response at frequency k since it measures how the system “responds” to sk
λk = Σ_{n=0}^{N−1} h[n] e^{−j(2π/N)kn} = ⟨h, sk⟩ = Hu[k]  (unnormalized DFT)
Recall properties of the inner product: λk grows/shrinks as h and sk become more/less similar
sk H λk sk
(plots: for N = 16, k = 2, the cos and sin components of sk and of λk sk)
E-22
Eigenvector Matrix of Harmonic Sinusoids
Stack the N normalized harmonic sinusoids {sk}_{k=0}^{N−1} as columns into an N × N complex
orthonormal basis matrix
S = [s0 | s1 | ⋯ | sN−1]
The row-n, column-k entries of S have a very simple structure: [S]n,k = e^{j(2π/N)kn}/√N
real part: cos((2π/N)kn)/√N, imaginary part: sin((2π/N)kn)/√N
(figure: eigenvector matrix for N = 16)
The eigenvalues are the frequency response (unnormalized DFT of the impulse response)
λk = Σ_{n=0}^{N−1} h[n] e^{−j(2π/N)kn} = ⟨h, sk⟩ = Hu[k]  (unnormalized DFT)
Place the N eigenvalues {λk}_{k=0}^{N−1} on the diagonal of an N × N matrix
Λ = diag(λ0, λ1, …, λN−1) = diag(Hu[0], Hu[1], …, Hu[N−1])
E-24
Eigendecomposition and Diagonalization of an LTI System
Given the
• circulant LTI system matrix H
• Fixed matrix of harmonic sinusoid eigenvectors S (corresponds to DFT/IDFT)
• Diagonal matrix of eigenvalues Λ (frequency response, changes with H)
we can write
H = SΛSH
(figure: y = Hx = S Λ Sᴴ x – DFT (Sᴴ), frequency response (Λ), then IDFT (S))
E-25
Summary
Harmonic sinusoids are the eigenfunctions of LTI systems for finite-length signals
(circulant matrices)
Therefore, the discrete Fourier transform (DFT) is the natural tool for studying LTI systems for
finite-length signals
Frequency response Hu[k] equals the unnormalized DFT of the impulse response h[n]
H = SΛSH
E-26
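The decomposition H = SΛSᴴ can be verified numerically for a small circulant system. A numpy sketch (numpy is an assumption; `np.fft.fft` computes the unnormalized DFT that gives the eigenvalues):

```python
import numpy as np

# Check of H = S Λ S^H for a small circulant system: the harmonic-sinusoid
# matrix S diagonalizes H, and the eigenvalues are the unnormalized DFT of h.
N = 4
h = np.array([1.0, 2.0, 0.0, -1.0])          # impulse response / 0-th column

i, m = np.indices((N, N))
H = h[(i - m) % N]                           # circulant system matrix

n = np.arange(N)
S = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
Lam = np.diag(np.fft.fft(h))                 # frequency response H_u[k]

reconstructed = S @ Lam @ S.conj().T
matches = np.allclose(reconstructed, H)
```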
Discrete Fourier Transform
(DFT)
Discrete Fourier Transform
Another cornerstone of this course in particular and signal processing in general
Jean Baptiste Joseph Fourier (21 March 1768 – 16 May 1830) had the radical idea of proposing
that “all” signals could be represented as a linear combination of sinusoids
E-28
Recall: Signal Representation by Orthonormal Basis
Given an orthonormal basis {bk}_{k=0}^{N−1} for C^N and orthonormal basis matrix B, we have the
following signal representation for any signal x
x = Ba = Σ_{k=0}^{N−1} αk bk  (synthesis)
a = Bᴴx or αk = ⟨x, bk⟩  (analysis)
Synthesis: Build up the signal x as a linear combination of the basis elements bk weighted by
the weights αk
Analysis: Compute the weights αk such that the synthesis produces x; the weight αk measures
the similarity between x and the basis element bk
E-29
Harmonic Sinusoids are an Orthonormal Basis
Recall the length-N normalized complex harmonic sinusoids (normalized!)
sk[n] = e^{j(2π/N)kn}/√N = (1/√N) (cos((2π/N)kn) + j sin((2π/N)kn)),  0 ≤ n, k ≤ N − 1
(plots: cos((2π/16)kn)/√N and sin((2π/16)kn)/√N for k = 2)
⟨sk, sl⟩ = 0, k ≠ l,  ‖sk‖₂ = 1
(figure: eigenvector matrix for N = 16)
E-32
Signal Representation by Harmonic Sinusoids
Given the normalized complex harmonic sinusoids {sk}_{k=0}^{N−1} and the orthonormal basis matrix S,
we define the (normalized) discrete Fourier transform (DFT) for any signal x ∈ C^N
Analysis (forward DFT): X = Sᴴx,  X[k] = ⟨x, sk⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N
Synthesis (inverse DFT): x = SX,  x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N
E-33
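The analysis/synthesis pair can be checked against a standard FFT routine. A numpy sketch (numpy is an assumption; `np.fft.fft`, like Matlab's `fft`, computes the unnormalized sum, so it is divided by √N to match the normalized definition):

```python
import numpy as np

# The normalized DFT above, computed two ways: by the basis matrix S,
# and by the FFT scaled by 1/sqrt(N).
N = 8
rng = np.random.default_rng(3)
x = rng.standard_normal(N)

n = np.arange(N)
S = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

X = S.conj().T @ x                           # analysis: X = S^H x
X_fft = np.fft.fft(x) / np.sqrt(N)           # same thing via the FFT
x_back = S @ X                               # synthesis: x = S X recovers x
```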
Interpretation: Signal Representation by Harmonic Sinusoids
Analysis (Forward DFT)
• Choose the DFT coefficients X[k] such that the synthesis produces the signal x
• The weight X[k] measures the similarity between x and the harmonic sinusoid sk
• Therefore, X[k] measures the “frequency content” of x at frequency k
X[k] = ⟨x, sk⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N  (analysis)
x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N  (synthesis)
E-34
Example: Signal Representation by Harmonic Sinusoids
Analysis (Forward DFT)
• Choose the DFT coefficients X[k] such that the synthesis produces the signal x
• X[k] measures the similarity between x and the harmonic sinusoid sk
• Therefore, X[k] measures the “frequency content” of x at frequency k
• Even if the signal x is real-valued, the DFT coefficients X will be complex, in general
X[k] = ⟨x, sk⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N
(plots: a signal x[n] in the time domain and its DFT magnitude |X[k]| in the frequency domain)
E-35
The Unnormalized DFT
Normalized forward and inverse DFT
X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N
x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N
Unnormalized forward and inverse DFT is more popular in practice (we will use both)
Xu[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}
x[n] = (1/N) Σ_{k=0}^{N−1} Xu[k] e^{j(2π/N)kn}
E-36
Aside: Physical Implementation of Fourier Analysis and Synthesis
Before computers, Fourier analysis had to be done by hand
Albert Michelson invented a machine that could physically perform such calculations
E-37
Aside: Physical Implementation of Fourier Analysis and Synthesis
For synthesis, the a) crank turns a series of b) gears that spin at different rates, moving c) arms
up and down in sinusoidal fashion
d) Levers weight the various sinusoidal movements, which are summed with e) springs and a bar
f) The result is plotted
E-38
Aside: Physical Implementation of Fourier Analysis and Synthesis
For analysis, the d) levers are oriented into the shape of a single period of a signal
Turning the crank amounts to taking the inner product of this shape with sinusoids (each turn
makes a higher frequency sinusoid across the c) bars)
The result is that the Fourier transform is then f) plotted
To see videos of synthesis and analysis at work, see http://www.engineerguy.com/fourier/
E-39
Summary
The discrete Fourier transform (DFT) is an orthonormal basis transformation based on the
harmonic sinusoids sk[n] = e^{j(2π/N)kn}/√N
The DFT maps signals from the “time domain” (x[n]) to the “frequency domain” (X[k])
The DFT coefficient X[k] measures the similarity between the time signal x and the harmonic
sinusoid sk with frequency k
The set of DFT coefficients X contains all of the information in the signal x (and vice versa)
Do not confuse the normalized and unnormalized DFTs! The normalized DFT is more elegant,
but the unnormalized DFT is much more popular in practice
E-40
Discrete Fourier Transform
Examples
Discrete Fourier Transform
Useful Matlab commands: fft, fftshift, semilogy
E-42
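The `fftshift` command mentioned above reorders the DFT bins so that the frequency index runs over −N/2 … N/2 − 1 instead of 0 … N − 1. A small numpy sketch of the bin mapping:

```python
import numpy as np

N = 8
k = np.arange(N)
# DFT bins k >= N/2 alias to negative frequencies k - N (periodicity mod N)
k_signed = np.where(k < N // 2, k, k - N)
# fftshift reorders the bins so that k runs over -N/2 .. N/2 - 1
assert list(np.fft.fftshift(k_signed)) == [-4, -3, -2, -1, 0, 1, 2, 3]
```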
Discrete Fourier Transform
Properties
Properties of the DFT
Normalized forward and inverse DFT
X[k] = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}}

x[n] = \sum_{k=0}^{N-1} X[k] \, \frac{e^{j\frac{2\pi}{N}kn}}{\sqrt{N}}
Unnormalized forward and inverse DFT is more popular in practice (we will use both)
X_u[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j\frac{2\pi}{N}kn}

x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X_u[k] \, e^{j\frac{2\pi}{N}kn}
E-44
DFT Pairs
E-45
The DFT is Periodic
The DFT is of finite length N , but it can also be interpreted as periodic with period N
Proof
X[k+lN] = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}(k+lN)n}}{\sqrt{N}} = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}} \, e^{-j 2\pi l n} = X[k] ✓
[Figure: |X[k]| for a DFT of length N = 16, plotted over −16 ≤ k ≤ 31 to show the periodicity]
E-46
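The periodicity can be verified by evaluating the DFT sum directly at an index outside 0 … N − 1. A small numpy sketch:

```python
import numpy as np

np.random.seed(0)
N = 16
x = np.random.randn(N)
n = np.arange(N)

def dft_at(k):
    # evaluate the (unnormalized) DFT sum at an arbitrary integer index k
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

# X[k + lN] = X[k] for any integer l
assert np.isclose(dft_at(3), dft_at(3 + N))
assert np.isclose(dft_at(3), dft_at(3 - 2 * N))
```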
DFT Frequencies

X[k] = \langle x, s_k \rangle = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}}
X[k] measures the similarity between the time signal x and the harmonic sinusoid sk
[Figure: |X[k]| plotted for 0 ≤ k ≤ N − 1 = 15]
E-47
DFT Frequencies and Periodicity
Periodicity of the DFT means we can treat frequencies mod N

X[k] measures the "frequency content" of x at frequency \omega_k = \frac{2\pi}{N}(k)_N
[Figure: |X[k]| plotted over −16 ≤ k ≤ 31, showing the same values repeating with period N = 16]
E-48
DFT Frequency Ranges
Periodicity of DFT means every length-N interval of k carries the same information
Typical interval 1: 0 \le k \le N-1 corresponds to frequencies \omega_k in the interval 0 \le \omega < 2\pi

Typical interval 2: -\frac{N}{2} \le k \le \frac{N}{2}-1 corresponds to frequencies \omega_k in the interval -\pi \le \omega < \pi

[Figures: |X[k]| plotted over each of the two typical intervals for N = 16]
E-49
The Inverse DFT is Periodic
x[n] = \sum_{k=0}^{N-1} X[k] \, \frac{e^{j\frac{2\pi}{N}kn}}{\sqrt{N}}

The time signal produced by the inverse DFT (synthesis) is periodic with period N

Proof

x[n+mN] = \sum_{k=0}^{N-1} X[k] \, \frac{e^{j\frac{2\pi}{N}k(n+mN)}}{\sqrt{N}} = \sum_{k=0}^{N-1} X[k] \, \frac{e^{j\frac{2\pi}{N}kn}}{\sqrt{N}} \, e^{j 2\pi km} = x[n] ✓
This should not be surprising, since the harmonic sinusoids are periodic with period N
E-50
DFT and Circular Shift

If x[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} X[k], then the circularly shifted signal satisfies

x[(n-m)_N] \stackrel{\mathrm{DFT}}{\longleftrightarrow} e^{-j\frac{2\pi}{N}km} X[k] ✓
E-51
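The circular shift property can be checked with `np.roll`, which performs exactly the circular shift x[(n − m)_N]. A minimal numpy sketch:

```python
import numpy as np

np.random.seed(0)
N = 16
m = 3                                   # circular shift amount
x = np.random.randn(N)
k = np.arange(N)

X_shifted = np.fft.fft(np.roll(x, m))   # DFT of x[(n - m)_N]
# circular shift in time <-> multiplication by e^{-j 2 pi k m / N} in frequency
assert np.allclose(X_shifted, np.exp(-2j * np.pi * k * m / N) * np.fft.fft(x))
```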
DFT and Modulation
If x[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} X[k], then the modulated signal x[n] \, e^{j\frac{2\pi}{N}ln} has DFT X[(k-l)_N]

Proof:

\sum_{n=0}^{N-1} x[n] \, e^{j\frac{2\pi}{N}ln} \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}} = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}(k-l)n}}{\sqrt{N}} = X[(k-l)_N] ✓
E-52
DFT and Circular Convolution
x → h → y

y[n] = x[n] \circledast h[n] = \sum_{m=0}^{N-1} h[(n-m)_N] \, x[m]

If

x[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} X_u[k], \quad h[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} H_u[k], \quad y[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} Y_u[k]

then

Y_u[k] = H_u[k] \, X_u[k]

Proof

Y_u[k] = \sum_{n=0}^{N-1} y[n] \, e^{-j\frac{2\pi}{N}kn} = \sum_{n=0}^{N-1} \left( \sum_{m=0}^{N-1} h[(n-m)_N] \, x[m] \right) e^{-j\frac{2\pi}{N}kn}

= \sum_{m=0}^{N-1} x[m] \left( \sum_{n=0}^{N-1} h[(n-m)_N] \, e^{-j\frac{2\pi}{N}kn} \right) = \sum_{m=0}^{N-1} x[m] \left( \sum_{r=0}^{N-1} h[r] \, e^{-j\frac{2\pi}{N}k(r+m)} \right)

= \left( \sum_{m=0}^{N-1} x[m] \, e^{-j\frac{2\pi}{N}km} \right) \left( \sum_{r=0}^{N-1} h[r] \, e^{-j\frac{2\pi}{N}kr} \right) = X_u[k] \, H_u[k] ✓
E-54
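The circular convolution theorem can be verified directly from the definition. A minimal numpy sketch:

```python
import numpy as np

np.random.seed(0)
N = 16
x = np.random.randn(N)
h = np.random.randn(N)

# circular convolution computed directly from its definition
y = np.array([sum(h[(n - m) % N] * x[m] for m in range(N)) for n in range(N)])

# convolution theorem: Y_u[k] = H_u[k] X_u[k]
assert np.allclose(np.fft.fft(y), np.fft.fft(h) * np.fft.fft(x))
assert np.allclose(y, np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real)
```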
The DFT is Linear

If x[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} X[k] and y[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} Y[k], then

\alpha \, x[n] + \beta \, y[n] \stackrel{\mathrm{DFT}}{\longleftrightarrow} \alpha \, X[k] + \beta \, Y[k]
E-55
The DFT is Complex Valued
Even if the signal x[n] is real-valued, the DFT is complex-valued, in general
[Figure: a real-valued signal x[n] (N = 32) and its DFT plotted four ways: Re(X[k]), Im(X[k]), |X[k]|, and ∠X[k]]
E-56
DFT Symmetry Properties (1)
The harmonic sinusoid basis elements s_k[n] = e^{j\frac{2\pi}{N}kn} of the DFT have symmetry properties:

\mathrm{Re}\left( e^{j\frac{2\pi}{N}kn} \right) = \cos\left( \frac{2\pi}{N}kn \right) \quad \text{(even function)}

\mathrm{Im}\left( e^{j\frac{2\pi}{N}kn} \right) = \sin\left( \frac{2\pi}{N}kn \right) \quad \text{(odd function)}
Even signal/DFT
x[n] = x[(−n)N ], X[k] = X[(−k)N ]
Odd signal/DFT
x[n] = −x[(−n)N ], X[k] = −X[(−k)N ]
E-57
DFT Symmetry Properties (2)
x[n] X[k] Re(X[k]) Im(X[k]) |X[k]| ∠X[k]
E-58
DFT Symmetry Properties (3)
Simply compute X[k]∗ and use the fact that x[n] is real
X[k]^* = \left( \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}} \right)^* = \sum_{n=0}^{N-1} x[n]^* \, \frac{e^{+j\frac{2\pi}{N}kn}}{\sqrt{N}} = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}(-k)n}}{\sqrt{N}} = X[-k] ✓
Easy to continue on to prove that Re(X[−k]) = Re(X[k]) (that is, the real part of X[k] is even)
by taking the real part of both sides of the equation X[−k] = X[k]∗
E-59
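Conjugate symmetry of the DFT of a real signal is easy to check numerically, remembering that negative indices are taken mod N:

```python
import numpy as np

np.random.seed(0)
N = 16
x = np.random.randn(N)      # real-valued signal
X = np.fft.fft(x)
k = np.arange(N)

# conjugate symmetry for real signals: X[-k] = X[k]* (indices taken mod N)
assert np.allclose(X[(-k) % N], np.conj(X))
```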
DFT Symmetry Properties (4)
Example: Real-valued signal x[n]
[Figure: a real-valued signal x[n] (N = 32) and its DFT: Re(X[k]) is even, Im(X[k]) is odd, |X[k]| is even, ∠X[k] is odd]
E-60
DFT Symmetry Properties (5)
Example: Real-valued signal x[n], but plotting X[k] using Matlab fftshift command
[Figure: the same DFT plotted over −N/2 ≤ k ≤ N/2 − 1 using fftshift, which makes the even/odd symmetries about k = 0 apparent]
E-61
Duality of the DFT
Note that the inverse and forward DFT formulas are identical except for conjugation of the
harmonic sinusoids
X[k] = \sum_{n=0}^{N-1} x[n] \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}}

x[n] = \sum_{k=0}^{N-1} X[k] \, \frac{e^{j\frac{2\pi}{N}kn}}{\sqrt{N}}
Thus, any DFT property that is true for x[n] is also true for X[−k]
E-62
Summary
E-63
Fast Fourier Transform
(FFT)
Cost to Compute the Discrete Fourier Transform
Recall the (unnormalized) DFT of the time signal x[n]

X_u[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j\frac{2\pi}{N}kn}, \quad 0 \le k \le N-1

Number of multiplies: N products x[n] \, e^{-j\frac{2\pi}{N}kn} for each value of k = 0, 1, \ldots, N-1 \Rightarrow N^2 total multiplies

Number of adds: N-1 adds for each value of k = 0, 1, \ldots, N-1 \Rightarrow N(N-1) \approx N^2 total adds
E-65
Fast Fourier Transform
O(N^2) computational complexity is too high for many important applications; it is not uncommon to have N = 10^7 or more
Important step forward in 1965: Cooley and Tukey “discovered” the fast Fourier transform
(FFT), which lowers the computational complexity to O(N log N )
It turns out that Gauss invented the FFT in 1805 (Heideman, Johnson, Burrus, 1984)
E-66
Fast Fourier Transform
There are many different kinds of FFTs; here we will study the simplest:
the radix-2, decimation-in-time FFT
Clearly we can use the same methods to speed up both the forward and inverse DFT (by duality);
we will work with the forward DFT (and drop the subscript u for unnormalized)
X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j\frac{2\pi}{N}kn}

To keep the notation clean, define the twiddle factor: W_N = e^{-j\frac{2\pi}{N}} \in \mathbb{C}

X[k] = \sum_{n=0}^{N-1} x[n] \, W_N^{kn}
E-67
Twiddle Factors are Periodic
Note that the twiddle factors W_N = e^{-j\frac{2\pi}{N}} are periodic in n and k

W_N^{kn} = W_N^{k(n+N)} = W_N^{(k+N)n}

Proof

e^{-j\frac{2\pi}{N}k(n+N)} = e^{-j\frac{2\pi}{N}kn} \, e^{-j 2\pi k} = e^{-j\frac{2\pi}{N}kn}, \quad e^{-j\frac{2\pi}{N}(k+N)n} = e^{-j\frac{2\pi}{N}kn} \, e^{-j 2\pi n} = e^{-j\frac{2\pi}{N}kn}
E-68
FFT – Step 1
In the radix-2, decimation-in-time FFT, the signal length N is a power of 2
The FFT is a divide and conquer algorithm: We will split the length-N DFT into two length-N/2 DFTs and then iterate; each split will save on computations
We will work out the specific example of an N = 8 DFT, but the ideas extend to any
power-of-two length
X[k] = \sum_{n=0}^{N-1} x[n] \, W_N^{kn} = \sum_{n=0}^{N/2-1} x[2n] \, W_N^{k(2n)} + \sum_{n=0}^{N/2-1} x[2n+1] \, W_N^{k(2n+1)}
E-69
FFT – Step 2
Step 2: Reorganize the two sums into two length-N/2 DFTs
X[k] = \sum_{n=0}^{N-1} x[n] \, W_N^{kn} = \sum_{n=0}^{N/2-1} x[2n] \, W_N^{k(2n)} + \sum_{n=0}^{N/2-1} x[2n+1] \, W_N^{k(2n+1)}

= \sum_{n=0}^{N/2-1} x[2n] \, W_N^{2kn} + W_N^k \sum_{n=0}^{N/2-1} x[2n+1] \, W_N^{2kn}

Note that W_N^{2kn} = e^{-j\frac{2\pi}{N}2kn} = e^{-j\frac{2\pi}{N/2}kn} = W_{N/2}^{kn} and so we have . . .

Term 1 = E[k] = \sum_{n=0}^{N/2-1} x[2n] \, W_{N/2}^{kn} = N/2-point DFT of the even samples of x[n]

Term 2 = W_N^k \, O[k] = W_N^k \sum_{n=0}^{N/2-1} x[2n+1] \, W_{N/2}^{kn} = W_N^k times the N/2-point DFT of the odd samples of x[n]
E-70
FFT – Step 3
Step 3: Not so fast! We need to evaluate
X[k] = \sum_{n=0}^{N/2-1} x[2n] \, W_{N/2}^{kn} + W_N^k \sum_{n=0}^{N/2-1} x[2n+1] \, W_{N/2}^{kn} = E[k] + W_N^k \, O[k]

for all k = 0, 1, \ldots, N-1, even though E[k] and O[k] are only length-N/2 DFTs

Periodicity of the twiddle factors implies that E[k] and O[k] are also periodic with period N/2

E[k+N/2] = \sum_{n=0}^{N/2-1} x[2n] \, W_{N/2}^{(k+N/2)n} = \sum_{n=0}^{N/2-1} x[2n] \, W_{N/2}^{kn} = E[k]

and similarly

O[k+N/2] = O[k]
E-71
FFT – The Result
X[k] = E[k] + WNk O[k], k = 0, 1, . . . , N − 1
E-72
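The recursion X[k] = E[k] + W_N^k O[k], together with the periodicities E[k + N/2] = E[k], O[k + N/2] = O[k], and the identity W_N^{k+N/2} = −W_N^k, is all that is needed to implement the radix-2, decimation-in-time FFT. A minimal numpy sketch:

```python
import numpy as np

def fft_radix2(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    E = fft_radix2(x[0::2])                            # N/2-point DFT of even samples
    O = fft_radix2(x[1::2])                            # N/2-point DFT of odd samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)    # twiddle factors W_N^k
    # X[k] = E[k] + W_N^k O[k]; for k >= N/2 use E[k+N/2] = E[k],
    # O[k+N/2] = O[k], and W_N^{k+N/2} = -W_N^k
    return np.concatenate([E + W * O, E - W * O])

np.random.seed(0)
x = np.random.randn(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```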
FFT – Iterate
Divide and Conquer! Break the two length-N/2 DFTs into four length-N/4 DFTs
E-73
FFT – Divided and Conquered
E-74
FFT of Length 8
E-75
Computational Savings
FFT: multiplies and adds required (N = 8)
• 2 × 8 = 16 multiplies
• 3 × 8 = 24 adds

DFT: multiplies and adds required (N = 8)
• 8^2 = 64 multiplies
• 8^2 − 8 = 56 adds

In general, the FFT costs O(N \log_2 N) operations vs. O(N^2) for the direct DFT
E-76
Summary
The FFT has been called the “most important computational algorithm of our generation”
The field of digital signal processing exploded after its introduction (1965)
Why it works:
• Symmetry and periodicity of sinusoids
• Divide and conquer
There are many different kinds of FFTs for different lengths and different situations
E-77
Fast Convolution
Cost to Compute a Circular Convolution
Recall the circular convolution of two length-N time signals x[n] and h[n]
y[n] = x[n] \circledast h[n] = \sum_{m=0}^{N-1} h[(n-m)_N] \, x[m], \quad 0 \le n \le N-1

Number of multiplies: Must multiply h[(n-m)_N] \, x[m] for each value of n = 0, 1, \ldots, N-1 \Rightarrow N^2 total multiplies

Number of adds: Must sum the N products h[(n-m)_N] \, x[m] for each value of n = 0, 1, \ldots, N-1 \Rightarrow N(N-1) \approx N^2 total adds
E-79
Circular Convolution via the FFT
We can reduce the computational cost substantially if we move to the frequency domain using
the DFT as computed by the FFT
E-80
Extension to Fast Convolution of Infinite-Length, Finite-Duration Signals
Applications tend to use (at least implicitly) infinite-length convolution more often than circular
convolution
Fortunately there is a clever way to trick a circular convolution into performing an infinite-length
convolution
Basic idea: zero pad the signals such that any circular wrap-around effects are zeroed out by the
zero padding
E-81
Duration of Convolution
Recall that, if x has duration Dx samples and h has duration Dh samples, then the infinite-length
convolution y = x ∗ h has duration at most Dx + Dh − 1 samples (proof by picture is simple)
[Figure: x[n] with duration D_x = 24, h[n] with duration D_h = 14, and their convolution y[n] with duration D_y = 24 + 14 − 1 = 37]
E-82
Extension to Fast Convolution of Infinite-Length, Finite-Duration Signals
If x has duration Dx samples and h has duration Dh samples, then the infinite-length
convolution y = x ∗ h has duration at most Dx + Dh − 1 samples
If we zero pad both x and h to length Dx + Dh − 1 and compute their circular convolution
y 0 = xzp ~ hzp . . .
Then the nonzero entries of the circular convolution y 0 will agree with those of the infinite-length
convolution y
E-83
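The zero-padding trick can be sketched in a few lines of numpy: pad both signals to length D_x + D_h − 1, circularly convolve via FFTs, and the result matches the infinite-length (linear) convolution:

```python
import numpy as np

def fast_conv(x, h):
    """Linear convolution via zero-padded circular convolution (FFTs)."""
    L = len(x) + len(h) - 1                  # duration D_x + D_h - 1
    X = np.fft.fft(x, n=L)                   # fft(. , n=L) zero-pads to length L
    H = np.fft.fft(h, n=L)
    return np.fft.ifft(X * H).real           # real inputs -> real output

np.random.seed(0)
x = np.random.randn(24)
h = np.random.randn(14)
assert np.allclose(fast_conv(x, h), np.convolve(x, h))
```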
Summary
E-84
More Orthogonal Bases
Recall: Signal Representation by Orthonormal Basis
Given an orthonormal basis \{b_k\}_{k=0}^{N-1} for \mathbb{C}^N and orthonormal basis matrix B, we have the following signal representation for any signal x

x = B a = \sum_{k=0}^{N-1} \alpha_k \, b_k \quad \text{(synthesis)}

a = B^H x \quad \text{or} \quad \alpha_k = \langle x, b_k \rangle \quad \text{(analysis)}
Synthesis: Build up the signal x as a linear combination of the basis elements bk weighted by
the weights αk
Analysis: Compute the weights αk such that the synthesis produces x; the weight αk measures
the similarity between x and the basis element bk
E-86
More Orthonormal Bases
The DFT is the right transform to study LTI systems; the frequency domain arises naturally
Challenge 1: A signal x ∈ R^N of N real numbers has N complex DFT coefficients, i.e., 2N real numbers (the real and imaginary parts of each DFT coefficient)

• This is a problem in compression applications, where we would like to approximate a smooth signal x by just a few DFT coefficients (2× redundancy)
Challenge 2: Some signals are best represented neither in the time domain nor the frequency
domain
• For example, in a domain in between time and frequency (“time-frequency”, like a musical score)
Due to these and other challenges, there has been significant interest in developing additional
orthonormal basis transformations beyond the DFT
E-87
Discrete Cosine Transform (DCT)
A DFT-like transform but using real-valued basis functions (dct in Matlab)
• There are actually several different DCTs; here we present just one of them (the “DCT-II”)
• DCT is the transform inside JPEG image compression and MPEG video compression
[Figure: the DCT basis function d_3[n] for N = 32, k = 3, a real-valued cosine]
Note: Not periodizable like the (complex) harmonic sinusoids of the DFT basis
E-88
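The DCT-II basis presented here is real-valued and orthonormal, which can be checked by building the basis matrix explicitly. A minimal numpy sketch of the standard orthonormal DCT-II basis (the same convention Matlab's `dct` uses):

```python
import numpy as np

N = 32
n = np.arange(N)
k = np.arange(N)[:, None]

# orthonormal DCT-II basis: d_k[n] = c_k cos(pi (2n+1) k / (2N)),
# with c_0 = sqrt(1/N) and c_k = sqrt(2/N) for k > 0
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0, :] /= np.sqrt(2.0)

# the rows form an orthonormal basis for R^N: D D^T = I
assert np.allclose(D @ D.T, np.eye(N))
```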
DCT Orthogonal Basis Matrix
DCT basis matrix compared to the real/imaginary parts of the DFT basis matrix (N = 16)
E-89
DCT vs. DFT for Compression
DFT (real and imaginary parts) and DCT of a test signal
[Figure: real and imaginary parts of the DFT and the DCT of a smooth test signal; the DCT coefficients decay much faster, so far fewer are needed for a good approximation]
Between Time and Frequency
Some signals are best represented neither in the time domain nor the frequency domain
For example, many transient signals (audio, speech, images, etc.) are best represented in a
domain in between time and frequency (“time-frequency”, like a musical score)
E-91
Haar Wavelet Transform
Haar wavelet transform (1910): Key departure from DFT and DCT
• Basis functions are local (short duration, “local waves”)
• Basis functions are multiscale (many different durations) edge detectors (derivatives)
[Figure: fine scale, mid scale, and coarse scale Haar wavelets (N = 16)]
E-92
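One level of the Haar transform can be sketched directly: pairwise averages give a coarse approximation and pairwise differences give the fine-scale wavelet coefficients; the 1/√2 scaling makes the transform orthonormal:

```python
import numpy as np

# one level of the orthonormal Haar transform: pairwise averages (coarse
# approximation) and pairwise differences (fine-scale wavelet coefficients)
x = np.array([4.0, 6.0, 10.0, 12.0, 5.0, 3.0, 0.0, 2.0])
approx = (x[0::2] + x[1::2]) / np.sqrt(2)
detail = (x[0::2] - x[1::2]) / np.sqrt(2)

# the transform is orthonormal, so energy is preserved
assert np.isclose(np.sum(x**2), np.sum(approx**2) + np.sum(detail**2))

# and it is perfectly invertible
x_rec = np.empty_like(x)
x_rec[0::2] = (approx + detail) / np.sqrt(2)
x_rec[1::2] = (approx - detail) / np.sqrt(2)
assert np.allclose(x_rec, x)
```

Iterating the same split on `approx` produces the coarser scales shown in the figure.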
Haar Wavelet Transform Basis Matrix
Wavelets are inside JPEG2000 image compression and many image analysis/processing systems
Haar wavelet basis matrix compared to the real/imaginary parts of the DFT basis matrix
(N = 16)
E-93
Short-Time Fourier Transform (1)
STFT analyzes how a signal’s frequency content changes over time – local Fourier analysis
E-94
Short-Time Fourier Transform (2)
STFT analyzes how a signal’s frequency content changes over time – local Fourier analysis
E-95
Short-Time Fourier Transform (3)
|STFT|2 is called the spectrogram (spectrogram in Matlab)
The STFT can be configured to be an orthonormal basis, but this is generally not done in practice
E-96
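The STFT idea (local Fourier analysis) can be sketched in a few lines: slide a window along the signal, take an FFT of each windowed frame, and stack the spectra. This is a minimal, assumption-laden sketch (Hann window, fixed hop) rather than a full-featured spectrogram:

```python
import numpy as np

def stft_mag(x, win_len=64, hop=32):
    """Magnitude STFT: FFTs of overlapping windowed frames (a minimal spectrogram)."""
    win = np.hanning(win_len)
    frames = np.array([x[i:i + win_len] * win
                       for i in range(0, len(x) - win_len + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=-1))   # one spectrum per time frame

# a chirp: frequency sweeps upward, so energy moves to higher bins over time
n = np.arange(4096)
x = np.sin(2 * np.pi * (0.01 + 0.05 * n / len(n)) * n)
S = stft_mag(x)
# rows = time frames, columns = frequency bins 0 .. win_len/2
assert S.shape == ((4096 - 64) // 32 + 1, 64 // 2 + 1)
```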
Summary
But other orthogonal bases play important roles in diverse applications, especially signal analysis
and compression
E-97
The Discrete-Time Fourier
Transform
Set F
Discrete Time Fourier Transform
(DTFT)
Discrete Time Fourier Transform (DTFT)
The DTFT is the Fourier transform of choice for analyzing infinite-length signals and systems
Useful for conceptual, pencil-and-paper work, but not Matlab friendly (infinitely long vectors)
Properties are very similar to the Discrete Fourier Transform (DFT) with a few caveats
We will derive the DTFT as the limit of the DFT as the signal length N → ∞
F-2
Recall: DFT (Unnormalized)
Analysis (Forward DFT)
• Choose the DFT coefficients X[k] such that the synthesis produces the signal x
• The weight X[k] measures the similarity between x and the harmonic sinusoid sk
• Therefore, X[k] measures the “frequency content” of x at frequency k
X_u[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j\frac{2\pi}{N}kn}

x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X_u[k] \, e^{j\frac{2\pi}{N}kn}
F-3
The Centered DFT
Both x[n] and X[k] are periodic with period N , so we can shift the intervals of interest in time
and frequency to be centered around n, k = 0
-\frac{N}{2} \le n, k \le \frac{N}{2} - 1

x[n] = \frac{1}{N} \sum_{k=-N/2}^{N/2-1} X_u[k] \, e^{j\frac{2\pi}{N}kn}, \quad -\frac{N}{2} \le n \le \frac{N}{2} - 1
F-4
Frequencies of the Centered DFT
X_u[k] = \sum_{n=-N/2}^{N/2-1} x[n] \, e^{-j\frac{2\pi}{N}kn}, \quad -\frac{N}{2} \le k \le \frac{N}{2} - 1

X_u[k] measures the similarity between the time signal x and the harmonic sinusoid s_k

[Figure: |X_u[k]| plotted over the centered interval −N/2 ≤ k ≤ N/2 − 1]
F-5
Take It To The Limit (1)
X_u[k] = \sum_{n=-N/2}^{N/2-1} x[n] \, e^{-j\frac{2\pi}{N}kn}, \quad -\frac{N}{2} \le k \le \frac{N}{2} - 1

Let the signal length N increase towards ∞ and study what happens to X_u[k]

Key fact: No matter how large N grows, the frequencies of the DFT sinusoids remain in the interval

-\pi \le \omega_k = \frac{2\pi}{N} k < \pi

[Figure: |X[k]| for N = 32 over the centered interval]
F-6
Take It To The Limit (2)

X_u[k] = \sum_{n=-N/2}^{N/2-1} x[n] \, e^{-j\frac{2\pi}{N}kn}

[Figure: a time signal x[n] and its DFT X[k] for lengths N = 32, 64, 128, 256; as N grows, the DFT coefficients sample an increasingly dense set of frequencies in −π ≤ ω < π]

F-7
Discrete Time Fourier Transform (Forward)
As N → ∞, the forward DFT converges to a function of the continuous frequency variable ω
that we will call the forward discrete time Fourier transform (DTFT)
\sum_{n=-N/2}^{N/2-1} x[n] \, e^{-j\frac{2\pi}{N}kn} \;\longrightarrow\; \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n} = X(\omega), \quad -\pi \le \omega < \pi

Analysis interpretation: The value of the DTFT X(\omega) at frequency \omega measures the similarity of the infinite-length signal x[n] to the infinite-length sinusoid e^{j\omega n}
F-8
Discrete Time Fourier Transform (Inverse)
Inverse unnormalized DFT

x[n] = \frac{1}{2\pi} \sum_{k=-N/2}^{N/2-1} X_u[k] \, e^{j\frac{2\pi}{N}kn} \, \frac{2\pi}{N}

In the limit as the signal length N → ∞, the inverse DFT converges in a more subtle way:

e^{j\frac{2\pi}{N}kn} \longrightarrow e^{j\omega n}, \quad X_u[k] \longrightarrow X(\omega), \quad \frac{2\pi}{N} \longrightarrow d\omega, \quad \sum_{k=-N/2}^{N/2-1} \longrightarrow \int_{-\pi}^{\pi}

yielding the inverse DTFT

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) \, e^{j\omega n} \, d\omega
The core “basis functions” of the DTFT are the sinusoids ejωn with arbitrary frequencies ω
The DTFT can be derived as the limit of the DFT as the signal length N → ∞
The analysis/synthesis interpretation of the DFT holds for the DTFT, as do most of its properties
F-10
Eigenanalysis of
LTI Systems
(Infinite-Length Signals)
LTI Systems for Infinite-Length Signals
x H y
y = Hx
For infinite length signals, H is an infinitely large Toeplitz matrix with entries
[H]n,m = h[n − m]
Eigenvectors v are input signals that emerge at the system output unchanged (except for a
scaling by the eigenvalue λ) and so are somehow “fundamental” to the system
F-12
Eigenvectors of LTI Systems
Fact: The eigenvectors of a Toeplitz matrix (LTI system) are the complex sinusoids
s_\omega → H → \lambda_\omega s_\omega

[Figure: cos(ωn) and sin(ωn) inputs emerge from the LTI system unchanged except for scaling by λ_ω]
F-13
Sinusoids are Eigenvectors of LTI Systems
sω H λω sω
Prove that complex sinusoids are the eigenvectors of LTI systems simply by computing the convolution with input s_\omega (infinite-length)

s_\omega[n] * h[n] = \sum_{m=-\infty}^{\infty} s_\omega[n-m] \, h[m] = \sum_{m=-\infty}^{\infty} e^{j\omega(n-m)} \, h[m]

= e^{j\omega n} \sum_{m=-\infty}^{\infty} e^{-j\omega m} \, h[m] = \left( \sum_{m=-\infty}^{\infty} h[m] \, e^{-j\omega m} \right) e^{j\omega n} = \lambda_\omega \, s_\omega[n] ✓
F-14
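The eigenvector relation can be checked numerically for a finite impulse response: feed in a complex sinusoid, keep only the fully supported ("valid") output samples, and verify they equal the input scaled by λ_ω:

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])              # FIR impulse response
w = 2 * np.pi / 10                            # test frequency
n = np.arange(50)
s = np.exp(1j * w * n)                        # complex sinusoid input

y = np.convolve(s, h, mode="valid")           # output where the sum is fully supported
lam = np.sum(h * np.exp(-1j * w * np.arange(len(h))))   # lambda_w = H(w)

# the sinusoid emerges unchanged except for scaling by the eigenvalue
assert np.allclose(y, lam * s[len(h) - 1:])
```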
Eigenvalues of LTI Systems
The eigenvalue \lambda_\omega \in \mathbb{C} corresponding to the sinusoid eigenvector s_\omega is called the frequency response at frequency \omega since it measures how the system "responds" to s_\omega

\lambda_\omega = \sum_{n=-\infty}^{\infty} h[n] \, e^{-j\omega n} = \langle h, s_\omega \rangle = H(\omega) \quad \text{(DTFT of } h\text{)}
Recall properties of the inner product: λω grows/shrinks as h and sω become more/less similar
[Figure: s_ω → H → λ_ω s_ω; cosine and sine inputs emerge scaled by λ_ω]
F-15
Eigendecomposition and Diagonalization of an LTI System
x H y
y[n] = x[n] * h[n] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]

While we can't explicitly display the infinitely large matrices involved, we can use the DTFT to "diagonalize" an LTI system

Taking the DTFTs of x and h

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n}, \quad H(\omega) = \sum_{n=-\infty}^{\infty} h[n] \, e^{-j\omega n}

we have that

Y(\omega) = X(\omega) \, H(\omega)

and then

y[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} Y(\omega) \, e^{j\omega n} \, d\omega
F-16
Summary
Complex sinusoids are the eigenfunctions of LTI systems for infinite-length signals
(Toeplitz matrices)
Therefore, the discrete time Fourier transform (DTFT) is the natural tool for studying LTI
systems for infinite-length signals
Frequency response H(ω) equals the DTFT of the impulse response h[n]
F-17
Discrete Time Fourier Transform
Examples
Discrete Time Fourier Transform
X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n}, \quad -\pi \le \omega < \pi

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) \, e^{j\omega n} \, d\omega, \quad -\infty < n < \infty
The Fourier transform of choice for analyzing infinite-length signals and systems
Useful for conceptual, pencil-and-paper work, but not Matlab friendly (infinitely long vectors)
F-19
Impulse Response of the Ideal Lowpass Filter (1)
The frequency response H(ω) of the ideal low-pass filter passes low frequencies (near ω = 0)
but blocks high frequencies (near ω = ±π)
H(\omega) = \begin{cases} 1 & -\omega_c \le \omega \le \omega_c \\ 0 & \text{otherwise} \end{cases}

[Figure: H(ω) equals 1 on the band −ω_c ≤ ω ≤ ω_c and 0 elsewhere on −π ≤ ω < π]
Compute the impulse response h[n] given this H(ω)
F-20
Impulse Response of the Ideal Lowpass Filter (2)
The frequency response H(ω) of the ideal low-pass filter passes low frequencies (near ω = 0)
but blocks high frequencies (near ω = ±π)
H(\omega) = \begin{cases} 1 & |\omega| \le \omega_c \\ 0 & \text{otherwise} \end{cases}

Applying the inverse DTFT yields

h[n] = \frac{1}{2\pi} \int_{-\omega_c}^{\omega_c} e^{j\omega n} \, d\omega = \frac{\sin(\omega_c n)}{\pi n} = \frac{\omega_c}{\pi} \, \frac{\sin(\omega_c n)}{\omega_c n}

[Figure: h[n] is a sinc-shaped, infinite-duration impulse response]
DTFT of the Unit Pulse (1)

Consider the unit pulse of duration 2M+1 samples: p[n] = 1 for -M \le n \le M and 0 otherwise

[Figure: p[n] for M = 3]

Forward DTFT

P(\omega) = \sum_{n=-\infty}^{\infty} p[n] \, e^{-j\omega n} = \sum_{n=-M}^{M} e^{-j\omega n}
F-22
DTFT of the Unit Pulse (2)
Apply the finite geometric series formula
P(\omega) = \sum_{n=-\infty}^{\infty} p[n] \, e^{-j\omega n} = \sum_{n=-M}^{M} \left( e^{-j\omega} \right)^n = \frac{e^{j\omega M} - e^{-j\omega(M+1)}}{1 - e^{-j\omega}}

This is an answer, but it is not simplified enough to make sense, so we continue simplifying

P(\omega) = \frac{e^{j\omega M} - e^{-j\omega(M+1)}}{1 - e^{-j\omega}} = \frac{e^{-j\omega/2} \left( e^{j\omega \frac{2M+1}{2}} - e^{-j\omega \frac{2M+1}{2}} \right)}{e^{-j\omega/2} \left( e^{j\omega/2} - e^{-j\omega/2} \right)} = \frac{2j \sin\left( \omega \frac{2M+1}{2} \right)}{2j \sin\left( \frac{\omega}{2} \right)}
F-23
DTFT of the Unit Pulse (3)
Simplified DTFT of the unit pulse of duration Dx = 2M + 1 samples
P(\omega) = \frac{\sin\left( \frac{2M+1}{2} \omega \right)}{\sin\left( \frac{\omega}{2} \right)}

If p[n] is interpreted as the impulse response of the moving average system, then P(\omega) is the frequency response (eigenvalues) (low-pass filter)

[Figure: p[n] for M = 3 and the corresponding P(ω), a periodic sinc-like (Dirichlet kernel) low-pass response]
F-24
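The closed form can be checked against the DTFT sum evaluated directly on a frequency grid (avoiding ω = 0, where the formula is 0/0 but the limit is 2M + 1):

```python
import numpy as np

M = 3
w = np.linspace(0.1, np.pi, 200)              # avoid w = 0 (0/0 in the formula)
n = np.arange(-M, M + 1)

# direct evaluation of the DTFT sum of the unit pulse
P_direct = np.array([np.sum(np.exp(-1j * wi * n)) for wi in w])

# closed form: sin((2M+1) w / 2) / sin(w / 2)  (the Dirichlet kernel)
P_formula = np.sin((2 * M + 1) * w / 2) / np.sin(w / 2)

assert np.allclose(P_direct, P_formula)
```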
DTFT of a One-Sided Exponential
Recall the impulse response of the recursive average system: h[n] = αn u[n], |α| < 1
Forward DTFT

H(\omega) = \sum_{n=-\infty}^{\infty} h[n] \, e^{-j\omega n} = \sum_{n=0}^{\infty} \alpha^n \, e^{-j\omega n} = \sum_{n=0}^{\infty} \left( \alpha \, e^{-j\omega} \right)^n = \frac{1}{1 - \alpha \, e^{-j\omega}}

[Figure: h[n] = α^n u[n] and the magnitude of its low-pass frequency response |H(ω)|]
F-25
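Because αⁿ decays geometrically, a truncated version of the infinite DTFT sum already matches the closed form to machine precision:

```python
import numpy as np

alpha = 0.8
w = np.linspace(-np.pi, np.pi, 64, endpoint=False)

# truncate the infinite DTFT sum; alpha^n decays so the tail is negligible
n = np.arange(300)
H_sum = np.array([np.sum(alpha**n * np.exp(-1j * wi * n)) for wi in w])

H_closed = 1.0 / (1.0 - alpha * np.exp(-1j * w))
assert np.allclose(H_sum, H_closed)
```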
Summary
F-26
Discrete Time Fourier Transform
of a Sinusoid
Discrete Fourier Transform (DFT) of a Harmonic Sinusoid
Thanks to the orthogonality of the length-N harmonic sinusoids, it is easy to calculate the DFT of the harmonic sinusoid x[n] = s_l[n] = e^{j\frac{2\pi}{N}ln}/\sqrt{N}

X[k] = \sum_{n=0}^{N-1} s_l[n] \, \frac{e^{-j\frac{2\pi}{N}kn}}{\sqrt{N}} = \langle s_l, s_k \rangle = \delta[k-l]

[Figure: s_4[n] and its DFT S_4[k] = δ[k − 4] for N = 32]
F-28
DTFT of an Infinite-Length Sinusoid
The calculation for the DTFT and infinite-length signals is much more delicate than for the DFT
and finite-length signals
Calculate the value X(\omega) for the signal x[n] = e^{j\omega_0 n} at a frequency \omega \ne \omega_0

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} e^{j\omega_0 n} \, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} e^{-j(\omega - \omega_0)n} = \; ???
F-29
Dirac Delta Function (1)
One semi-rigorous way to deal with this quandary is to use the Dirac delta "function," which is defined in terms of a limit of ever narrower, ever taller pulses d_\Delta(\omega) of width \Delta

Note that, for all values of the width \Delta, d_\Delta(\omega) always has unit area

\int_{-\infty}^{\infty} d_\Delta(\omega) \, d\omega = 1
F-30
Dirac Delta Function (2)
The safest way to handle a function like d_\Delta(\omega) is inside an integral, like so

\int_{-\infty}^{\infty} X(\omega) \, d_\Delta(\omega) \, d\omega
F-31
Dirac Delta Function (3)
and

\int_{-\infty}^{\infty} X(\omega) \, d_\Delta(\omega - \omega_0) \, d\omega \;\xrightarrow{\Delta \to 0}\; X(\omega_0)

So we can think of d_\Delta(\omega) as a kind of "sampler" that picks out values of functions from inside an integral

We describe the result of this limiting process (as \Delta \to 0) as the Dirac delta "function" \delta(\omega)
F-32
Dirac Delta Function (4)
Dirac delta “function” δ(ω)
We write

\int_{-\infty}^{\infty} X(\omega) \, \delta(\omega) \, d\omega = X(0)

and

\int_{-\infty}^{\infty} X(\omega) \, \delta(\omega - \omega_0) \, d\omega = X(\omega_0)
F-33
Scaled Dirac Delta Function
If we scale the area of d_\Delta(\omega) by L, then it has the following effect in the limit

\int_{-\infty}^{\infty} X(\omega) \, L \, \delta(\omega) \, d\omega = L \, X(0)
F-34
And Now Back to Our Regularly Scheduled Program . . .
Rather than computing the DTFT of a sinusoid using the forward DTFT, we will show that an
infinite-length sinusoid is the inverse DTFT of the scaled Dirac delta function 2πδ(ω − ω0 )
\frac{1}{2\pi} \int_{-\pi}^{\pi} 2\pi \, \delta(\omega - \omega_0) \, e^{j\omega n} \, d\omega = e^{j\omega_0 n}
F-35
DTFT of Real-Valued Sinusoids
Since

\cos(\omega_0 n) = \frac{1}{2} \left( e^{j\omega_0 n} + e^{-j\omega_0 n} \right)

we can calculate its DTFT as

\cos(\omega_0 n) \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; \pi \, \delta(\omega - \omega_0) + \pi \, \delta(\omega + \omega_0)

Since

\sin(\omega_0 n) = \frac{1}{2j} \left( e^{j\omega_0 n} - e^{-j\omega_0 n} \right)

we can calculate its DTFT as

\sin(\omega_0 n) \;\stackrel{\mathrm{DTFT}}{\longleftrightarrow}\; \frac{\pi}{j} \, \delta(\omega - \omega_0) - \frac{\pi}{j} \, \delta(\omega + \omega_0)
F-36
Summary
The DTFT would be of limited utility if we could not compute the transform of an infinite-length
sinusoid
Hence, the Dirac delta “function” (or something else) is a necessary evil
The Dirac delta has infinite energy (2-norm); but then again so does an infinite-length sinusoid
F-37
Discrete Time Fourier Transform
Properties
Recall: Discrete-Time Fourier Transform (DTFT)
DTFT pair
DTFT
x[n] ←→ X(ω)
F-39
The DTFT is Periodic
We defined the DTFT over an interval of ω of length 2π, but it can also be interpreted as
periodic with period 2π
X(ω) = X(ω + 2πk), k ∈ Z
Proof

X(\omega + 2\pi k) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j(\omega + 2\pi k)n} = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n} \, e^{-j 2\pi k n} = X(\omega) ✓

[Figure: X(ω) plotted over −3π ≤ ω ≤ 3π, showing its 2π-periodicity]
F-40
DTFT Frequencies
X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n}, \quad -\pi \le \omega < \pi

X(\omega) measures the similarity between the time signal x and a sinusoid e^{j\omega n} of frequency \omega

[Figure: X(ω) plotted over −π ≤ ω < π]
F-41
DTFT Frequencies and Periodicity
Periodicity of the DTFT means we can treat frequencies mod 2\pi

[Figure: X(ω) plotted over −3π ≤ ω ≤ 3π]
F-42
DTFT Frequency Ranges
Periodicity of DTFT means every length-2π interval of ω carries the same information
DTFT and Time Shift

If x[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} X(\omega), then x[n-m] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} e^{-j\omega m} X(\omega)

Proof:

\sum_{n=-\infty}^{\infty} x[n-m] \, e^{-j\omega n} = \sum_{r=-\infty}^{\infty} x[r] \, e^{-j\omega(r+m)} = e^{-j\omega m} \sum_{r=-\infty}^{\infty} x[r] \, e^{-j\omega r} = e^{-j\omega m} \, X(\omega) ✓
F-44
DTFT and Modulation
If x[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} X(\omega), then e^{j\omega_0 n} x[n] \stackrel{\mathrm{DTFT}}{\longleftrightarrow} X(\omega - \omega_0)

Remember that the DTFT is 2π-periodic, and so we can interpret the right hand side as X((\omega - \omega_0)_{2\pi})

Proof:

\sum_{n=-\infty}^{\infty} e^{j\omega_0 n} \, x[n] \, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j(\omega - \omega_0)n} = X(\omega - \omega_0) ✓
F-45
DTFT and Convolution
x h y
y[n] = x[n] * h[n] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]
If
DTFT DTFT DTFT
x[n] ←→ X(ω), h[n] ←→ H(ω), y[n] ←→ Y (ω)
then
Y (ω) = H(ω) X(ω)
F-46
The DTFT is Linear
F-47
DTFT Symmetry Properties
The sinusoids e^{j\omega n} of the DTFT have symmetry properties:

\mathrm{Re}\left( e^{j\omega n} \right) = \cos(\omega n) \quad \text{(even function)}

\mathrm{Im}\left( e^{j\omega n} \right) = \sin(\omega n) \quad \text{(odd function)}

Even signal/DTFT

x[n] = x[-n], \quad X(\omega) = X(-\omega)

Odd signal/DTFT

x[n] = -x[-n], \quad X(\omega) = -X(-\omega)
Proofs of the symmetry properties are identical to the DFT case; omitted here
F-48
DTFT Symmetry Properties Table
x[n] X(ω) Re(X(ω)) Im(X(ω)) |X(ω)| ∠X(ω)
F-49
Summary
F-50
z-Transform
Set G
z-Transform
z-Transform
The z-transform generalizes the Discrete-Time Fourier Transform (DTFT) for analyzing
infinite-length signals and systems
The theme this week is less linear algebra (vector spaces) and more polynomial algebra
G-2
Recall: DTFT
Discrete-time Fourier transform (DTFT)
X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n}, \quad -\pi \le \omega < \pi

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) \, e^{j\omega n} \, d\omega, \quad -\infty < n < \infty
The core “basis functions” of the DTFT are the sinusoids ejωn with arbitrary frequencies ω
The sinusoids ejωn are eigenvectors of LTI systems for infinite-length signals
(infinite Toeplitz matrices)
G-3
Recall: Complex Sinusoids are Eigenvectors of LTI Systems
Fact: The eigenvectors of a Toeplitz matrix (LTI system) are the complex sinusoids
s_\omega → H → \lambda_\omega s_\omega

[Figure: cos(ωn) and sin(ωn) inputs emerge from the LTI system scaled by λ_ω]
G-4
Recall: Eigenvalues of LTI Systems
The eigenvalue \lambda_\omega \in \mathbb{C} corresponding to the sinusoid eigenvector s_\omega is called the frequency response at frequency \omega since it measures how the system "responds" to s_\omega

\lambda_\omega = \sum_{n=-\infty}^{\infty} h[n] \, e^{-j\omega n} = \langle h, s_\omega \rangle = H(\omega) \quad \text{(DTFT of } h\text{)}
Recall properties of the inner product: λω grows/shrinks as h and sω become more/less similar
[Figure: s_ω → H → λ_ω s_ω; cosine and sine inputs emerge scaled by λ_ω]
G-5
Eigendecomposition and Diagonalization of an LTI System
x → H → y

y[n] = x[n] * h[n] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]

While we can't explicitly display the infinitely large matrices involved, we can use the DTFT to "diagonalize" an LTI system

Taking the DTFTs of x and h

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, e^{-j\omega n}, \quad H(\omega) = \sum_{n=-\infty}^{\infty} h[n] \, e^{-j\omega n}

we have that

Y(\omega) = X(\omega) \, H(\omega)
G-6
Recall: Complex Exponential
z^n = \left( |z| \, e^{j\omega} \right)^n = |z|^n \, e^{j\omega n} = |z|^n \left( \cos(\omega n) + j \sin(\omega n) \right)

[Figure: Im(z^n) for |z| < 1 (decaying oscillation) and |z| > 1 (growing oscillation)]
G-7
Complex Exponentials are Eigenvectors of LTI Systems
Fact: A more general set of eigenvectors of a Toeplitz matrix (LTI system) are the
complex exponentials z n , z ∈ C
z^n → H → \lambda_z z^n

[Figure: a decaying complex exponential z^n, |z| < 1, emerges from the LTI system scaled by λ_z]
G-8
Proof: Complex Exponentials are Eigenvectors of LTI Systems
z^n → h → \lambda_z z^n

Prove that complex exponentials are the eigenvectors of LTI systems simply by computing the convolution with input z^n

z^n * h[n] = \sum_{m=-\infty}^{\infty} z^{n-m} \, h[m] = \sum_{m=-\infty}^{\infty} z^n \, z^{-m} \, h[m] = \left( \sum_{m=-\infty}^{\infty} h[m] \, z^{-m} \right) z^n = \lambda_z \, z^n ✓
G-9
Eigenvalues of LTI Systems
The eigenvalue \lambda_z \in \mathbb{C} corresponding to the complex exponential eigenvector z^n is called the transfer function; it measures how the system "transfers" the input z^n to the output

\lambda_z = \sum_{n=-\infty}^{\infty} h[n] \, z^{-n} = H(z)

Recall properties of the inner product: \lambda_z grows/shrinks as h[n] becomes more/less similar to (z^{-n})^*

[Figure: z^n → H → λ_z z^n; real and imaginary parts of the input and the scaled output for |z| < 1]
G-10
z-Transform
The core “basis functions” of the z-transform are the complex exponentials z n with arbitrary
z ∈ C; these are the eigenvectors of LTI systems for infinite-length signals
Notation abuse alert: We use X(·) to represent both the DTFT X(ω) and the z-transform
X(z); they are, in fact, intimately related
G-11
z-Transform as a Function
X(z) = \sum_{n=-\infty}^{\infty} x[n] \, z^{-n}
G-12
Eigendecomposition and Diagonalization of an LTI System
x → H → y

y[n] = x[n] * h[n] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]

While we can't explicitly display the infinitely large matrices involved, we can use the z-transform to "diagonalize" an LTI system

Taking the z-transforms of x and h

X(z) = \sum_{n=-\infty}^{\infty} x[n] \, z^{-n}, \quad H(z) = \sum_{n=-\infty}^{\infty} h[n] \, z^{-n}

we have that

Y(z) = X(z) \, H(z)
G-13
Proof: Eigendecomposition and Diagonalization of an LTI System
x → h → y

Y(z) = \sum_{n=-\infty}^{\infty} y[n] \, z^{-n} = \sum_{n=-\infty}^{\infty} \left( \sum_{m=-\infty}^{\infty} x[m] \, h[n-m] \right) z^{-n} \quad \text{(let } r = n-m \text{)}

= \sum_{m=-\infty}^{\infty} \sum_{r=-\infty}^{\infty} x[m] \, h[r] \, z^{-r-m} = \left( \sum_{m=-\infty}^{\infty} x[m] \, z^{-m} \right) \left( \sum_{r=-\infty}^{\infty} h[r] \, z^{-r} \right) = X(z) \, H(z) ✓
G-14
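For finite-length signals, the z-transform is just a polynomial in z⁻¹ whose coefficients are the samples, so the convolution property Y(z) = X(z) H(z) becomes polynomial multiplication. A quick numpy check:

```python
import numpy as np

# for finite-length signals, X(z) is a polynomial in z^{-1} whose coefficients
# are the samples, so Y(z) = X(z) H(z) is just polynomial multiplication
x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0, 0.5])

y_time = np.convolve(x, h)    # time-domain convolution
y_poly = np.polymul(x, h)     # coefficient product of X(z) and H(z)

assert np.allclose(y_time, y_poly)
```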
Summary
Complex exponentials z n are the eigenfunctions of LTI systems for infinite-length signals
(Toeplitz matrices)
Forward z-transform

X(z) = \sum_{n=-\infty}^{\infty} x[n] \, z^{-n}
Transfer function H(z) equals the z-transform of the impulse response h[n]
DTFT is a special case of the z-transform (values on the unit circle in the complex z-plane)
We have been a bit cavalier in our development regarding the convergence of the infinite sum
that defines X(z)
G-17
Example 1: z-Transform of αn u[n]
Signal x_1[n] = \alpha^n u[n], \ \alpha \in \mathbb{C} (causal signal)

[Figure: x_1[n] = α^n u[n] for 0 < α < 1]

Applying the geometric sum formula

X_1(z) = \sum_{n=-\infty}^{\infty} x_1[n] \, z^{-n} = \sum_{n=0}^{\infty} \alpha^n z^{-n} = \sum_{n=0}^{\infty} \left( \alpha \, z^{-1} \right)^n = \frac{1}{1 - \alpha z^{-1}} = \frac{z}{z - \alpha}

Important: We can apply the geometric sum formula only when |\alpha z^{-1}| < 1, or |z| > |\alpha|
G-18
Region of Convergence
DEFINITION: Given a time signal x[n], the region of convergence (ROC) of its z-transform X(z) is the set of z \in \mathbb{C} such that X(z) converges

Example: For x_1[n] = \alpha^n u[n], \ \alpha \in \mathbb{C}, the ROC of X_1(z) = \frac{1}{1 - \alpha z^{-1}} is all z such that |z| > |\alpha|
G-19
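Inside the ROC, the partial sums of the defining series settle down to the closed form; a small numpy sketch for the causal exponential with |z| > |α|:

```python
import numpy as np

alpha = 0.5
z_in = 2.0        # |z| > |alpha|: inside the ROC, the series converges

n = np.arange(200)
partial = np.sum((alpha / z_in)**n)        # partial sum of (alpha z^{-1})^n
closed = 1.0 / (1.0 - alpha / z_in)        # 1 / (1 - alpha z^{-1})
assert np.isclose(partial, closed)
# for |z| < |alpha| the terms (alpha / z)^n grow without bound and the sum diverges
```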
Example 2: z-Transform of −αn u[−n − 1]
Signal x_2[n] = -\alpha^n u[-n-1], \ \alpha \in \mathbb{C} (anti-causal signal)

[Figure: x_2[n] = −α^n u[−n−1]]

X_2(z) = -\sum_{n=-\infty}^{-1} \alpha^n z^{-n} = -\sum_{m=1}^{\infty} \left( \alpha^{-1} z \right)^m = 1 - \sum_{m=0}^{\infty} \left( \alpha^{-1} z \right)^m = 1 - \frac{1}{1 - \alpha^{-1} z} = \frac{-\alpha^{-1} z}{1 - \alpha^{-1} z} = \frac{1}{1 - \alpha z^{-1}} = \frac{z}{z - \alpha}

ROC: |\alpha^{-1} z| < 1, or |z| < |\alpha|
G-20
Example Summary
x_1[n] = \alpha^n u[n] (causal signal) and x_2[n] = -\alpha^n u[-n-1] (anti-causal signal) have identical z-transforms

X_1(z) = \frac{1}{1 - \alpha z^{-1}} = X_2(z)

but different ROCs: |z| > |\alpha| for X_1 and |z| < |\alpha| for X_2
G-21
ROC: What We’ve Learned So Far
You must always state the ROC when you state a z-transform
G-22
Example 3: z-Transform of \alpha_1^n u[n] - \alpha_2^n u[-n-1]

Signal x_3[n] = \alpha_1^n u[n] - \alpha_2^n u[-n-1], \ \alpha_1, \alpha_2 \in \mathbb{C}

[Figure: x_3[n], with causal part governed by α_1 and anti-causal part by α_2]

X_3(z) = \frac{1}{1 - \alpha_1 z^{-1}} + \frac{1}{1 - \alpha_2 z^{-1}}

ROC: |z| > |\alpha_1| and |z| < |\alpha_2|
G-23
Example 3: A Tale of Two ROCs

Signal x_3[n] = \alpha_1^n u[n] - \alpha_2^n u[-n-1], \ \alpha_1, \alpha_2 \in \mathbb{C}

z-transform

X_3(z) = \frac{1}{1 - \alpha_1 z^{-1}} + \frac{1}{1 - \alpha_2 z^{-1}}, \quad |\alpha_1| < |z| < |\alpha_2|
G-24
Properties of the ROC
The ROC is a connected annulus (doughnut) in the z-plane centered on the origin z = 0;
that is, the ROC is of the form r1 < |z| < r2
If x[n] has finite duration, then the ROC is the entire z-plane (except possibly z = 0 or z = ∞)
G-25
Summary
The region of convergence (ROC) of a z-transform is the set of z ∈ C such that it converges
The ROC is a connected annulus (doughnut) in the z-plane centered on the origin z = 0; that is,
the ROC is of the form r1 < |z| < r2
• If x[n] has finite duration, then the ROC is the entire z-plane (except possibly z = 0 or z = ∞)
• If x[n] is causal, then the ROC is the outside of a disk
• If x[n] is anti-causal, then the ROC is the inside of a disk
G-26
z-Transform
Transfer Function
Poles and Zeros
Table of Contents
G-28
z-Transform
Transfer Function
A General Class of LTI Systems
The most general class of practical causal LTI systems (for infinite-length signals) consists of the
cascade of a moving average system with a recursive average system
Key elements:
• Delays (z −1 )
• Scaling by {ai }, {bi }
• Summing
G-30
Input/Output Equation for General LTI System (Time Domain)
y[n] + a_1 y[n-1] + \cdots + a_N y[n-N] = b_0 x[n] + b_1 x[n-1] + \cdots + b_M x[n-M]
G-31
Input/Output Equation for General LTI System (z-Transform Domain)
Y(z) + a_1 z^{-1} Y(z) + \cdots + a_N z^{-N} Y(z) = b_0 X(z) + b_1 z^{-1} X(z) + \cdots + b_M z^{-M} X(z)
G-32
Transfer Function of an LTI System
x h y
y[n] = x[n] * h[n] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]

If

x[n] \stackrel{\mathcal{Z}}{\longleftrightarrow} X(z), \quad h[n] \stackrel{\mathcal{Z}}{\longleftrightarrow} H(z), \quad y[n] \stackrel{\mathcal{Z}}{\longleftrightarrow} Y(z)

then

Y(z) = H(z) \, X(z)

Transfer function

H(z) = \frac{Y(z)}{X(z)} = \frac{\text{output}}{\text{input}}
G-33
Transfer Function of General LTI System
Y(z) \left[ 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N} \right] = X(z) \left[ b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_M z^{-M} \right]

H(z) = \frac{Y(z)}{X(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_M z^{-M}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}
G-34
Rational Transfer Functions
The transfer function of the general LTI system is a rational function of polynomials in z −1
H(z) = Y(z)/X(z) = (b0 + b1 z^{-1} + b2 z^{-2} + · · · + bM z^{-M}) / (1 + a1 z^{-1} + a2 z^{-2} + · · · + aN z^{-N})

Can easily convert H(z) to positive powers of z by multiplying by (z^M/z^M)(z^N/z^N) = 1

H(z) = Y(z)/X(z) = (z^N/z^M) · (b0 z^M + b1 z^{M−1} + b2 z^{M−2} + · · · + bM) / (z^N + a1 z^{N−1} + a2 z^{N−2} + · · · + aN)
G-35
z-Transform
Poles and Zeros
Poles and Zeros
Factor the numerator and denominator polynomials (roots command in Matlab)

H(z) = Y(z)/X(z) = (z^N/z^M) · (b0 z^M + b1 z^{M−1} + b2 z^{M−2} + · · · + bM) / (z^N + a1 z^{N−1} + a2 z^{N−2} + · · · + aN)
     = (z^N/z^M) · (z − ζ1)(z − ζ2) · · · (z − ζM) / [(z − p1)(z − p2) · · · (z − pN)]

Example:

H(z) = Y(z)/X(z) = (1 + 3 z^{-1} + 11/4 z^{-2} + 3/4 z^{-3}) / (1 − z^{-1} + 1/2 z^{-2})
     = (z^2/z^3) · (z^3 + 3 z^2 + 11/4 z + 3/4) / (z^2 − z + 1/2)
G-38
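The factoring step for the example above can be sketched in Python/NumPy instead of Matlab's `roots` (coefficient values taken from the example transfer function):

```python
import numpy as np

# H(z) = (1 + 3 z^-1 + 11/4 z^-2 + 3/4 z^-3) / (1 - z^-1 + 1/2 z^-2)
b = [1, 3, 11/4, 3/4]   # numerator coefficients
a = [1, -1, 1/2]        # denominator coefficients

zeros = np.roots(b)     # roots of z^3 + 3 z^2 + 11/4 z + 3/4
poles = np.roots(a)     # roots of z^2 - z + 1/2

print(np.sort_complex(zeros))   # zeros at z = -1.5, -1, -0.5
print(np.sort_complex(poles))   # complex pole pair at z = 0.5 -/+ 0.5j
```

Note that both poles have magnitude 1/√2 < 1, so this causal system is BIBO stable.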
Example: Poles and Zeros (2)
[Pole-zero diagram of the example system in the z-plane]
G-39
Example: Poles and Zeros (3)
We can obtain the frequency response by evaluating the transfer function H(z) on the unit
circle (freqz in Matlab)
[Plot: magnitude response |H(ω)| for ω ∈ [−π, π], alongside the system block diagram]
G-40
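The same unit-circle evaluation can be sketched with scipy's `freqz` (the SciPy analogue of Matlab's `freqz`; coefficients from the running example):

```python
import numpy as np
from scipy import signal

b = [1, 3, 11/4, 3/4]
a = [1, -1, 1/2]

# Evaluate H(z) at z = e^{j omega} for 512 frequencies in [0, pi)
w, H = signal.freqz(b, a, worN=512)

# At omega = 0 (z = 1): H(1) = (1 + 3 + 11/4 + 3/4) / (1 - 1 + 1/2) = 15
print(abs(H[0]))   # 15.0
# At omega = pi (z = -1) the response is 0, since there is a zero at z = -1
```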
Sketching |H(z)| based on the Poles and Zeros
Given the poles and zeros, there is an elegant geometrical interpretation of the value of |H(z)|
|H(z)| = |z^N/z^M| · (|z − ζ1| |z − ζ2| · · · |z − ζM|) / (|z − p1| |z − p2| · · · |z − pN|)

Therefore (on the unit circle, where |z^N/z^M| = 1)

|H(z)| = (product of distances from z to all zeros) / (product of distances from z to all poles)
G-41
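A numeric sanity check of the distance-product picture (the test point on the unit circle is an arbitrary choice; there the leftover |z|^{N−M} factor equals 1):

```python
import numpy as np

b = [1, 3, 11/4, 3/4]            # running example: numerator in z^-1
a = [1, -1, 1/2]                 # denominator in z^-1
zeros, poles = np.roots(b), np.roots(a)

z = np.exp(1j * 0.4 * np.pi)     # an arbitrary point on the unit circle

# Geometric formula: ratio of distance products
geometric = np.prod(np.abs(z - zeros)) / np.prod(np.abs(z - poles))

# Direct evaluation of |H(z)| = |B(z^-1) / A(z^-1)|
direct = abs(np.polyval(b, 1/z) / np.polyval(a, 1/z))

print(np.isclose(geometric, direct))   # True
```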
Example: Poles and Zeros (4)
[Plot: magnitude response |H(ω)| for ω ∈ [−π, π], alongside the pole-zero diagram]
G-42
LTI System Design = Pole and Zero Placement
Locations of the poles and zeros characterize an LTI system
Hence, design of an LTI system is equivalent to designing where to place its poles and zeros
G-43
FIR Filters Have Only Zeros
H(z) = Y(z)/X(z) = b0 + b1 z^{-1} + b2 z^{-2} + · · · + bM z^{-M}
     = z^{-M} b0 (z − ζ1)(z − ζ2) · · · (z − ζM)
G-44
Summary
The most general class of practical causal LTI systems (for infinite-length signals) consists of the
cascade of a moving average system with a recursive average system
Y (z)
Transfer function H(z) = X(z)
G-45
z-Transform Properties
Recall: Forward z-Transform
z-transform pair

x[n] ↔ X(z)
G-47
z-Transform and DTFT
Forward z-transform of the signal x[n]
X(z) = Σ_{n=−∞}^{∞} x[n] z^{-n},   z ∈ C, z ∈ ROC
The DTFT is a special case of the z-transform (values on the unit circle in the complex z-plane)
G-48
z-Transform is Linear
G-49
z-Transform and Time Shift
If x[n] and X(z) are a z-transform pair then

x[n − m] ↔ z^{-m} X(z)

Proof: Σ_{n=−∞}^{∞} x[n − m] z^{-n} = z^{-m} Σ_{r=−∞}^{∞} x[r] z^{-r} = z^{-m} X(z) ✓   (substitute r = n − m)
G-50
z-Transform and Modulation
If x[n] ↔ X(z), then z0^n x[n] ↔ X(z/z0)

Proof: Σ_{n=−∞}^{∞} z0^n x[n] z^{-n} = Σ_{n=−∞}^{∞} x[n] (z/z0)^{-n} = X(z/z0) ✓
G-51
z-Transform and Conjugation
If x[n] ↔ X(z), then x∗[n] ↔ X∗(z∗)

Proof: Σ_{n=−∞}^{∞} x∗[n] z^{-n} = (Σ_{n=−∞}^{∞} x[n] (z∗)^{-n})∗ = X∗(z∗) ✓
G-52
z-Transform and Time Reversal
If x[n] ↔ X(z), then x[−n] ↔ X(z^{-1})

The ROC is “inverted” in the sense that if the ROC of X(z) is r1 < |z| < r2,
then the ROC of X(z^{-1}) is 1/r2 < |z| < 1/r1
G-53
z-Transform and Convolution
x → h → y

y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]

If

x[n] ↔ X(z),   h[n] ↔ H(z),   y[n] ↔ Y(z)
then
Y (z) = H(z) X(z), ROCY = ROCX ∩ ROCH
G-54
z-Transform, BIBO Stability, and Causality
x → h → y

y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
Fact: An LTI system is BIBO stable iff the ROC of H(z) includes the unit circle |z| = 1
G-55
Proof: z-Transform and BIBO Stability (1)
x → h → y

Recall that the ROC of H(z) is defined as the set of z ∈ C such that h[n] z^{-n} is absolutely
summable

S(z) = Σ_{n=−∞}^{∞} |h[n]| |z^{-n}| < ∞

Suppose that the system is BIBO stable, which implies that ‖h‖1 < ∞. What can we say about
the ROC?

• When |z| = 1, we see that S(z)|_{|z|=1} = Σ_{n=−∞}^{∞} |h[n]| < ∞ since ‖h‖1 < ∞, so the ROC includes the unit circle
G-56
Proof: z-Transform and BIBO Stability (2)

x → h → y

Recall that the ROC of H(z) is defined as the set of z ∈ C such that h[n] z^{-n} is absolutely
summable

S(z) = Σ_{n=−∞}^{∞} |h[n]| |z^{-n}| < ∞

Suppose that the ROC of H(z) includes the unit circle. What can we say about h[n] and BIBO
stability?

• Since the ROC of H(z) includes the unit circle, S(1) < ∞. But S(1) = Σ_{n=−∞}^{∞} |h[n]| = ‖h‖1, so the system is BIBO stable
G-57
z-Transform, Poles, and BIBO Stability
x → h → y
Fact: An LTI system is BIBO stable iff the ROC of H(z) includes the unit circle |z| = 1
Corollary: A causal LTI system is BIBO stable iff all of its poles are inside the unit circle
• Since the system is causal, the ROC extends outward from the pole that is furthest from the origin
• Since the ROC contains the unit circle |z| = 1, the pole p furthest from the origin must have |p| < 1
G-58
Useful z-Transforms
Signal x[n]           z-Transform X(z)        ROC
δ[n]                  1                       z ∈ C
u[n]                  1/(1 − z^{-1})          |z| > 1
α^n u[n]              1/(1 − α z^{-1})        |z| > |α|
−α^n u[−n − 1]        1/(1 − α z^{-1})        |z| < |α|
G-59
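The geometric-series pair α^n u[n] ↔ 1/(1 − α z^{-1}) can be checked numerically at any point inside its ROC (α and the test point z below are arbitrary illustrative choices):

```python
import numpy as np

alpha = 0.8
z = 1.2 * np.exp(1j * 0.3)          # |z| = 1.2 > |alpha|, so z is in the ROC

n = np.arange(2000)                 # truncated version of the infinite sum
partial_sum = np.sum((alpha / z) ** n)   # sum of alpha^n z^-n over n >= 0

closed_form = 1 / (1 - alpha / z)   # = 1 / (1 - alpha z^-1)
print(np.isclose(partial_sum, closed_form))   # True
```

Outside the ROC (|z| < 0.8 here) the partial sums grow without bound instead of converging.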
Summary
An LTI system is BIBO stable iff the ROC of H(z), the z-transform of the impulse response h[n],
includes the unit circle |z| = 1
A causal LTI system is BIBO stable iff all of its poles are inside the unit circle
G-60
Inverse z-Transform
Recall: Forward z-Transform
G-62
Inverse z-Transform via Complex Integral
There exists a similar formula for the inverse z-transform via a contour integral in the complex
z-plane

x[n] = ∮_C X(z) z^n dz/(j2πz)

where C is a closed contour inside the ROC that encircles the origin
Evaluation of such integrals is fun, but beyond the scope of this course
G-63
Inverse z-Transform via Factorization
Any rational z-transform X(z) can be factored into its zeros and poles

X(z) = (z − ζ1)(z − ζ2) · · · (z − ζM) / [(z − p1)(z − p2) · · · (z − pN)]
     = z^{M−N} (1 − ζ1 z^{-1})(1 − ζ2 z^{-1}) · · · (1 − ζM z^{-1}) / [(1 − p1 z^{-1})(1 − p2 z^{-1}) · · · (1 − pN z^{-1})]

Inverting the z-transform for one pole 1/(1 − p1 z^{-1}) is easy (assume causal inverse)

p1^n u[n] ↔ 1/(1 − p1 z^{-1})

Fortunately, there is a method to decompose X(z) into a sum, rather than a product, of poles

X(z) = C1/(1 − p1 z^{-1}) + C2/(1 − p2 z^{-1}) + · · · + CN/(1 − pN z^{-1})

Then we can just invert the z-transform of each term separately and sum the results
G-64
Partial Fraction Expansion
The partial fraction expansion decomposes a rational function of z^{-1}

X(z) = (polynomial in z^{-1}) / [(1 − p1 z^{-1})(1 − p2 z^{-1}) · · · (1 − pN z^{-1})]
There are many algorithms to compute a partial fraction expansion; here we introduce one of the
simplest using a running example
Note: We will explain partial fractions only for non-repeated poles; see the supplementary
material for the case when there are repeated poles (not difficult!)
G-65
Partial Fraction Expansion Algorithm – Step 1
Step 1: Split off any direct terms so that the remaining rational part is strictly proper

X(z) = 6 + (−1 + 3 z^{-1}) / (1 − 1/6 z^{-1} − 1/6 z^{-2}) = 6 + X′(z)

We will work on X′(z) and then recombine it with the 6 at the end
G-66
Partial Fraction Expansion Algorithm – Step 2
Step 2: Factor the denominator polynomial into poles (no need to factor the numerator)
X′(z) = (−1 + 3 z^{-1}) / (1 − 1/6 z^{-1} − 1/6 z^{-2}) = (−1 + 3 z^{-1}) / [(1 − 1/2 z^{-1})(1 + 1/3 z^{-1})]
G-67
Partial Fraction Expansion Algorithm – Step 3
Step 3: Assuming that no poles are repeated (same location in the z-plane), break X 0 (z) into a
sum of terms, one for each pole
X′(z) = (−1 + 3 z^{-1}) / [(1 − 1/2 z^{-1})(1 + 1/3 z^{-1})] = C1/(1 − 1/2 z^{-1}) + C2/(1 + 1/3 z^{-1})
Note: Repeated poles just require that we add extra terms to the sum. The details are not
difficult – see the Supplementary Resources
G-68
Partial Fraction Expansion Algorithm – Step 4
Step 4: To determine C1 and C2 , bring the sum of terms back to a common denominator and
compare its terms to terms of the same power in X 0 (z)
X′(z) = (−1 + 3 z^{-1}) / [(1 − 1/2 z^{-1})(1 + 1/3 z^{-1})] = C1/(1 − 1/2 z^{-1}) + C2/(1 + 1/3 z^{-1})

powers of 1:      −1 = C1 + C2
powers of z^{-1}:  3 = C1/3 − C2/2

Solving yields C1 = 3, C2 = −4
Step 5: Check your work by bringing the partial fractions result to a common denominator and
making sure it agrees with what you started with

X(z) = 6 + 3/(1 − 1/2 z^{-1}) + (−4)/(1 + 1/3 z^{-1})
     = (5 + 2 z^{-1} − z^{-2}) / (1 − 1/6 z^{-1} − 1/6 z^{-2}) ✓
G-70
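As a cross-check, scipy's `residuez` (the SciPy analogue of Matlab's `residuez`) automates Steps 2-4 for the running example:

```python
import numpy as np
from scipy import signal

# X(z) = (5 + 2 z^-1 - z^-2) / (1 - 1/6 z^-1 - 1/6 z^-2)
b = [5, 2, -1]
a = [1, -1/6, -1/6]

r, p, k = signal.residuez(b, a)
# r: residues C_i, p: poles, k: direct terms
# Expect residues {3, -4} at poles {1/2, -1/3} plus direct term 6,
# matching the hand computation above
print(r, p, k)
```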
Back To Our Regularly Scheduled Program . . .
Thanks to the partial fractions expansion, we can write

X(z) = (5 + 2 z^{-1} − z^{-2}) / (1 − 1/6 z^{-1} − 1/6 z^{-2}) = 6 + 3/(1 − 1/2 z^{-1}) + (−4)/(1 + 1/3 z^{-1})

Inverting term-by-term (assuming causal signals) gives

x[n] = 6 δ[n] + 3 (1/2)^n u[n] − 4 (−1/3)^n u[n]

[Stem plot of x[n] for −5 ≤ n ≤ 10]
G-71
Summary
In practice, we invert the z-transform by computing a partial fraction expansion and then
inverting term-by-term
G-72
z-Transform
Examples
z-Transform Tools in Matlab
There are many useful commands in Matlab related to the z-transform. Click here to view a
video demonstration.
• freqz – to plot the DTFT frequency response corresponding to a z-transform transfer function
• ···
G-74
Example LTI System
[Block diagram of the example LTI system]
G-75
Discrete-Time Filters
Set H
Discrete-Time Filters
Putting LTI Systems to Work
x → h → y

y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
Key questions:
H-2
What Do LTI Systems Do? Recall Eigenanalysis
LTI system eigenvectors: sω[n] = e^{jωn}

LTI system eigenvalues: λω = H(ω) = Σ_{n=−∞}^{∞} h[n] e^{-jωn} (frequency response)

sω → h → λω sω

[Plots: eigenvector inputs cos(ωn) and sin(ωn) emerge scaled as λω cos(ωn) and λω sin(ωn)]
H-3
LTI Systems Filter Signals
x → h → y

x[n] = ∫_{−π}^{π} X(ω) e^{jωn} dω/2π   →   y[n] = ∫_{−π}^{π} H(ω) X(ω) e^{jωn} dω/2π

An LTI system processes a signal x[n] by amplifying or attenuating the sinusoids in its Fourier
representation (DTFT) X(ω) by the complex factor H(ω)
H-4
Design Parameters of Discrete-Time Filters (LTI Systems)
x → H → y

Frequency response: H(ω)
H-5
Filter Archetypes: Low-Pass
Ideal low-pass filter

[Plot: example low-pass impulse response h[n]]
[Plot: example frequency response |H(ω)| for ω ∈ [−π, π]]
H-6
Filter Archetypes: High-Pass
Ideal high-pass filter

[Plot: example high-pass impulse response h[n]]
[Plot: example frequency response |H(ω)| for ω ∈ [−π, π]]
H-7
Filter Archetypes: Band-Pass
Ideal band-pass filter

[Plot: example band-pass impulse response h[n]]
[Plot: example frequency response |H(ω)| for ω ∈ [−π, π]]
H-8
Filter Archetypes: Band-Stop
Ideal band-stop filter

[Plot: example band-stop impulse response h[n]]
[Plot: example frequency response |H(ω)| for ω ∈ [−π, π]]
H-9
Summary
Now that we understand what LTI systems do, we can design them to accomplish certain tasks
An LTI system processes a signal x[n] by amplifying or attenuating the sinusoids in its Fourier
representation (DTFT)
We will emphasize infinite-length signals, but the situation is similar for finite-length signals
H-10
Discrete-Time Filter Design
Recall Discrete-Time Filter
x → h → y

y[n] = x[n] ∗ h[n] = Σ_{m=−∞}^{∞} h[n − m] x[m]
Recall the filter archetypes: ideal low-pass, high-pass, band-pass, band-stop filters
H-12
Ideal Lowpass Filter
Ideal low-pass filter frequency response   H(ω) = { 1 for −ωc ≤ ω ≤ ωc;  0 otherwise }

[Plot: |H(ω)| equal to 1 for |ω| ≤ ωc and 0 elsewhere on [−π, π]]
H-13
Ideal Lowpass Filter
Ideal low-pass filter frequency response   H(ω) = { 1 for −ωc ≤ ω ≤ ωc;  0 otherwise }

Impulse response (inverse DTFT): h[n] = sin(ωc n)/(π n), with h[0] = ωc/π

[Plot: impulse response h[n]]

Problems:
• System is not BIBO stable! (Σn |h[n]| = ∞)
• Infinite computational complexity (H(z) is not a rational function)
H-14
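The divergence of Σ|h[n]| can be seen numerically: the partial sums of the ideal low-pass impulse response keep growing, roughly like log N (ωc = π/2 below is an arbitrary cutoff for illustration):

```python
import numpy as np

wc = np.pi / 2   # cutoff frequency (arbitrary choice)

def h(n):
    """Ideal low-pass impulse response h[n] = sin(wc n)/(pi n), with h[0] = wc/pi."""
    n = np.asarray(n, dtype=float)
    safe = np.where(n == 0, 1.0, n)   # avoid dividing by zero at n = 0
    return np.where(n == 0, wc / np.pi, np.sin(wc * n) / (np.pi * safe))

# Partial sums of sum_n |h[n]| keep growing: h is not absolutely summable
s_small = np.abs(h(np.arange(-10_000, 10_001))).sum()
s_large = np.abs(h(np.arange(-100_000, 100_001))).sum()
print(s_small < s_large)   # True: the sum diverges, so no BIBO stability
```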
Filter Specification
Find a filter of minimum complexity that meets a given specification
Clearly, the tighter the specs, the more complex the filter
H-15
Two Classes of Discrete-Time Filters
Infinite impulse response (IIR) filters
• Uses both moving and recursive averages
• H(z) has both poles and zeros
• Related to “analog” filter design using resistors, capacitors, and inductors
• Generally have the lowest complexity to meet a given spec
H-16
Summary
Filter design: Find a filter of minimum complexity that meets a given spec
Two different types of filters (IIR, FIR) mean two different types of filter design
H-17
IIR Filter Design
IIR Filters
H(z) = Y(z)/X(z) = (b0 + b1 z^{-1} + b2 z^{-2} + · · · + bM z^{-M}) / (1 + a1 z^{-1} + a2 z^{-2} + · · · + aN z^{-N})
     = b0 z^{N−M} (z − ζ1)(z − ζ2) · · · (z − ζM) / [(z − p1)(z − p2) · · · (z − pN)]
We design an IIR filter by specifying the locations of its poles and zeros in the z-plane
Generally can satisfy a spec with lower complexity than FIR filters
H-19
IIR Filters from Analog Filters
In contrast to FIR filter design, IIR filters are typically designed by a two-step procedure that is
slightly ad hoc
Step 1: Design an analog filter (for resistors, capacitors, and inductors) using the Laplace
transform HL (s) (this theory is well established but well beyond the scope of this course)
Step 2: Transform the analog filter into a discrete-time filter using the bilinear transform
(a conformal map from complex analysis)
s = c (z − 1)/(z + 1)
H-20
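Step 2 can be sketched with scipy's `bilinear` (the SciPy analogue of the Matlab function; SciPy uses c = 2fs). As an illustrative analog prototype, take the first-order low-pass H_L(s) = 1/(s + 1), an assumption for the example rather than anything from the slides:

```python
import numpy as np
from scipy import signal

# Analog prototype (hypothetical example): H_L(s) = 1 / (s + 1)
b_analog, a_analog = [1.0], [1.0, 1.0]

# Bilinear transform s = c (z - 1)/(z + 1) with c = 2 fs
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs=2.0)

# z = 1 maps to s = 0, so the DC gain of the analog filter is preserved
w, H = signal.freqz(b_digital, a_digital, worN=[0.0])
print(abs(H[0]))   # 1.0 (up to rounding), same as |H_L(j0)|
```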
Three Important Classes of IIR Filters
Butterworth filters
• butter command in Matlab
• No ripples (oscillations) in |H(ω)|
• Gentlest transition from pass-band to stop-band for a given order
Chebyshev filters
• cheby1 and cheby2 commands in Matlab
• Ripples in either pass-band or stop-band

Elliptic filters
• ellip command in Matlab
• Ripples in both pass-band and stop-band; sharpest transition for a given order
H-21
Butterworth IIR Filter
“Maximally flat” frequency response
• Largest number of derivatives of |H(ω)| equal to 0 at ω = 0 and π

N zeros and N poles
• Zeros are all at z = −1
• Poles are located on a circle inside the unit circle

Example: N = 6 using butter command in Matlab

[Plots: impulse response h[n], magnitude response |H(ω)|, phase response ∠H(ω), and pole-zero diagram in the z-plane]
H-22
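The claimed pole-zero structure is easy to verify with scipy's `butter` (the SciPy analogue of the Matlab command; the cutoff 0.3 is an arbitrary choice, normalized so that 1 is Nyquist):

```python
import numpy as np
from scipy import signal

# 6th-order digital Butterworth low-pass, in zero-pole-gain form
z, p, k = signal.butter(6, 0.3, output='zpk')

print(np.allclose(z, -1))        # True: all 6 zeros sit at z = -1
print(np.all(np.abs(p) < 1))     # True: all poles strictly inside the unit circle
```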
Chebyshev Type 1 IIR Filter
Ripples/oscillations (of equal amplitude) in the pass-band and not in the stop-band

N zeros and N poles
• Zeros are all at z = −1
• Poles are located on an ellipse inside the unit circle

Example: N = 6 using cheby1 command in Matlab

[Plots: impulse response h[n], magnitude response |H(ω)|, phase response ∠H(ω), and pole-zero diagram in the z-plane]
H-23
Chebyshev Type 2 IIR Filter
Ripples/oscillations (of equal amplitude) in the stop-band and not in the pass-band

N zeros and N poles
• Zeros are distributed on the unit circle
• Poles are located on an ellipse inside the unit circle

Example: N = 6 using cheby2 command in Matlab

[Plots: impulse response h[n], magnitude response |H(ω)|, phase response ∠H(ω), and pole-zero diagram in the z-plane]
H-24
Elliptic IIR Filter
Ripples/oscillations in both the stop-band and pass-band

N zeros and N poles
• Zeros are clustered on the unit circle near ωp
• Poles are clustered close to the unit circle near ωp

Example: N = 6 using ellip command in Matlab

[Plots: impulse response h[n], magnitude response |H(ω)|, phase response ∠H(ω), and pole-zero diagram in the z-plane]
H-25
IIR Filter Comparison
Butterworth (black), Chebyshev 1 (blue), Chebyshev 2 (red), Elliptic (green)
[Plots: magnitude responses |H(ω)| and phase responses ∠H(ω) of the four filters overlaid]
H-26
IIR Filter Phase Shift
Note how the elliptic filter’s phase response is non-linear across the pass-band.

Non-linear phase shift introduces distortion. Consider a signal x[n] that is a sum of two sinusoids:

[Plot: input x[n], a sum of two sinusoids, for 0 ≤ n ≤ 100]

Both are in the pass-band, so the output y[n] should be the same as the input. But the non-linear
phase shift puts the two sinusoids out of alignment:

[Plot: distorted output y[n] for 0 ≤ n ≤ 100]
H-27
Summary
IIR filters use both moving and recursive averages and have both poles and zeros
Typically designed by transforming an analog filter design (for use with resistors, capacitors, and
inductors) into discrete-time via the bilinear transform
Useful Matlab commands for choosing the filter order N that meets a given spec:
buttord, cheb1ord, cheb2ord, ellipord
H-28
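The same order-selection functions exist in SciPy; a sketch for the tight spec ωp = 0.3π, ωs = 0.35π (the ripple and attenuation numbers are arbitrary illustrative choices, not from the slides):

```python
from scipy import signal

# Edges normalized so that 1 = Nyquist; 1 dB pass-band ripple, 40 dB stop-band attenuation
wp, ws, gpass, gstop = 0.3, 0.35, 1, 40

orders = {
    'butter': signal.buttord(wp, ws, gpass, gstop)[0],
    'cheby1': signal.cheb1ord(wp, ws, gpass, gstop)[0],
    'cheby2': signal.cheb2ord(wp, ws, gpass, gstop)[0],
    'ellip':  signal.ellipord(wp, ws, gpass, gstop)[0],
}
print(orders)   # the elliptic design meets the spec with the lowest order
```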
FIR Filter Design
FIR Filters
H(z) = Y(z)/X(z) = b0 + b1 z^{-1} + b2 z^{-2} + · · · + bM z^{-M}
     = z^{-M} b0 (z − ζ1)(z − ζ2) · · · (z − ζM)
Generally require a higher complexity to meet a given spec than an IIR filter
H-30
FIR Filters Are Interesting
Unlike IIR filters and all analog filters, FIR filters can have (generalized) linear phase
• A nonlinear phase response ∠H(ω) distorts signals as they pass through the filter
• Recall that a linear phase shift in the DTFT is equivalent to a simple time shift in the time domain
H-31
Impulse Response of an FIR Filter
We will focus here on low-pass filters with odd-length, even-symmetric impulse response
• Odd length: M + 1 is odd (M is even)
• Even symmetric (around the center of the filter): h[n] = h[M − n], n = 0, 1, . . . , M
Example: Length M + 1 = 21
[Stem plot: an example even-symmetric h[n] for 0 ≤ n ≤ 20]
H-33
Frequency Response of a Symmetric FIR Filter (1)
Compute frequency response when h[n] is odd-length and even-symmetric (h[n] = h[M − n])

H(ω) = Σ_{n=0}^{M} h[n] e^{-jωn}
     = Σ_{n=0}^{M/2−1} h[n] e^{-jωn} + h[M/2] e^{-jωM/2} + Σ_{n=M/2+1}^{M} h[n] e^{-jωn}
     = Σ_{n=0}^{M/2−1} h[n] e^{-jωn} + h[M/2] e^{-jωM/2} + Σ_{n=M/2+1}^{M} h[M − n] e^{-jωn}
     = Σ_{n=0}^{M/2−1} h[n] e^{-jωn} + h[M/2] e^{-jωM/2} + Σ_{r=0}^{M/2−1} h[r] e^{-jω(M−r)}   (r = M − n)
     = h[M/2] e^{-jωM/2} + Σ_{n=0}^{M/2−1} h[n] (e^{-jωn} + e^{jω(n−M)})
H-34
Frequency Response of a Symmetric FIR Filter (2)
Compute frequency response when h[n] is odd-length and even-symmetric (h[n] = h[M − n])

H(ω) = h[M/2] e^{-jωM/2} + Σ_{n=0}^{M/2−1} h[n] (e^{-jωn} + e^{jω(n−M)})
     = h[M/2] e^{-jωM/2} + Σ_{n=0}^{M/2−1} h[n] e^{-jωM/2} (e^{-jω(n−M/2)} + e^{jω(n−M/2)})
     = [h[M/2] + 2 Σ_{n=0}^{M/2−1} h[n] cos(ω(n − M/2))] e^{-jωM/2}
     = A(ω) e^{-jωM/2}
H-35
Generalized Linear Phase FIR Filters
Frequency response when h[n] is odd-length and even-symmetric (h[n] = h[M − n])

H(ω) = A(ω) e^{-jωM/2}

with

A(ω) = h[M/2] + 2 Σ_{n=0}^{M/2−1} h[n] cos(ω(n − M/2))

A(ω) is called the amplitude of the filter; it plays a role like |H(ω)| since |H(ω)| = |A(ω)|
H-36
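A numeric check of the amplitude formula; the symmetric h[n] here comes from scipy's `remez` (an arbitrary choice of even-symmetric low-pass, just to have a concrete filter):

```python
import numpy as np
from scipy import signal

# An odd-length, even-symmetric h[n]: a length-21 equiripple low-pass
h = signal.remez(21, [0, 0.15, 0.175, 0.5], [1, 0], fs=1.0)
M = len(h) - 1   # h[n] = h[M - n]

w, H = signal.freqz(h, worN=256)

# A(omega) = h[M/2] + 2 sum_{n=0}^{M/2-1} h[n] cos(omega (n - M/2))
A = h[M // 2] + 2 * sum(h[n] * np.cos(w * (n - M / 2)) for n in range(M // 2))

print(np.allclose(np.abs(A), np.abs(H)))             # True: |H| = |A|
print(np.allclose(H, A * np.exp(-1j * w * M / 2)))   # True: linear phase factor
```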
FIR Filter Design
Frequency response when h[n] is odd-length and even-symmetric (h[n] = h[M − n])

H(ω) = A(ω) e^{-jωM/2}

with

A(ω) = h[M/2] + 2 Σ_{n=0}^{M/2−1} h[n] cos(ω(n − M/2))
H-37
Optimal FIR Filter Design
Goal: Find the optimal A(ω) (in terms of shortest length M + 1) that meets the specs
A(ω) = h[M/2] + 2 Σ_{n=0}^{M/2−1} h[n] cos(ω(n − M/2))
Parameters under our control: The M/2 + 1 filter taps h[n], n = 0, 1, . . . , M/2
H-38
Key Ingredients of Optimal FIR Filter Design
Goal: Find the optimal A(ω) (in terms of shortest length M + 1) that meets the specs
A(ω) = h[M/2] + 2 Σ_{n=0}^{M/2−1} h[n] cos(ω(n − M/2))
H-39
Remez Exchange Algorithm for Optimal FIR Filter Design
Goal: Find the optimal A(ω) (in terms of shortest length M + 1) that meets the specs
A(ω) = h[M/2] + 2 Σ_{n=0}^{M/2−1} h[n] cos(ω(n − M/2))
Alternation Theorem: The optimal A(ω) will touch the error bounds M/2 + 2 times in the
interval 0 ≤ ω < π
Parks and McClellan proposed the Remez Exchange Algorithm to find the h[n] such that A(ω)
satisfies the alternation theorem
Matlab command firpm and firpmord (be careful with the parameters)
H-40
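In SciPy the Parks-McClellan design is `signal.remez`; a sketch of the length-21 specification used in the examples below, with the band edges ω = 0.30π and 0.35π converted to cycles/sample (fs = 1, so the band axis runs from 0 to 0.5):

```python
import numpy as np
from scipy import signal

# Low-pass spec: pass-band edge 0.30*pi, stop-band edge 0.35*pi, length M+1 = 21
# With fs = 1, an edge omega maps to omega/(2*pi): 0.30*pi -> 0.15, 0.35*pi -> 0.175
h = signal.remez(21, bands=[0, 0.15, 0.175, 0.5], desired=[1, 0], fs=1.0)

# The result is even-symmetric, hence (generalized) linear phase
print(np.allclose(h, h[::-1]))   # True

# The stop-band error is equiripple (alternation theorem)
w, H = signal.freqz(h, worN=4096)
print(np.abs(H[w >= 0.35 * np.pi]).max())   # the equiripple stop-band deviation
```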
Example 1: Optimal FIR Filter Design (1)
Optimal low-pass filter of length M + 1 = 21 with ωp = 0.30π, ωs = 0.35π
[Plots: magnitude response |H(ω)|, impulse response h[n], and pole-zero diagram]
H-41
Example 1: Optimal FIR Filter Design (2)
Optimal low-pass filter of length M + 1 = 21 with ωp = 0.30π, ωs = 0.35π
[Plots: magnitude response |H(ω)| on a log scale (ripple near 10^{-1}), impulse response h[n], and pole-zero diagram]
H-42
Example 2: Optimal FIR Filter Design (1)
Optimal low-pass filter of length M + 1 = 101 with ωp = 0.30π, ωs = 0.35π
[Plots: magnitude response |H(ω)|, impulse response h[n], and pole-zero diagram]
H-43
Example 2: Optimal FIR Filter Design (2)
Optimal low-pass filter of length M + 1 = 101 with ωp = 0.30π, ωs = 0.35π
[Plots: magnitude response |H(ω)| on a log scale (ripple between 10^{-2} and 10^{-4}), impulse response h[n], and pole-zero diagram]
H-44
Matlab Example: Optimal FIR Filter Design
• Length M + 1 = 101
• ωp = π/3
• ωs = π/2
H-45
Summary
FIR filters correspond to a moving average and have only zeros (no poles)
FIR filters are specific to discrete-time; they cannot be built in analog using R, L, C
Symmetrical FIR filters have (generalized) linear phase, which is impossible with IIR or analog
filters
Design optimal FIR filters using the Parks-McClellan algorithm (Remez exchange algorithm)
FIR filters are always BIBO stable and very numerically stable (to coefficient quantization, etc.)
Generally require a higher complexity to meet a given spec than an IIR filter, but the benefits can
outweigh the computational cost
H-46
Inverse Filter
and Deconvolution
LTI Signal Degradations
In many important applications, we do not observe the signal of interest x but rather a version y
processed by an LTI system with impulse response g
x g y
Examples:
• Digital subscriber line (DSL) communication (long wires)
• Echos in audio signals
• Camera blur due to misfocus or motion (2D)
• Medical imaging (CT scans), . . .
Goal: Ameliorate the degradation by passing y through a second LTI system in the hopes that
we can “cancel out” the effect of the first such that x̂ = x

x → g → y → h → x̂
H-48
LTI Signal Degradations in the z-Transform Domain
Goal: Ameliorate the degradation by passing y through a second LTI system in the hopes that
we can “cancel out” the effect of the first such that x̂ = x

x → g → y → h → x̂

X̂(z) = H(z) Y(z) = H(z) G(z) X(z)
H-49
Inverse Filter – Poles and Zeros
If the degradation filter G(z) is a rational function with zeros {ζi} and poles {pj}

G(z) = z^{N−M} (z − ζ1)(z − ζ2) · · · (z − ζM) / [(z − p1)(z − p2) · · · (z − pN)]

then the inverse filter H(z) is a rational function with zeros {pj} and poles {ζi}

H(z) = 1/G(z) = z^{M−N} (z − p1)(z − p2) · · · (z − pN) / [(z − ζ1)(z − ζ2) · · · (z − ζM)]
Assuming that G(z) and H(z) are causal, if any of the zeros of G(z) are outside the unit circle,
then H(z) is not BIBO stable, which means that the inverse filter does not exist
When G(z) is causal and all of its zeros are inside the unit circle, we say that it has minimum
phase; in this case an exact inverse filter H(z) exists
H-50
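A minimal deconvolution sketch, assuming a simple minimum-phase FIR degradation G(z) = 1 − 0.5z^{-1} (an arbitrary example; its one zero at z = 0.5 lies inside the unit circle, so the causal inverse is a stable IIR filter):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(50)          # the signal we wish to recover

# Degradation: y = g * x with G(z) = 1 - 0.5 z^-1 (minimum phase)
g = [1.0, -0.5]
y = signal.lfilter(g, [1.0], x)

# Inverse filter: H(z) = 1/G(z) = 1/(1 - 0.5 z^-1), i.e. swap numerator and denominator
x_hat = signal.lfilter([1.0], g, y)

print(np.allclose(x_hat, x))   # True: H(z) G(z) = 1 cancels the degradation
```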
Example: Exact Inverse Filter
[Plots: input x[n], degraded y[n] = (g ∗ x)[n], and recovered x̂[n] = (h ∗ y)[n]; frequency responses G(ω) (blue) and H(ω) = 1/G(ω) (red); pole-zero diagrams of G(z) and H(z)]
When G(z) is not minimum phase, we can still find an approximate inverse filter by regularizing
1/G(z); for example

Ha(z) = 1/(G(z) + r)

Typically we try to choose the smallest r such that Ha(z) is BIBO stable
H-52
Example: Approximate Inverse Filter
[Plots: input x[n], degraded y[n], and recovered x̂[n] = (ha ∗ y)[n]; frequency responses G(ω) (blue) and Ha(ω) = 1/(G(ω) + 1/6) (red); pole-zero diagrams of G(z) and Ha(z)]
x → g → y → h → x̂
Best case: When G(z) is causal and all of its zeros are inside the unit circle, we say that it has
minimum phase; in this case an exact inverse filter H(z) exists
Advanced topics beyond the scope of this course: blind deconvolution, adaptive filters (LMS alg.)
H-54
Matched Filter
Inner Product and Cauchy-Schwarz Inequality
Recall the inner product (or dot product) between two signals x, y
(whether finite- or infinite-length)

⟨y, x⟩ = Σn y[n] x∗[n]
H-56
Signal Detection Using Inner Product
We can determine if a target signal x of interest is present within a given signal y simply by
computing the inner product and comparing it to a threshold t > 0
|d| = |⟨y, x⟩|   ≥ t ⇒ signal is present;   < t ⇒ signal is not present

(Aside: In certain useful cases, this is the optimal way to detect a signal)

[Plots: target signal x[n] and observed signal y[n] = x[n] + e[n]]
H-57
Signal Detection With Unknown Shift
In many important applications, the target is time-shifted by some unknown amount ℓ

[Plots: target signal x[n] and observed signal y[n] = x[n − 15] + e[n]]
H-58
Solution: Signal Detection With Unknown Shift
In many important applications, the target is time-shifted by some unknown amount ℓ

y[n] = x[n − ℓ] + e[n]
Solution: Compute the inner product between y and the shifted target signal x[n − m] for all m ∈ Z

d[m] = ⟨y[n], x[n − m]⟩ = Σn y[n] x∗[n − m]

In statistics, d[m] is called the cross-correlation; it provides both
• The detection statistic to compare against the threshold t for each value of shift m
• An estimate for ℓ (the m that maximizes |d[m]|)

In words, the cross-correlation d[m] equals the convolution of y[n] with the time-reversed and
conjugated target signal x̃[n] = x∗[−n]

y → x∗[−n] → d
H-60
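A matched-filter sketch with NumPy's `correlate` (the pulse length, shift, and noise level are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.ones(10)                          # target: length-10 square pulse
ell = 15                                 # unknown shift to be estimated
y = np.zeros(40)
y[ell:ell + len(x)] += x                 # shifted target
y += 0.1 * rng.standard_normal(len(y))   # additive noise e[n]

# Cross-correlation d[m] = sum_n y[n] x*[n - m]
d = np.correlate(y, x, mode='full')
shifts = np.arange(-(len(x) - 1), len(y))   # shift value m for each entry of d

ell_hat = shifts[np.argmax(np.abs(d))]
print(ell_hat)   # 15: the cross-correlation peak locates the target
```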
Example: Matched Filter
Square pulse shifted by ℓ = 15: y[n] = x[n − 15] + e[n]

[Plots: target x[n], observation y[n] = x[n − 15] + e[n], and cross-correlation d[n] peaking at n = 15]
H-61
Application: Radar Imaging (1)
In a radar system, the time delay ℓ is linearly proportional to 2× the distance between the
antenna and the target

[Plots: transmitted pulse x[n], received signal y[n] = x[n − 15] + e[n], and cross-correlation d[n] peaking at the delay]
H-62
Application: Radar Imaging (2)
In a radar system, the time delay ℓ is linearly proportional to 2× the distance between the
antenna and the target

[Plots: transmitted pulse x[n], received signal y[n] = x[n − 15] + e[n], and cross-correlation d[n] peaking at the delay]
H-63
Summary
The inner product and Cauchy-Schwarz inequality provide a natural way to detect a target signal x
embedded in another signal y
• Compare magnitude of inner product to a threshold
Cross-correlation can be interpreted as the convolution of the signal y with a time-reversed and
conjugated version of x: the matched filter
H-64