Digital Signal Processing (DSP)
COURSE FILE
(A.Y. 2021-2022)
(Academic Regulations - 2018)
1. Cover Page
2. Vision of the Institute
3. Mission of the Institute
4. Vision of the Department
5. Mission of the Department
6. PEOs, POs and PSOs
7. Syllabus copy (scanned copy from the syllabus book)
8. Course objectives and Outcomes
9. Brief note on the course & how it fits in to the curriculum
10. Prerequisites, if any.
11. Instructional Learning Outcomes
12. Course mapping with PEOs, POs and PSOs.
13. Lecture plan with methodology being used/adopted.
14. Assignment questions (Unit wise)
15. Tutorial problems (Unit wise)
16. a) Unit wise short and long answer question bank
b) Unit wise Quiz Questions
17. Detailed notes
18. Additional topics, if any.
19. Known gaps, if any.
20. Discussion topics, if any.
21. University/Autonomous Question papers of previous years.
22. References, Journals, websites and E-links, if required.
23. Quality Control Sheets. (to be submitted at the end of the semester)
a. Course end survey
b. Feedback on Teaching Learning Process(TLP)
c. CO- attainment
24. Student List (can be submitted later)
25. Group-Wise students list for discussion topics (can be submitted later)
Branch: ECE
Year: III ECE Document No. GCET/ECE/----/-----
Semester: II No. of pages :232
Prepared by                          Updated by
Sign :    Date :                     Sign :    Date :
II. To train students with problem solving capabilities such as analysis and design with
adequate practical skills wherein they demonstrate creativity and innovation that
would enable them to develop state of the art equipment and technologies of
multidisciplinary nature for societal development.
7. Syllabus:
Course Objectives:
1. Understand fundamental concepts involved in the analysis and processing of discrete signals.
2. Distinguish between various discrete-time signals and systems.
3. Understand frequency-domain analysis of discrete signals and systems using DTFT, DFT and FFT tools.
4. Understand the design of Infinite Impulse Response (IIR) and Finite Impulse Response (FIR) filters for given specifications.
5. Understand multirate signal processing techniques and finite word-length effects.
Course Outcomes:
CO1. Perform analysis on discrete time signals and systems in the frequency domain using DFS,
DTFT and Z transform
CO2. Compute the DFT of a given discrete-time sequence and plot its spectrum.
CO3. Compute radix-2 FFT for a given sequence.
CO4. Design IIR and FIR filters for given specifications
CO5. Convert from one sampling rate to another. Analyze finite word length effects in digital
filters.
9. Brief note on the course & how it fits into the curriculum
Digital Signal Processing (DSP) is concerned with the representation, transformation and
manipulation of signals on a computer. After half a century of advances, DSP has become an
important field and has penetrated a wide range of application systems, such as consumer
electronics, digital communications, medical imaging and so on. With the dramatic increase in the
processing capability of signal processing microprocessors, the importance and role of DSP are
expected to accelerate and expand.
Discrete-Time Signal Processing is a general term that includes DSP as a special case. This course
will introduce the basic concepts and techniques for processing discrete-time signals. By the end of
this course, the students should be able to understand the most important principles in DSP. The
course emphasizes understanding and implementation of theoretical concepts, methods and
algorithms.
The following answers will give an idea of how it fits into the curriculum.
This course strengthens the students' capabilities in the analysis and design of digital filters.
ii.How is the course unique or different from other courses of the Program?
iii.What essential knowledge or skills should they gain from this experience?
Students acquire design, analysis and simulation capabilities with this course
iv.What knowledge or skills from this course will students need to have mastered to perform
well in future classes or later (Higher Education / Jobs)?
In order to design, simulate and develop LSI (Linear Shift Invariant) systems, this course
is essential.
vii. When students complete this course, what do they need to know or be able to do?
Able to design, analyze, simulate, compare and evaluate the digital circuits.
viii.Is there specific knowledge that the students will need to know in the future?
The concepts of z-transforms, DFT, FFT and finite word-length effects are needed for future
courses.
ix.Are there certain practical or professional skills that students will need to apply in the future?
YES
x. Five years from now, what do you hope students will remember from this course?
The concepts of different types of Digital filters (FIR and IIR) and importance of Sampling
rate conversion.
After completion of this course, the student will be able to design any digital filter as per the
given specifications.
xiv. What unique contributions to students’ learning experience does this course make?
It helps in executing mini and major projects having digital circuits and DSP processors.
xv. What is the value of taking this course? How exactly does it enrich the program? The
“Course Purpose” describes how the course fits into the student's educational experience
and curriculum in the program and how it helps in his/her professional career.
This course plays a vital role in the design and development of electronic and digital
communication systems useful to society, and it also helps the student's professional career
growth.
10. Prerequisite, if any
1) Students can understand the concept of discrete time signals & sequences.
2) Analyze and implement digital signal processing systems in time domain.
3) They can solve linear constant coefficient difference equations.
4) They can understand Frequency domain representation of discrete time signals and systems.
5) They can understand the practical purpose of stability and causality.
6) To determine stability, causality for a given impulse response.
7) Understand how analog signals are represented by their discrete-time samples, and in what
ways digital filtering is equivalent to analog filtering.
8) The basics of Z-transforms and its applications are studied.
9) Ability to understand discrete time domain and frequency domain representation of signals
and systems using DFS and DTFT.
10) Calculate the response of applying a given input signal to a system described by a linear
constant coefficient differential equation.
1: Slight (Low)  2: Moderate (Medium)  3: Substantial (High); if there is no correlation put "-"

CO#          PO1 PO2 PO3   PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
18EC3201.1    3   3   2     -   -   -   -   -   -   -    1    1    3    1
18EC3201.2    3   3   2     -   -   -   -   -   -   -    1    1    3    2
18EC3201.3    3   3   2     -   -   -   -   -   -   -    1    1    3    1
18EC3201.4    3   3   3     -   -   -   -   -   -   -    1    1    3    1
18EC3201.5    3   3   2     -   -   -   -   -   -   -    1    1    3    2
18EC3201.6    3   3   2     -   -   -   -   -   -   -    1    1    3    2
Average       3   3   2.166 -   -   -   -   -   -   -    1    1    3    1.5
II. To train students with problem solving capabilities such as analysis and design
with adequate practical skills wherein they demonstrate creativity and innovation that
would enable them to develop state of the art equipment and technologies of ✓
multidisciplinary nature for societal development.
Distribution of periods:
No. of classes required to cover GCET syllabus : 60
No. of classes required to solve University papers : 5
Total classes required : 65
14. Assignment Questions
Unit 1
i. T(x[n]) =
4) Determine the zero-input response of the system described by the second-order difference
equation y(n) - 3y(n-1) - 4y(n-2) = 0.
5) Compute the convolution of the signals x(n) = aⁿu(n) and h(n) = bⁿu(n) when
a ≠ b and when a = b.
7) Obtain the Direct form-I realization for the system described by the difference equation
y(n)=0.5y(n-1)-0.25y(n-2)+x(n)+0.4x(n-1)
8. a) Prove the Initial value theorem and final value theorem of Z Transforms.
b)The discrete time system is represented by the following difference equations in which x(n) is
14. (a) Discuss the impulse invariance method of deriving an IIR digital filter from the corresponding
analog filter.
(b) Use the bilinear transformation to convert the analog filter with system function
H(s) = (s + 0.1)/((s + 0.1)² + 9) into a digital IIR filter. Select T = 0.1 and compare
the locations of the zeros in H(z) with the locations of the zeros obtained by applying
the impulse invariance method.
b)Discuss Direct form, Cascade and Linear phase realization structures of FIR filters.
17. Discuss and draw various IIR realization structures like Direct form – I, Direct form-II,
18. What are the various basic building blocks in realization of Digital Systems and hence discuss
(b) In the above question, how many non-trivial multiplications are required?
UNIT-02
1) List the properties of the DFT and prove the following properties:
4) Compute the 8-point DFT of the following sequences using the DIT-FFT algorithm.
0 otherwise 0 otherwise
5) Find the IDFT of the sequence X(k) = {4, 1-j2.414, 0, 1-j0.414, 0, 1+j0.414, 0, 1+j2.414} using the DIF algorithm.
6) Draw the signal flow graph for 16-point DFT using a) DIT algorithm b) DIF algorithm.
7) Compute the 8-point DFT of the sequence x(n) = {1/2, 1/2, 1/2, 1/2} using the in-place radix-2 DIT-FFT algorithm.
b)Find the IDFT of the given sequence x(K) = {2, 2-3j, 2+3j, -2}.
9) Define Convolution. Compare Linear and Circular Convolution techniques. b) Find the
10) a) Design a high-pass filter using a Hamming window with a cut-off frequency of 1.2 radians/second
and N = 9.
b) Find the IDFT of the given sequence x(K) = {2, 2-3j, 2+3j, -2}.
12) a) Define DFT and IDFT. State any Four properties of DFT.
b)Find 8-Point DFT of the given time domain sequence x(n) = {1, 2, 3, 4}.
14) a)Develop DIT-FFT algorithm and draw signal flow graphs for decomposing the
15). a) For each of the following systems, determine whether or not the system is
A. T[x(n)] = x(n - n0)
B. T[x(n)] = e^{x(n)}
(b) A system is described by the difference equation y(n)-y(n-1)-y(n-2) = x(n-1). Assuming that
the system is initially relaxed, determine its unit sample response h(n).
16) .a)Derive the expressions for computing the FFT using DIT algorithm and hence draw the
UNIT-03
1).Obtain the Direct form-I, Direct form-II,Cascade Form and Parallel Form realization for the
3) List out the steps involved in designing an analog Butterworth low pass filter.
And determine Ha (S) and hence obtain H(Z) using Bilinear Transformation method.
6) A 4th-order Butterworth filter has cut-off frequency Ωc = 200 rad/sec.
1dB ripple in the pass band and 40dB attenuation in the stopband?
7) Given the specifications αp = 3 dB, αs = 16 dB, fp = 1 kHz and fs = 2 kHz, determine the order of the filter.
9) Determine the order of a low-pass Butterworth filter that has 3 dB attenuation at 500 Hz and an
UNIT-04
1) Determine the coefficients of a linear-phase FIR filter of length M = 15 which has a symmetric unit sample
response and a frequency response whose samples satisfy H(2πk/15) = 1 for k = 0, 1, 2, 3 and = 0 for k = 4, 5, 6, 7.
5) For the ideal frequency response Hd(e^jω) (equal to 1 in the passband and 0 otherwise),
find the value of h(n) for N = 11, find H(z), and plot the magnitude response.
7) Design an ideal differentiator with H(e^jω) = jω, -π ≤ ω ≤ π,
using a) a rectangular window and b) a Hamming window with N = 8. Plot the frequency response in both cases.
8) Using the frequency sampling method, design a band-pass filter with the following specifications:
sampling frequency F = 8000 Hz, cut-off frequencies fc1 = 1000 Hz and fc2 = 3000 Hz. Determine the filter coefficients.
b) Design an FIR digital low-pass filter using a Hanning window whose cut-off freq is
b)Design an FIR Digital High pass filter using Hamming window whose cut off
b)Design an FIR Digital Band pass filter using rectangular window whose
upper and lower cut off freq.’s are 1 & 2 rad/s and length of window N = 9.
b)Design an FIR Digital Low pass filter using rectangular window whose cut off
2) For the sequence x(n) = {5, 6, 8, 4, 2, 1, 3, 12, 10, 7, 11}, find the output sequence y(n) which is
b) Discuss the sampling rate conversion by a factor I/D with the help of a neat block diagram.
8.a) Define Interpolation and Decimation. List out the advantages of Sampling
rate conversion.
b)Discuss the sampling rate conversion by a factor I with the help of a Neat
block Diagram.
b) Discuss the process of decimation by a factor D and explain how the aliasing effect can be avoided.
y(n)=x(n)+0.81x(n-1)-0.81x(n-2)-0.45y(n-2).
Determine the transfer function of the system. Sketch the poles and zeroes on the Z-
plane.
11 (a) Compute Discrete Fourier transform of the following finite length sequence
considered to be of length N.
1) Determine the response y(n), n > 0, of the system described by the second-order
difference equation y(n) - 5y(n-1) + 6y(n-2) = x(n), for x(n) = 2ⁿu(n).
4). Obtain the i) Direct forms ii) cascade iii) parallel form realizations for the
5). Use the one-sided Z-transform to determine y(n) n ≥ 0 in the following cases.
iii) Apply the input x(n) = {1, 1, 1, . . .} and compute the first 10 samples of the output.
iv) Compute the first 10 samples of the output for the input given in part (iii) by convolution.
Tutorial -2
1. Calculate the 4 – point IDFT of X (K) = [1, -1, 2, -2] using DIT FFT algorithm. Compare
the number of calculations required to find DFT of a sequence using Radix -2 FFT
Algorithms and using DFT formula.
2. The DTFT of a real signal x(n) is X(F). How are the DTFTs of the following signals related
to X(F)?  (a) y(n) = x(-n)   (b) r(n) = x(n/4)   (c) h(n) = jⁿx(n)
Find the IDFT of the sequence X(k) = {4, 1-j2.414, 0, 1-j0.414, 0, 1+j0.414, 0, 1+j2.414} using the DIF
algorithm.
3.Draw the signal flow graph for 16-point DFT using a) DIT algorithm b) DIF algorithm.
Given x(n) = 2ⁿ and N = 8, find X(k) using the DIT-FFT algorithm.
Tutorial -4
1. Design an ideal low-pass filter with frequency response
Hd(e^jω) = 1 for -π/2 ≤ ω ≤ π/2
         = 0 for π/2 < |ω| ≤ π
2. Find the value of h(n) for N = 11, find H(z), and plot the magnitude response.
3. Design an ideal Hilbert transformer having frequency response H(e^jω) = j for -π ≤ ω ≤ 0 and = -j for 0 < ω ≤ π.
Tutorial -5
1. Explain the decimation process with a neat block diagram.
2. Consider the signal x(n) = sin(πn)u(n). Obtain the signal interpolated by a factor of 2.
3. Why is multirate digital signal processing needed?
4. Design a two-stage decimator for the following specifications: decimation factor = 50,
pass band 0 < f < 50, transition band 50 ≤ f ≤ 55, input sampling rate = 10 kHz,
ripples δ1 = 0.1 and δ2 = 0.001.
5. Design a linear-phase FIR filter that satisfies the following specifications based on a single-
stage and two-stage multirate structure.
1. What is meant by FIR filter? What are the advantages of FIR filter?
2. List the design techniques for FIR filter design?
3. What is Gibbs phenomenon?
4. Under what condition will the finite-duration sequence h(n) yield constant group delay (but not
constant phase delay) in its frequency response characteristics?
5. What are the desirable characteristics of windows?
6. Compare Hamming window with Kaiser window.
7. Draw the impulse response of an ideal low-pass filter.
8. What is the principle of designing FIR filter using frequency sampling method?
9. What is the necessary and sufficient condition for linear phase characteristics in FIR
filter?
10. Explain the procedure for designing FIR filters using windows?
Unit 4
1.The ___________is due to nonlinear phase characteristics of the filter.
2.In FIR filters ______________function is a linear function of ω
3.In rectangular window the width of main-lobe is equal to___________
4.The _________ window spectrum has the highest attenuation for side lobes.
5.The width of the main-lobe in window spectrum can be reduced by increasing the length
of___________.
Multiple choice Questions:
1. The frequency response of a digital filter is periodic in the range
a) 0 < ω < 2π   b) -π < ω < π   c) 0 < ω < π   d) 0 < ω < 2π or -π < ω < π
2. If ωc is the cut-off frequency of a high-pass filter, then the response lies only in the range
a) -ωc < ω < ωc   b) ωc < ω < π   c) -π < ω < ωc   d) -ωc < ω < π
3. Raised-cosine windows are also called generalized
a) Hamming windows   b) Hanning windows   c) Rectangular windows   d) Blackman windows
4. For a symmetric impulse response having an odd number of samples, N = 7, the centre of symmetry α is
equal to
a) 2   b) 5   c) 3.5   d) 3
5. A symmetric impulse response having an even number of samples cannot be used to design
a) a low-pass filter   b) a band-pass filter   c) a high-pass filter   d) a band-stop filter
Unit 5
3. If x(n) and y(n) are the input and output of an interpolator with sampling rate conversion factor B, then
a) y(n) = x(Bn)   b) y(n) = x(n/B)   c) y(n) = x(n)/B   d) y(n) = Bx(n)
4. To eliminate multiple images at the output during interpolation by I, the output is filtered to have
a bandwidth of
a) πI   b) π/I   c) I/π   d) π/I²
5. If A and B are the integer sampling rate conversion factors for decimation and interpolation
respectively, then the sampling rate conversion factor for conversion by a rational factor is
a) A/B   b) B/A   c) A²/B   d) B/A²
17. Detailed notes (some material is presented here and remaining material is
provided in pdf format)
What is DSP?
DSP, or Digital Signal Processing, as the term suggests, is the processing of signals by digital
means. A signal in this context can mean a number of different things. Historically the origins of
signal processing are in electrical engineering, and a signal here means an electrical signal carried
by a wire or telephone line, or perhaps by a radio wave. More generally, however, a signal is a
stream of information representing anything from stock prices to data from a remote-sensing
satellite.
In many cases, the signal is initially in the form of an analog electrical voltage or current, produced
for example by a microphone or some other type of transducer. In some situations the data is
already in digital form - such as the output from the readout system of a CD (compact disc) player.
An analog signal must be converted into digital (i.e. numerical) form before DSP techniques can
be applied. An analog electrical voltage signal, for example, can be digitized using an integrated
electronic circuit (IC) device called an analog-to-digital converter or ADC. This generates a digital
output in the form of a binary number whose value represents the electrical voltage input to the
device.
Signal processing
Signals commonly need to be processed in a variety of ways. For example, the output signal from
a transducer may well be contaminated with unwanted electrical "noise". The electrodes attached
to a patient's chest when an ECG is taken measure tiny electrical voltage changes due to the activity
of the heart and other muscles. The signal is often strongly affected by "mains pickup" due to
electrical interference from the mains supply. Processing the signal using a filter circuit can
remove or at least reduce the unwanted part of the signal. Increasingly nowadays the filtering of
signals to improve signal quality or to extract important information is done by DSP techniques
rather than by analog electronics.
Development of DSP
The development of digital signal processing dates from the 1960's with the use of mainframe
digital computers for number-crunching applications such as the Fast Fourier Transform (FFT),
which allows the frequency spectrum of a signal to be computed rapidly. These techniques were
not widely used at that time, because suitable computing equipment was available only in
universities and other scientific research institutions.
The introduction of the microprocessor in the late 1970's and early 1980's made it possible for
DSP techniques to be used in a much wider range of applications. However, general-purpose
microprocessors such as the Intel x86 family are not ideally suited to the numerically-intensive
requirements of DSP, and during the 1980's the increasing importance of DSP led several major
electronics manufacturers (such as Texas Instruments, Analog Devices and Motorola) to develop
Digital Signal Processor chips - specialized microprocessors with architectures designed
specifically for the types of operations required in digital signal processing. (Note that the acronym
DSP can variously mean Digital Signal Processing, the term used for a wide range of techniques
for processing signals digitally, or Digital Signal Processor, a specialized type of microprocessor
chip). Like a general-purpose microprocessor, a DSP is a programmable device, with its own
native instruction code. DSP chips are capable of carrying out millions of floating point operations
per second, and like their better-known general-purpose cousins, faster and more powerful
versions are continually being introduced.
Applications of DSP
Although the mathematical theory underlying DSP techniques such as Fast Fourier and Hilbert
Transforms, digital filter design and signal compression can be fairly complex, the numerical
operations required to implement these techniques are in fact very simple, consisting mainly of
operations that could be done on a cheap four-function calculator. The architecture of a DSP chip
is designed to carry out such operations incredibly fast, processing up to tens of millions of samples
per second, to provide real-time performance: that is, the ability to process a signal "live" as it is
sampled and then output the processed signal, for example to a loudspeaker or video display. All
of the practical examples of DSP applications mentioned earlier, such as hard disc drives and
mobile phones, demand real-time operation.
The major electronics manufacturers have invested heavily in DSP technology. Because they now
find application in mass-market products, DSP chips account for a substantial proportion of the
world market for electronic devices. Sales amount to billions of dollars annually, and seem likely
to continue to increase rapidly.
The analog signal - a continuous variable defined with infinite precision - is converted to a discrete
sequence of measured values which are represented digitally.
Only after it has been held can the signal be measured, and the measurement converted to a digital
value.
Note that the sampling takes place after the hold. This means that we can sometimes use a slower
Analogue to Digital Converter (ADC) than might seem required at first sight. The hold circuit
must act fast - fast enough that the signal is not changing during the time the circuit is acquiring
the signal value - but the ADC has all the time that the signal is held to make its conversion.
Sometimes we may have some a priori knowledge of the signal, or be able to make some
assumptions that will let us reconstruct the lost information.
1. Introduction
For a continuous-time signal x(t) we would ideally like to find its Fourier transform
X(f) = ∫_{-∞}^{∞} x(t) e^{-j2πft} dt,
but in practice only N samples of x(t), taken over a finite interval T, are available. The DFT then gives a
line spectrum at the frequencies ω_k = 2πk/T, k = 0, 1, ..., N-1, so the frequency resolution is limited
to 2π/T and the analysed range extends from 0 to 2πN/T.
Synthesis (inverse DFT):  x_n = (1/N) Σ_{k=0}^{N-1} X_k e^{j2πkn/N}   (periodic in n with period N)
Analysis (DFT):  X_k = Σ_{n=0}^{N-1} x_n e^{-j2πkn/N}   (periodic in k with period N),
where X_k is the line-spectrum value at ω_k = 2πk/T.
1. Preparations
(1) Ideal sampling waveform:  y_s(t) = Σ_{m=-∞}^{∞} δ(t - mT_s)
(2) Rectangular pulse (centre t_0 = 0):  Π(t/T) ↔ T sinc(Tf)
(3) Convolution:  x_1(t) * x_2(t) ↔ X_1(f) X_2(f)
(4) Multiplication:  x_1(t) x_2(t) ↔ X_1(f) * X_2(f)

Example signal:  x(t) = e^{-|t|}, with Fourier transform X(f) = 2/(1 + (2πf)²).

(1) Effect of sampling: the sampled signal x_s(t) = x(t) y_s(t) has spectrum
X_s(f) = Y_s(f) * X(f) = f_s Σ_{n=-∞}^{∞} X(f - n f_s) = 2 f_s Σ_{n=-∞}^{∞} 1/(1 + (2π(f - n f_s))²)

(2) Effect of the limited interval T (over which x(t) is sampled to collect data for the DFT):
the sampled data are multiplied by the window Π(t/T), so
x_sw(t) = x_s(t) Π(t/T)
X_sw(f) = (2 f_s Σ_{n=-∞}^{∞} {1 + (2π(f - n f_s))²}^{-1}) * (T sinc(Tf))

Effect of limited T: the DFT as an estimate of X(f) is even worse than X_s(f), due to the limited
frequency resolution caused by the windowing: X_sw(f) = X_s(f) * Φ(f), where Φ(f) = F{Π(t/T)},
so the contribution of X_s(f + φ) to X_sw(f) is determined by the weight Φ(φ).
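To make the effect of sampling and of the finite window concrete, the following MATLAB sketch estimates X(f) of x(t) = e^{-|t|} from N = fs·T samples and compares it with the exact transform 2/(1 + (2πf)²). The sampling rate fs and window length T below are illustrative values chosen for this sketch, not values from the notes.

fs = 50;  T = 20;                       % assumed sampling rate (Hz) and observation window (s)
t  = -T/2 : 1/fs : T/2 - 1/fs;          % N = fs*T samples centred on t = 0
x  = exp(-abs(t));                      % x(t) = e^{-|t|}
N  = length(x);
f  = (-N/2 : N/2 - 1) * (fs/N);         % frequency grid; resolution 1/T
% DFT-based estimate of X(f): scale by dt = 1/fs and undo the -T/2 time offset
Xest  = fftshift(fft(x)) / fs .* exp(1j*2*pi*f*T/2);
Xtrue = 2 ./ (1 + (2*pi*f).^2);
plot(f, abs(Xest), f, Xtrue, '--');  xlim([-2 2]);
xlabel('f (Hz)');  legend('DFT estimate', 'exact X(f)');

Shortening T makes the frequency grid coarser and the smearing more visible, which is exactly the effect of limited T described above.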
1. DFT Algorithm
Denote W_N = e^{-j2π/N}; then
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}

Properties of W_N^m:
(1) W_N^0 = (e^{-j2π/N})^0 = e^0 = 1,   W_N^N = e^{-j2π} = 1
(2) W_N^{N+m} = W_N^m, since
    W_N^{N+m} = (e^{-j2π/N})^{N+m} = (e^{-j2π/N})^N (e^{-j2π/N})^m = 1 · (e^{-j2π/N})^m = W_N^m
(3) W_N^{N/2} = e^{-j2π(N/2)/N} = e^{-jπ} = -1
    W_N^{N/4} = e^{-j2π(N/4)/N} = e^{-jπ/2} = -j
    W_N^{3N/4} = e^{-j2π(3N/4)/N} = e^{-j3π/2} = j
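As a quick check of the definition above, the sketch below builds the DFT directly from W_N and compares it with MATLAB's built-in fft; the test sequence is arbitrary.

N  = 8;
x  = rand(1, N);                        % any test sequence
n  = 0:N-1;  k = (0:N-1).';
WN = exp(-1j*2*pi/N);                   % W_N = e^{-j2*pi/N}
X  = (WN .^ (k*n)) * x.';               % X(k) = sum_n x(n) W_N^{nk}
max(abs(X - fft(x).'))                  % agrees with fft() to within round-off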
2. Examples
Example 10-3: Two-Point DFT
Given x(0), x(1):  X(k) = Σ_{n=0}^{1} x(n) W_2^{nk},  k = 0, 1
X(0) = Σ_{n=0}^{1} x(n) W_2^{n·0} = x(0) + x(1)
X(1) = x(0) W_2^0 + x(1) W_2^1 = x(0) + x(1) W_2^1 = x(0) - x(1)   (since W_2^1 = e^{-jπ} = -1)

Four-Point DFT
X(k) = Σ_{n=0}^{3} x(n) W_4^{nk},  k = 0, 1, 2, 3
X(0) = Σ_{n=0}^{3} x(n) W_4^{n·0} = x(0) + x(1) + x(2) + x(3)
X(1) = Σ_{n=0}^{3} x(n) W_4^{n} = x(0) W_4^0 + x(1) W_4^1 + x(2) W_4^2 + x(3) W_4^3
If we denote z(0) = x(0), z(1) = x(2), then Z(0) = z(0) + z(1) = x(0) + x(2);
and with v(0) = x(1), v(1) = x(3), V(0) = v(0) + v(1) = x(1) + x(3).
Decimation in time:
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{kn}
     = Σ_{r=0}^{N/2-1} g(r) W_N^{k(2r)} + Σ_{r=0}^{N/2-1} h(r) W_N^{k(2r+1)}    (k = 0, 1, ..., N-1)
     = Σ_{r=0}^{N/2-1} g(r) W_N^{2kr} + W_N^{k} Σ_{r=0}^{N/2-1} h(r) W_N^{2kr}
where g(r) = x(2r) are the even-indexed samples and h(r) = x(2r+1) the odd-indexed samples.
Since
W_N^{2kr} = (e^{-j2π/N})^{2kr} = (e^{-j2π/(N/2)})^{kr} = W_{N/2}^{kr},
we obtain
X(k) = Σ_{r=0}^{N/2-1} g(r) W_{N/2}^{kr} + W_N^{k} Σ_{r=0}^{N/2-1} h(r) W_{N/2}^{kr}
     = G(k) + W_N^{k} H(k),    k = 0, 1, ..., N-1
where G(k) is the N/2-point DFT output for the even-indexed samples and H(k) the N/2-point DFT output
for the odd-indexed samples:
G(k) = Σ_{r=0}^{N/2-1} g(r) W_{N/2}^{kr} = Σ_{r=0}^{N/2-1} x(2r) W_{N/2}^{kr}
H(k) = Σ_{r=0}^{N/2-1} h(r) W_{N/2}^{kr} = Σ_{r=0}^{N/2-1} x(2r+1) W_{N/2}^{kr}

Further decimation
g(0), g(1), ..., g(N/2-1) → G(k). Note that W_{N/2}^{k} = W_N^{2k}, since
W_{N/2}^{k} = (e^{-j2π/(N/2)})^{k} = (e^{-j2π·2/N})^{k} = (e^{-j2π/N})^{2k} = W_N^{2k}.
Each N/2-point DFT can therefore be split again in the same way; for example,
H(k) = H_E(k) + W_N^{2k} H_O(k)
where H_E(k) and H_O(k) are the DFTs of the even-indexed and odd-indexed halves of h(r).
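The decomposition X(k) = G(k) + W_N^k H(k) can be turned directly into a recursive program. The sketch below is a minimal radix-2 DIT FFT (it assumes the input is a row vector whose length is a power of two) and is easily checked against MATLAB's fft.

function X = ditfft(x)
% Radix-2 decimation-in-time FFT: X(k) = G(k) + W_N^k H(k)
N = length(x);                          % x assumed to be a row vector, N a power of two
if N == 1
    X = x;
else
    G = ditfft(x(1:2:end));             % N/2-point DFT of the even-indexed samples
    H = ditfft(x(2:2:end));             % N/2-point DFT of the odd-indexed samples
    W = exp(-1j*2*pi*(0:N/2-1)/N);      % twiddle factors W_N^k, k = 0, ..., N/2-1
    X = [G + W.*H,  G - W.*H];          % second half uses W_N^{k+N/2} = -W_N^k
end
end

% e.g.  x = rand(1, 8);  max(abs(ditfft(x) - fft(x)))   % difference is at round-off level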
For an 8-point DFT (decimation in frequency), split the sum over the first and second halves of the data:
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}
     = Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + Σ_{n=N/2}^{N-1} x(n) W_N^{nk}
     = Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + Σ_{m=0}^{N/2-1} x(N/2+m) W_N^{(N/2+m)k}
     = Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + W_N^{(N/2)k} Σ_{m=0}^{N/2-1} x(N/2+m) W_N^{mk}
Since W_N^{N/2} = -1, we have W_N^{(N/2)k} = (-1)^k, so
X(k) = Σ_{n=0}^{N/2-1} [x(n) + (-1)^k x(N/2+n)] W_N^{nk}

k even (k = 2r):
X(2r) = Σ_{n=0}^{N/2-1} [x(n) + x(N/2+n)] W_N^{2rn}
Since W_N^{2rn} = (e^{-j2π/N})^{2rn} = (e^{-j2π/(N/2)})^{rn} = W_{N/2}^{rn}, define
y(n) = x(n) + x(N/2+n),  n = 0, ..., N/2-1; then
X(2r) = Y(r) = Σ_{n=0}^{N/2-1} y(n) W_{N/2}^{rn}   (an N/2-point DFT of y(n))

k odd (k = 2r+1):
X(2r+1) = Σ_{n=0}^{N/2-1} [x(n) - x(N/2+n)] W_N^{n(2r+1)}
        = Σ_{n=0}^{N/2-1} [x(n) - x(N/2+n)] W_N^{n} W_N^{2rn}
Define z(n) = [x(n) - x(N/2+n)] W_N^{n}; then
X(2r+1) = Z(r) = Σ_{n=0}^{N/2-1} z(n) W_{N/2}^{rn}   (an N/2-point DFT of z(0), ..., z(N/2-1))

Each N/2-point DFT can be decomposed again in the same way. For Y(k), for example,
Y(k) = Σ_{n=0}^{N/4-1} [y(n) + (-1)^k y(N/4+n)] W_{N/2}^{nk}
and for k odd (k = 2r+1)
Y(2r+1) = Σ_{n=0}^{N/4-1} [y(n) - y(N/4+n)] W_{N/2}^{n} W_{N/2}^{2rn}
        = Σ_{n=0}^{N/4-1} y_2(n) W_{N/4}^{rn}   (an N/4-point DFT, with y_2(n) = [y(n) - y(N/4+n)] W_{N/2}^{n})
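One stage of this decimation-in-frequency split can be verified numerically. The sketch below forms y(n) and z(n) as defined above for an assumed N = 8 and confirms that their N/2-point DFTs give the even- and odd-indexed bins of X(k).

N = 8;  x = rand(1, N);                                        % any length-8 test sequence
y = x(1:N/2) + x(N/2+1:N);                                     % y(n) = x(n) + x(N/2+n)
z = (x(1:N/2) - x(N/2+1:N)) .* exp(-1j*2*pi*(0:N/2-1)/N);      % z(n) = [x(n) - x(N/2+n)] W_N^n
X = fft(x);
max(abs(fft(y) - X(1:2:N)))     % N/2-point DFT of y gives the even bins X(2r)
max(abs(fft(z) - X(2:2:N)))     % N/2-point DFT of z gives the odd bins X(2r+1)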
Computation ratio

Assumptions
(1) x(n) and y(n) (n = 0, ..., N-1) have DFTs X(k) and Y(k) (k = 0, ..., N-1).
(2) A, B are arbitrary constants.
(3) Subscript e denotes the even part and subscript o the odd part of a sequence:
    x_e(n) is even (and x_o(n) odd) about (N-1)/2 if N is even, and about N/2 if N is odd.

    Example, N = 10: x_e(n) is even about (N-1)/2 = 4.5, i.e.
    x(4) = x(5), x(3) = x(6), x(2) = x(7), x(1) = x(8), x(0) = x(9).

    Example, N = 9: x_e(n) is even about N/2 = 4.5, i.e. (indices interpreted modulo N)
    x(4) = x(5), x(3) = x(6), x(2) = x(7), x(1) = x(8), x(0) = x(9) = x(0).

(4) Any real sequence can be expressed in terms of its even and odd parts, x(n) = x_e(n) + x_o(n).
(6) x(n) ↔ X(k): the left side denotes the time-domain sequence and the right side its DFT.
Properties
1. Linearity:  A x(n) + B y(n) ↔ A X(k) + B Y(k)
3. Frequency shift:  x(n) e^{j2πmn/N} ↔ X(k - m)
4. Duality:  DFT{X(n)} = Σ_{n=0}^{N-1} X(n) e^{-j2πnk/N} = N x(-k), i.e.
   (1/N) Σ_{n=0}^{N-1} X(n) e^{-j2πnk/N} = x(-k)   (indices taken modulo N)
5. Circular convolution:  Σ_{m=0}^{N-1} x(m) y(n - m) = x(n) ⊛ y(n) ↔ X(k) Y(k)
6. Multiplication: the new sequence z(n) = x(n) y(n) has DFT
   Z(k) = (1/N) Σ_{m=0}^{N-1} X(m) Y(k - m) = (1/N) X(k) ⊛ Y(k)
7. Parseval's theorem:  Σ_{n=0}^{N-1} |x(n)|² = (1/N) Σ_{k=0}^{N-1} |X(k)|²
Symmetry:  x_or(n) ↔ j X_oi(k)
(the DFT of an odd real sequence is odd and purely imaginary)
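The circular-convolution and Parseval properties are easy to confirm numerically. The sketch below uses cconv from the Signal Processing Toolbox and two arbitrary length-8 sequences.

N = 8;
x = rand(1, N);  y = rand(1, N);
c1 = cconv(x, y, N);                    % N-point circular convolution
c2 = ifft(fft(x) .* fft(y));            % inverse DFT of the product of the DFTs
max(abs(c1 - c2))                       % property 5: equal to within round-off
abs(sum(abs(x).^2) - sum(abs(fft(x)).^2)/N)   % property 7 (Parseval): also near zero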
Example 7
Z(0) = 0,  Z(1) = 2 - j2,  Z(2) = 0,  Z(3) = 2 + j2

Example 8
DFT of x(n) = δ(n):
X(k) = Σ_{n=0}^{N-1} δ(n) W_N^{nk} = 1,   k = 0, 1, ..., N-1

Time-shift property (applied to the unit impulse):
DFT[δ(n - n0)] = Σ_{n=0}^{N-1} δ(n - n0) W_N^{nk} = W_N^{n0·k} = e^{-j2πkn0/N}

x_1(n) = 1 and x_2(n) = 1 for 0 ≤ n ≤ N - 1.
6. Applications of FFT

1. Filtering
Take the FFT (DFT) of x(0), ..., x(N-1). X(k) is then the line spectrum at
ω_k = 2πk/T = (2π/NΔt) k,  where Δt = T/N
(T is the interval over which x(0), ..., x(N-1) are sampled).
Inverse DFT:
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πnk/N}
A filtered signal can be reconstructed by keeping only the wanted spectral components, e.g.
x̂(n) = (1/N) Σ_{k=0}^{N0} X(k) e^{j2πnk/N}
For example, consider
x(n) = cos(πn/4) + cos(πn/2),   0 ≤ n ≤ 7.
Here ω_1 = π/4 = (2π/N)·1 and ω_2 = π/2 = (2π/N)·2 with N = 8, so the two components fall exactly
on the DFT bins k = 1 and k = 2.
How can we filter out the frequency components higher than π/4?

2. Spectrum Analyzers
Analog oscilloscopes give a time-domain display. The energy of the record is
E = Σ_{n=0}^{N-1} |x(n)|²
and, by Parseval's theorem,
E = Σ_{k=0}^{N-1} |X(k)|² / N
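As a sketch of the filtering question posed above (removing the component above π/4 from the two-tone example), keep only the DFT bins at or below k = 1, together with the mirror bin k = N-1, and take the inverse DFT:

n = 0:7;  N = 8;
x = cos(pi*n/4) + cos(pi*n/2);          % the two-tone example: bins k = 1,7 and k = 2,6
X = fft(x);
keep = [1 2 8];                         % MATLAB indices of bins k = 0, 1 and N-1
Xf = zeros(1, N);  Xf(keep) = X(keep);  % zero everything above pi/4 (and its mirror)
xf = real(ifft(Xf));                    % filtered signal
max(abs(xf - cos(pi*n/4)))              % essentially zero: only cos(pi*n/4) remains

This works exactly here because both tones fall on DFT bins; for arbitrary frequencies the spectral leakage discussed earlier would have to be taken into account.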
3.1. Introduction:
A general digital filter has a linear difference equation of the form
y[n] = Σ_{i=0}^{N} a_i x[n-i] - Σ_{k=1}^{M} b_k y[n-k]
which is of order max{N, M}, and is recursive if any of the b_k coefficients are non-zero. A
second order recursive digital filter therefore has the difference equation:
y[n] = a_0 x[n] + a_1 x[n-1] + a_2 x[n-2] - b_1 y[n-1] - b_2 y[n-2]
A digital filter with a recursive linear difference equation can have an infinite impulse-response.
Remember that the frequency-response of a digital filter with impulse-response {h[n]} is:
H(e^{jω}) = Σ_{n=-∞}^{∞} h[n] e^{-jωn}
Consider the response of a causal stable LTI digital filter to the special sequence {z^n} where z is
a complex number. If {h[n]} is the impulse-response, by discrete time convolution the output is a
sequence {y[n]} where
y[n] = Σ_{k=-∞}^{∞} h[k] z^{n-k} = z^n Σ_{k=-∞}^{∞} h[k] z^{-k} = H(z) z^n
The expression obtained for H(z) is the 'z-transform' of the impulse-response. H(z) is a complex
number when evaluated for a given complex value of z.
It may be shown that for a causal stable system, H(z) must be finite when evaluated for a complex
number z with modulus greater than or equal to one.
Since H(z) = Σ_{n=-∞}^{∞} h[n] z^{-n}
and the frequency-response is H(e^{jω}) = Σ_{n=-∞}^{∞} h[n] e^{-jωn},
it is clear that replacing z by e^{jω} in H(z) gives H(e^{jω}).
It is useful to represent complex numbers on an ‘Argand diagram’ as illustrated below. The main
reason for doing this is that the modulus of the difference between two complex numbers a+jb and
c+jd say i.e. | (a+jb) - (c+jd) |is represented graphically as the length of the line between the two
complex numbers as plotted on the Argand diagram.
[Argand diagram: real and imaginary axes, showing the complex numbers -3+3j and 1-2j, and a general point Re^{jθ} at distance R from the origin.]
If one of these complex numbers, c+jd say, is zero, i.e. 0+j0, then the modulus of the other number,
|a+jb|, is the distance of a+jb from the origin 0+j0 on the Argand diagram.
Of course, any complex number, a+jb say, can be converted to polar form Re^{jθ} where R = |a+jb|
and θ = tan⁻¹(b/a). Plotting a complex number expressed as Re^{jθ} on an Argand diagram is also
illustrated above. We draw an arrow of length R starting from the origin and set at an angle θ from
the 'real part' axis (measured anti-clockwise). Re^{jθ} is then at the tip of the arrow. In the illustration
above, θ is about π/4 or 45 degrees. If R = 1, Re^{jθ} = e^{jθ} and on the Argand diagram would be a point
at a distance 1 from the origin. Plotting e^{jθ} for θ values in the range 0 to 2π (360°) produces points
all of which lie on a 'unit circle', i.e. a circle of radius 1, with centre at the origin.
Where the complex numbers plotted on an Argand diagram are values of z for which we are
interested in H(z), the diagram is referred to as 'the z-plane'. Points with z = e^{jω} lie on the unit
circle, as shown in Fig 5.1. Remember that |e^{jω}| = |cos(ω) + j sin(ω)| = √(cos²(ω) + sin²(ω)) = 1.
Therefore evaluating the frequency-response H(e^{jω}) for ω in the range 0 to π is equivalent to
evaluating H(z) at points on the upper half of the unit circle in the z-plane.
Example 3.1: Find H(z) for the difference equation: y[n] = x[n] + x[n-1]
Example 3.2:
Find H(z) for the recursive difference equation: y[n] = a 0 x[n] + a 1 x[n-1] - b 1 y[n-1]
Solution:
The method used in Example 3.1 is not so easy because the impulse-response can now be infinite.
Fortunately there is another way. Remember that if x[n] = z^n then y[n] = H(z) z^n, y[n-1] =
H(z) z^{n-1}, etc. Substitute into the difference equation to obtain:
H(z) z^n = a_0 z^n + a_1 z^{n-1} - b_1 H(z) z^{n-1}
Dividing through by z^n and solving for H(z) gives
H(z) = (a_0 + a_1 z^{-1}) / (1 + b_1 z^{-1})
By the same method, H(z) for a general digital filter whose difference-equation was given earlier
is:
H(z) = (a_0 + a_1 z^{-1} + a_2 z^{-2} + ... + a_N z^{-N}) / (b_0 + b_1 z^{-1} + b_2 z^{-2} + ... + b_M z^{-M}),  with b_0 = 1.
Given H(z) in this form, we can easily go back to its difference-equation and hence its signal-
flow graph, as illustrated by the following example.
Example 3.3: Give a signal flow graph for the second order digital filter with:
H(z) = (a_0 + a_1 z^{-1} + a_2 z^{-2}) / (1 + b_1 z^{-1} + b_2 z^{-2})
The signal-flow graph in Fig 5.2 is readily deduced from this difference-equation. It is referred to
as a second order or ‘bi-quadratic’ IIR section in ‘direct form 1’.
[Fig 5.2: 'Direct Form I' bi-quadratic IIR section, with feed-forward coefficients a0, a1, a2 and feedback coefficients -b1, -b2.]
Alternative signal flow graphs can generally be found for a given difference-equation.
Considering again the ‘Direct Form I’ bi-quadratic section in Fig 5.2, re-ordering the two halves
in this signal flow graph gives Fig 5.3 which, by Problem 5.9, will have the same impulse-response
as the signal-flow graph in fig 5.2. Now observe that Fig 5.3 may be simplified to the signal-flow
graph in Fig 5.4 which is known as a ‘Direct Form II’ implementation of a bi-quadratic section.
It has the minimum possible number of delay boxes and is said to be ‘canonical’. Its system
function is identical to that of the ‘Direct Form I’ signal-flow graph, and therefore it can implement
any second order bi-quadratic system function.
[Fig 5.3 and Fig 5.4: the re-ordered structure and the canonical 'Direct Form II' bi-quadratic section, with internal node signals W1 and W2 and coefficients a0, a1, a2, -b1, -b2.]
Example 3.4:
Given values for a0, a1, a2, b1 and b2, write a program to implement Direct Form II.
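A minimal sketch answering Example 3.4 is given below. The function name and the sample-by-sample structure are illustrative; the recursion uses the internal node signals w1 and w2 of the Direct Form II graph.

function y = biquad_df2(x, a0, a1, a2, b1, b2)
% Direct Form II bi-quad: H(z) = (a0 + a1 z^-1 + a2 z^-2)/(1 + b1 z^-1 + b2 z^-2)
w1 = 0;  w2 = 0;                        % contents of the two delay boxes
y  = zeros(size(x));
for n = 1:length(x)
    w    = x(n) - b1*w1 - b2*w2;        % recursive (denominator) part first
    y(n) = a0*w + a1*w1 + a2*w2;        % non-recursive (numerator) part
    w2 = w1;  w1 = w;                   % shift the delays
end
end

% Check against MATLAB's filter(), with illustrative coefficient values:
% x = randn(1, 50);
% max(abs(biquad_df2(x, 0.098, 0.196, 0.098, -0.94, 0.33) - filter([0.098 0.196 0.098], [1 -0.94 0.33], x)))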
3.5. System function: The expression obtained for H(z) is a ratio of polynomials in z^{-1}. H(z)
is the 'system function'. When |z| < 1, H(z) need not be finite.
The expression above for H(z) for a general digital filter may be re-expressed as:
H(z) = (a_0 z^N + a_1 z^{N-1} + ... + a_N) / (z^M + b_1 z^{M-1} + ... + b_M)
The denominator and numerator polynomials may now be expressed in factorised form to obtain:
H(z) = a_0 (z - z_1)(z - z_2)(z - z_3)...(z - z_N) / [(z - p_1)(z - p_2)(z - p_3)...(z - p_M)]
The roots of the numerator, z_1, z_2, ..., z_N, are called the 'zeros' of H(z).
The roots of the denominator, p_1, p_2, ..., p_M, are called the 'poles' of H(z).
H(z) will be infinite when evaluated with z equal to a pole, and will become zero with z equal to
a zero, except in the special case where the zero coincides exactly with one of the poles.
For a causal stable system, H(z) must be finite for |z| ≥ 1. Therefore there cannot be a pole
whose modulus is greater than or equal to 1. All poles must satisfy |z| < 1, and when plotted
on the Argand diagram, this means that they must lie inside the unit circle. There is no restriction
on the positions of zeros.
Assume we wish to design a 4th order 'notch' digital filter to eliminate an unwanted sinusoid
at 800 Hz without severely affecting the rest of the signal. The sampling rate is FS = 10 kHz.
FS = 10000;
FL = 800 - 25;  FU = 800 + 25;
[a b] = butter(2, [FL FU]/(FS/2), 'stop');
freqz(a, b);
The frequency-responses (gain and phase) produced by the final two MATLAB statements are as
follows:
[Figure: gain (dB) and phase (degrees) responses of the 4th order notch filter, plotted from 0 to 5000 Hz; the gain dips sharply at 800 Hz.]
Since the Butterworth band-stop filter will have -3dB gain at the two cut-off frequencies
FL = 800-25 and FU=800+25, the notch has ‘-3 dB frequency bandwidth’: 25 + 25 = 50 Hz.
Now consider how to implement the 4th order digital filter. The MATLAB function gave us:
This implementation works fine in MATLAB. But ‘direct form’ IIR implementations of order
greater than two are rarely used. Sensitivity to round-off error in coefficient values will be high.
Also the range of ‘intermediate’ signals in the z-1 boxes will be high.
High word-length floating point arithmetic hides this problem, but in fixed point arithmetic, great
difficulty occurs. Instead we use ‘cascaded bi-quad sections’
Given a 4th order transfer function H(z). Instead of the direct form realization below:
To convert the 4th order transfer function H(z) to this new form is definitely a job for MATLAB.
Do it as follows after getting a & b for the 4th order transfer function, H(z), as before:
[SOS G] = tf2sos(a,b)
G = 0.978
In MATLAB, 'SOS' stands for 'second order section' (i.e. bi-quad) and the function 'tf2sos'
converts the coefficients in arrays 'a' and 'b' to the new set of coefficients stored in array 'SOS'
and the constant G. The array SOS has two rows: one row for the first bi-quad section and one
row for the second bi-quad section. In each row, the first three terms specify the non-recursive
(numerator) coefficients and the last three terms the recursive (denominator) coefficients of that section.
[Figure: fourth order IIR notch filter realised as two bi-quad (SOS) sections.]
This is now a practical and realizable IIR digital ‘notch’ filter, though we sometimes implement
the single multiplier G =0.918 by two multipliers, one for each bi-quad section. More about this
later.
How good is a notch filter? We can start to answer this question by specifying the filter's 3dB
bandwidth i.e. the difference between the frequencies where the gain crosses 0.707 (-3dB ). We
should also ask what is the gain at the notch frequency (800 Hz in previous example); i.e. what is
the ‘depth’ of the notch. If it is not deep enough either (i) increase the -3 dB bandwidth or (ii)
increase the order. Do both if necessary. To ‘sharpen’ the notch, decrease the -3dB bandwidth,
but this will make the notch less deep; so it may be necessary to increase the order to maintain a
deep enough notch. This is an ‘ad-hoc’ approach – we can surely develop some theory later. It
modifies the more formal approach, based on poles and zeroes, adopted last year.
Solution:
[a b]=butter(2,[FL,FU]/(FS/2), ‘stop’);
[SOS G] = tf2sos(a,b)
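Putting the pieces of this section together, the sketch below designs the notch filter, converts it to two bi-quad sections and applies it to a test signal containing the unwanted 800 Hz tone. The test signal is illustrative; sosfilt runs the cascaded sections.

FS = 10000;  FL = 800 - 25;  FU = 800 + 25;
[a b]   = butter(2, [FL FU]/(FS/2), 'stop');   % 4th order band-stop (notch), as above
[SOS G] = tf2sos(a, b);                        % two cascaded bi-quad sections plus gain G
t = (0:FS-1)/FS;                               % one second of data
x = sin(2*pi*100*t) + sin(2*pi*800*t);         % wanted 100 Hz tone plus unwanted 800 Hz tone
y = G * sosfilt(SOS, x);                       % run the cascade
% After the initial transient, y is essentially the 100 Hz tone alone;
% compare abs(fft(x)) and abs(fft(y)) around the 800 Hz bin to see the notch.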
Many design techniques for IIR discrete time filters have adopted ideas and terminology developed
for analogue filters, and are implemented by transforming the system function, H a(s), of an
analogue ‘prototype’ filter into the system function H(z) of a digital filter with similar, but not
identical, characteristics.
For analogue filters, there is a wide variety of techniques for deriving Ha(s) to have a specified
type of gain-response. For example, it is possible to derive Ha(s) for an nth order analogue
Butterworth low-pass filter, with gain response:
Ga(Ω) = 1 / √(1 + (Ω/Ω_C)^{2n})
It is then possible to transform Ha(s) into H(z) for an equivalent digital filter. There are many ways
of doing this, the most famous being the ‘bilinear transformation’. It is not the only possible
transformation, but a very useful and reliable one.
The bilinear transformation involves replacing s by (2/T)(z-1)/(z+1), but fortunately, MATLAB
takes care of all the detail and we can design a Butterworth low pass filter simply by executing the
MATLAB statement:
[a b] = butter(N, fc)
N is the required order and fc is the required ‘3 dB’ cut-off frequency normalised (as usual with
MATLAB) to fS/2. Analogue Butterworth filters have a gain of 0 dB at zero frequency which
falls to -3 dB at the cut-off frequency. These two properties are preserved by the bilinear
transformation, though the traditional Butterworth shape is changed. The shape change is caused
by a process referred to as ‘frequency warping’. Although the gain-response of the digital filter is
consequently rather different from that of the analogue Butterworth gain response it is derived
from, the term ‘Butterworth filter’ is still applied to the digital filter. The order of H(z) is equal to
the order of Ha(s)
Frequency warping:
It may be shown that the new gain-response G() = Ga() where = 2 tan(/2). The graph of
against below, shows how in the range - to is mapped to in the range - to . The
mapping is reasonably linear for in the range -2 to 2 (giving in the range -/2 to /2), but as
increases beyond this range, a given increase in produces smaller and smaller increases in .
The effect of frequency warping is well illustrated by considering the analogue gain-response
shown in fig 5.17(a). If this were transformed to the digital filter gain response shown in fig
5.17(b), the latter would become more and more compressed as → .
[Fig 3.16: Frequency warping: digital frequency ω (radians/sample, from -π to π) plotted against analogue frequency Ω (radians/second, from -12 to 12).]
'Prototype' analogue transfer function: Although the shape changes, we would like G(ω) at
its cut-off ω_C to be the same as Ga(Ω) at its cut-off frequency Ω_C. If Ga(Ω) is Butterworth, it is -3 dB at
its cut-off frequency, so we would like G(ω) to be -3 dB at its cut-off ω_C. This is achieved if the
analogue prototype is designed to have its cut-off frequency at Ω_C = 2 tan(ω_C/2).
[Fig. 3.17(a): analogue gain response.  Fig. 3.17(b): effect of the bilinear transformation.]
Designing the analogue prototype with cut-off frequency 2 tan(ω_C/2) guarantees that the digital
filter will have its cut-off at ω_C.
Design of a 2nd order IIR low-pass digital filter by the bilinear transform method ('by hand')
Let the required cut-off frequency be ω_C = π/4 radians/sample. We need a prototype transfer function
Ha(s) for a 2nd order analogue Butterworth low-pass filter with 3 dB cut-off at Ω_C = 2 tan(ω_C/2),
i.e. Ω_C = 2 tan(π/8) ≈ 0.828. The prototype with cut-off frequency 1 is
Ha(s) = 1 / (1 + √2 s + s²)
When the cut-off frequency is Ω = Ω_C rather than Ω = 1, the second order expression for Ha(s)
becomes:
Ha(s) = 1 / (1 + √2 (s/Ω_C) + (s/Ω_C)²)
Replacing s by jΩ and taking the modulus of this expression gives G(Ω) = 1/√(1 + (Ω/Ω_C)^{2n}) with
n = 2. This is the 2nd order Butterworth low-pass gain-response approximation. Deriving the above
expression for Ha(s), and corresponding expressions for higher orders, is not part of our syllabus.
It will not be necessary since MATLAB will take care of it.
Setting Ω_C = 0.828 in this formula, then replacing s by 2(z-1)/(z+1), gives us H(z) for the required
IIR digital filter. You can check this 'by hand', but fortunately MATLAB does all this for us.
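The 'by hand' procedure can be checked in MATLAB with the bilinear function, which applies s -> 2*fs*(z-1)/(z+1); choosing fs = 1 reproduces the substitution s -> 2(z-1)/(z+1) used above. This is a sketch of that check rather than part of the original notes.

wc  = pi/4;                              % required digital cut-off (radians/sample)
Wc  = 2*tan(wc/2);                       % pre-warped analogue cut-off, approximately 0.828
num = Wc^2;  den = [1 sqrt(2)*Wc Wc^2];  % Ha(s) = Wc^2/(s^2 + sqrt(2)*Wc*s + Wc^2)
[az bz] = bilinear(num, den, 1);         % bilinear transform with s = 2(z-1)/(z+1)
[a b]   = butter(2, 0.25);               % MATLAB's own design of the same filter
max(abs(az - a)), max(abs(bz - b))       % both differences are at round-off level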
Example 3.7
Using MATLAB, design a second order Butterworth-type IIR low-pass filter with c = / 4.
Solution:
[a b] = butter(2, 0.25)
returns b = [1 -0.94 0.33] and a numerator equivalent to 0.098 (1 + 2z^{-1} + z^{-2}), so that
H(z) = 0.098 (1 + 2 z^{-1} + z^{-2}) / (1 - 0.94 z^{-1} + 0.33 z^{-2})
which may be realised by the signal flow graph in Fig 5.18. Note the saving of two multipliers by
using a single multiplier to scale the input by 0.098.
[Fig. 5.18: Direct Form II signal flow graph of the filter designed in Example 3.7.]
Recursive filters of order greater than two are highly sensitive to quantisation error and overflow.
It is normal, therefore, to design higher order IIR filters as cascades of bi-quadratic sections.
MATLAB does not do this directly, as demonstrated by Example 3.8.
Example 3.8: Design a 4th order Butterworth-type IIR low-pass digital filter with 3 dB
cut-off at one sixteenth of the sampling frequency fS.
Solution: Relative cut-off frequency is /8. The MATLAB command below produces the arrays
a and b with the numerator and denominator coefficients for the 4th order system function H(z).
[a b] = butter(4, 0.125)
This corresponds to the ‘4th order ‘direct form’ signal flow graph shown below.
[Figure 3.19: A 4th order 'direct form II' realisation (not commonly used), with input scaling 0.00093 and feedback multipliers 2.977, -3.422, 1.79 and -0.356.]
Higher order IIR digital filters are generally not implemented like this. Instead, they are
implemented as cascaded biquad or second order sections (SOS). Fortunately MATLAB can
convert the direct-form coefficients to this form:
[a b] = butter(4, 0.125)
[sos G] = tf2sos(a,b)
sos = [ 1  2  1  1  -1.365  0.478
        1  2  1  1  -1.612  0.745 ]
G = 0.00093
This produces a 2-dimensional array 'sos' containing two sets of biquad coefficients and a 'gain'
constant G. A mathematically correct system function based on this data is as follows:
H(z) = 0.00093 [(1 + 2z^{-1} + z^{-2}) / (1 - 1.365z^{-1} + 0.478z^{-2})] [(1 + 2z^{-1} + z^{-2}) / (1 - 1.612z^{-1} + 0.745z^{-2})]
In practice, especially in fixed point arithmetic, the effect of G is often distributed among the two
sections. Noting that 0.033 x 0.028 ≈ 0.00093, and noting also that the two sections can be in
either order, an alternative expression for H(z) is as follows:
H(z) = [0.033 (1 + 2z^{-1} + z^{-2}) / (1 - 1.612z^{-1} + 0.745z^{-2})] [0.028 (1 + 2z^{-1} + z^{-2}) / (1 - 1.365z^{-1} + 0.478z^{-2})]
This alternative expression for H(z) may be realised in the form of cascaded bi-quadratic sections
as shown in fig 5.20.
[Fig. 3.20: Fourth order IIR Butterworth low-pass filter with cut-off fs/16, realised as two cascaded bi-quad sections.]
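A quick numerical check (a sketch, using an arbitrary test signal) confirms that the cascade of the two bi-quad sections is equivalent to the 4th order direct form:

[a b]   = butter(4, 0.125);
[sos G] = tf2sos(a, b);
x  = randn(1, 200);                      % any test signal
y1 = filter(a, b, x);                    % 4th order direct-form implementation
y2 = G * sosfilt(sos, x);                % cascaded second order sections
max(abs(y1 - y2))                        % identical to within round-off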
[Fig 5.21(a): gain response of the 4th order analogue Butterworth prototype, plotted against Ω (radians/second).]
Ga(Ω) = 1 / (1 + Ω^8)^{1/2}
(with cut-off frequency normalised to 1) as used by MATLAB as a prototype. Fig 5.21(b) shows
the gain-response of the derived digital filter which, like the analogue filter, is 1 at zero frequency
and 0.707 (-3 dB) at the cut-off frequency (π/8 ≈ 0.39 radians/sample). Note however that the
analogue gain approaches 0 as Ω → ∞ whereas the gain of the digital filter becomes exactly zero
at ω = π. The shape of the Butterworth gain response is 'warped' by the bilinear transformation.
However, the 3 dB point occurs exactly at ω_C for the digital filter, and the cut-off rate becomes
sharper and sharper as ω → π because of the compression as Ω → ∞.
The bilinear transformation may be applied to analogue system functions which are high-pass,
band-pass or band-stop to obtain digital filter equivalents. For example a ‘high-pass’ digital filter
may be designed as illustrated below:
Example 3.9 Design a 4th order high-pass IIR filter with cut-off frequency fs/16.
Solution: Execute the following MATLAB commands and proceed as for low-pass
[a b] = butter(4,0.125,’high’);
freqz(a,b);
[sos G] = tf2sos(a,b)
Wide-band band-pass and band-stop filters (fU >> 2fL) may be designed by cascading low-pass
and high-pass sections, but 'narrow band' band-pass/stop filters (fU not >> 2fL) will not be very
accurate if this cascading approach is used. The MATLAB band-pass approach always works, i.e.
for narrowband and wideband. A possible source of confusion is that specifying an order ‘2’
produces what many people (including me, Barry) would call a 4th order IIR digital filter. The
design process carried out by ‘butter’ involves the design of a low-pass prototype and then
applying a low-pass to band-pass transformation which doubles the complexity. The order
specified is the order of the prototype. So if we specify 2nd order for band-pass we get a 4th order
system function which can be re-expressed (using tf2sos) as TWO biquad sections.
Example 3.10: Design a 2nd (4th) order band-pass filter with ω_L = π/4 and ω_U = π/2.
[a b] = butter(2,[0.25 0.5])
freqz(a,b);
[sos G] = tf2sos(a,b)
MATLAB output is:
1 -2 1 1 -1.0524 0.6232
G = 0.098
[Figure: magnitude (dB) and phase (degrees) responses of the band-pass filter of Example 3.10, plotted against normalized frequency (x π rad/sample).]
Example 3.11: Design a 4th (8th) order band-pass filter with ω_L = π/4 and ω_U = π/2.
[a b] = butter(4,[0.25 0.5])
[sos G] = tf2sos(a,b)
1 2. 1 1 -0.046 0.724
1 -2 1 1 -1.244 0.793
G = 0.01
[Figure: magnitude (dB) and phase (degrees) responses of the band-pass filter of Example 3.11, plotted against normalized frequency (x π rad/sample).]
[sos G] = tf2sos(a,b)
G = 0.347
[Figure: magnitude (dB) and phase (degrees) responses, plotted against normalized frequency (x π rad/sample).]
IIR type digital filters have the advantage of being economical in their use of delays, multipliers
and adders. They have the disadvantage of being sensitive to coefficient round-off inaccuracies
and the effects of overflow in fixed point arithmetic. These effects can lead to instability or serious
distortion. Also, an IIR filter cannot be exactly linear phase.
FIR filters may be realised by non-recursive structures which are simpler and more convenient for
programming especially on devices specifically designed for digital signal processing. These
structures are always stable, and because there is no recursion, round-off and overflow errors are
easily controlled. A FIR filter can be exactly linear phase. The main disadvantage of FIR filters
is that large orders can be required to perform fairly simple filtering tasks.
Problems:
3.2 Show that passing any sequence {x[n]} through a system with H(z) = z^{-1} produces {x[n-1]}.
H(z) = ⎯⎯⎯⎯⎯
1 - 2 z -1
3.4 Draw the signal-flow graph for Example 3.3, and plot its poles and zeros.
3.5 If discrete time LTI systems L1 and L2, with impulse responses {h1[n]} and {h2[n]}
respectively, are serially cascaded as shown below, calculate the overall impulse response.
[Block diagram: x[n] -> L1 ({h1[n]}) -> L2 ({h2[n]}) -> y[n]]
3.6. Design a 4th order band-pass IIR digital filter with lower & upper cut-off
frequencies at 300 Hz & 3400 Hz when fS = 8 kHz.
3.7. Design a 4th order band-pass IIR digital filter with lower & upper cut-off
frequencies at 2000 Hz & 3000 Hz when fS = 8 kHz.
3.8. What limits how good a notch filter we can implement on a fixed point DSP processor?
3.9. What order of FIR filter would be required to implement a /4 notch approximately as good
3.10. What order of FIR low-pass filter would be required to be approx as good as the 2nd order
UNIT-IV
Design of DIGITAL FILTERS (FIR) – Structure of FIR Systems:
4.1. Introduction
[Fig. 4.1: transversal FIR filter structure: the input x[n] passes through a chain of z^{-1} delays, the delayed samples are multiplied by the coefficients a0, a1, ..., aM and summed to give y[n].]
Its impulse-response is {..., 0, ..., a0, a1, a2, ..., aM, 0, ...} and its frequency-response is the DTFT
of the impulse-response, i.e.
H(e^{jω}) = Σ_{n=-∞}^{∞} h[n] e^{-jωn} = Σ_{n=0}^{M} a_n e^{-jωn}
The design task is to choose the coefficients a_n so that H(e^{jω})
is close to some desired or target frequency-response H'(e^{jω}), say. The inverse DTFT of H'(e^{jω})
gives the required impulse-response:
h[n] = (1/2π) ∫_{-π}^{π} H'(e^{jω}) e^{jωn} dω
The methodology is to use the inverse DTFT to get an impulse-response {h[n]} and then realise
some approximation to it. Note that the DTFT formula is an integral, it has complex numbers and
the range of integration is from -π to π, so it involves negative frequencies.
Two reminders about integrals:
(1) ∫_{-1}^{1} e^{at} dt = (1/a) e^{at} evaluated between -1 and 1 = (1/a)(e^{a} - e^{-a})
(2) For any x(t), ∫_{a}^{b} x(t) dt is the area under the curve x(t) between a and b.
H(e^{jω}) = Σ_{n=-∞}^{∞} h[n] e^{-jωn}
H(e^{-jω}) = Σ_{n=-∞}^{∞} h[n] e^{jωn}
If h[n] is real then h[n]e^{jωn} is the complex conjugate of h[n]e^{-jωn}. Adding up terms gives
H(e^{-jω}) as the complex conjugate of H(e^{jω}).
Because of the range of integration (-π to π) of the DTFT formula, it is common to plot graphs of
G(ω) and φ(ω) over the frequency range -π to π rather than 0 to π. As G(ω) = G(-ω) for a real
filter, the gain-response will always be symmetric about ω = 0.
4.2. Design of an FIR low-pass digital filter
Assume we require a low-pass filter whose gain-response approximates the ideal 'brick-wall' gain-
response in Figure 4.2.
[Fig. 4.2: ideal brick-wall low-pass gain response, G(ω) = 1 for |ω| ≤ π/3 and 0 for π/3 < |ω| ≤ π.]
If we take the phase-response φ(ω) to be zero for all ω, the required frequency-response is:
H'(e^{jω}) = G(ω) e^{jφ(ω)} = 1 for |ω| ≤ π/3;  0 for π/3 < |ω| ≤ π
By the inverse DTFT (Fig 4.3a),
h[n] = (1/2π) ∫_{-π/3}^{π/3} 1·e^{jωn} dω = 1/3 for n = 0;  (1/(nπ)) sin(nπ/3) for n ≠ 0
The ideal impulse-response {h[n]} with each sample h[n] = (1/3)sinc(n/3) is therefore as follows:
{h[n]} = { ..., -0.055, -0.07, 0, 0.14, 0.28, 0.33, 0.28, 0.14, 0, -0.07, -0.055, ... }
A digital filter with this impulse-response would have exactly the ideal frequency-response we
applied to the inverse-DTFT, i.e. a 'brick-wall' low-pass gain response and phase = 0 for all ω. But
{h[n]} has non-zero samples extending from n = -∞ to ∞, so it is not a finite impulse-response. It
is also not causal, since h[n] is not zero for all n < 0. It is therefore not realisable in practice.
(1) Set h'[n] = h[n] for -M/2 ≤ n ≤ M/2, and h'[n] = 0 otherwise.
The resulting causal impulse response may be realised (after a delay of M/2 samples) by setting
a_n = h[n - M/2] for n = 0, 1, 2, ..., M.
Taking M = 4, for example, the finite impulse response obtained for the π/3 cut-off low-pass
specification is: {0.14, 0.28, 0.33, 0.28, 0.14}.
The resulting FIR filter is as shown in Figure 4.1 with a0 = 0.14, a1 = 0.28, a2 = 0.33, a3 = 0.28,
a4 = 0.14. (Note: a 4th order FIR filter has 4 delays and 5 multiplier coefficients.)
The gain & phase responses of this FIR filter are sketched below.
[Fig. 4.4: gain and phase responses of the 4th order FIR low-pass filter.]
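Since the filter is non-recursive, its responses can be plotted with freqz using a denominator of 1. A sketch for the M = 4 coefficients just obtained:

a = [0.14 0.28 0.33 0.28 0.14];          % truncated (1/3)sinc(n/3), delayed by M/2 = 2
freqz(a, 1);                             % gain (dB) and phase against normalised frequency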
Clearly, the effect of the truncation of {h[n]} to |n| ≤ M/2, and the M/2-sample delay, is to produce
gain and phase responses which are different from those originally specified. Considering the gain-
response first, the cut-off rate is by no means sharp, and two 'ripples' appear in the stop-band, the
peak of the first one being at about -21 dB.
The phase-response is not zero for all values of ω as was originally specified, but is linear phase
(i.e. a straight line graph through the origin) in the pass-band of the low-pass filter (-π/3 to π/3),
with slope -M/2, with M = 4 in this case. This means that φ(ω) = -(M/2)ω for |ω| ≤ π/3; i.e. we get
a linear phase-response (for |ω| ≤ π/3) with a phase-delay of M/2 samples. It may be shown that
the phase-response is linear phase because the truncation was done symmetrically about n = 0.
Now let's try to improve the low-pass filter by increasing the order to ten. Taking 11 terms of
{(1/3) sinc(n/3)} we get, after delaying by 5 samples:
freqz( [-0.055, -0.069, 0, 0.138, 0.276, 0.333, 0.276, 0.138, 0, -0.069, -0.055] );
It may be seen in the gain-response, as reproduced below, that the cut-off rate for the 10th order
FIR filter is sharper than for the 4th order case, there are more stop-band ripples and, rather
disappointingly, the gain at the peak of the first ripple after the cut-off remains at about -21 dB.
This effect is due to a well known property of Fourier series approximations, known as the Gibbs
phenomenon. The phase-response is linear phase in the passband (-π/3 to π/3) with a phase delay
of 5 samples. As seen in fig 4.6, going to 20th order produces even faster cut-off rates and more
stop-band ripples, but the main stop-band ripple remains at about -21dB. This trend continues
with 40th and higher orders as may be easily verified. To improve matters we need to discuss
‘windowing’.
Fig 4.5: Gain response of tenth order lowpass FIR filter with ω_C = π/3
4.3. Windowing:
To design the FIR filters considered up to now we effectively multiplied {h[n]}, as calculated by
the inverse DTFT, by a rectangular window sequence {r_{M+1}[n]} where
r_{M+1}[n] = 1 for -M/2 ≤ n ≤ M/2, and 0 for |n| > M/2.
This causes a sudden transition to zero at the window edges and it is these transitions that produce
the stop-band ripples. To understand why, we need to know that the DTFT of {r M+1[n]} is as
follows:
R_{M+1}(e^{jω}) = sin((M+1)ω/2) / sin(ω/2)   for ω ≠ 0, ±2π, ...
               = M + 1                       otherwise
Note that {r_{M+1}[n]} has non-zero values at M+1 values of n, which include n = 0. A graph of
R_{M+1}(e^{jω}) against ω for M = 20 is shown below. It is purely real, and this is because r_{M+1}[n]
is symmetric about n = 0.
[Figure: Dirichlet kernel R_{21}(e^{jω}) of order 20, plotted for ω from -π to π radians/sample; peak value about 21 at ω = 0.]
It looks rather like a ‘sinc’ function in that it has a ‘main lobe’ and ‘ripples’. It is a little different
from a sinc in that the ripples do not die away quite as fast. It is these ripples that cause the ripples
in the stop-bands of FIR digital filters. It may be shown that:
(a) the height of RM+1(ej) is M+1
(b) the area under the main lobe remains approximately 1 for all values of M.
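The closed-form expression for R_{M+1}(e^{jω}) can be checked against a direct evaluation of the DTFT sum; the sketch below does this for M = 20.

M = 20;
w = linspace(-pi, pi, 1001);  w(w == 0) = eps;          % avoid 0/0 at w = 0
R_formula = sin((M+1)*w/2) ./ sin(w/2);
n = -M/2 : M/2;                                         % r[n] = 1 on this range
R_direct  = real(sum(exp(-1j * w(:) * n), 2)).';        % direct DTFT sum for each w
max(abs(R_formula - R_direct))                          % round-off level; peak value is M+1 = 21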
Frequency-domain convolution:
Multiplying two sequences {x[n]} and {y[n]} to produce {x[n].y[n]} is called 'time-domain
multiplication'.
It may be shown that if {x[n]} has DTFT X(e^{jω}) and {y[n]} has DTFT Y(e^{jω}), then the DTFT of
{x[n].y[n]} is the frequency-domain convolution
(1/2π) ∫_{-π}^{π} X(e^{jθ}) Y(e^{j(ω-θ)}) dθ
If H'(e^{jω}) has an ideal 'brick-wall' gain-response, and R_{M+1}(e^{jω}) is as shown in the previous graph,
convolving H'(e^{jω}) with R_{M+1}(e^{jω}) reduces the cut-off rate and introduces stop-band ripples.
The sharper the main-lobe of R_{M+1}(e^{jω}) and the lower the ripples, the better.
For any given frequency ω, the result is 1/(2π) times the area under the curve R_{M+1}(e^{j(ω-θ)}) for θ
from -π/3 to +π/3.
Consider what happens as ω increases from 0 towards π. When ω = 0, the whole area of the main
lobe of R_{M+1}(e^{jω}) is included in the integral and the gain G(ω) is close to 1 (= 0 dB). When ω =
π/3 the gain drops to approximately 0.5 (= -6 dB), since only half the area of the main lobe is
included. When ω is further increased above π/3, less and less of the main lobe's area is
included, so the gain becomes very small as only the ripples are being integrated. Because of
the ripples, for some values of ω in the stop-band the positive and negative areas will cancel
out, giving a gain of zero (-∞ dB). For other values of ω in the stop-band, there will be slightly
more positive area than negative area, or vice versa. Thus the stop-band ripples are created.
So we conclude that ripples in the gain-responses of FIR digital filters designed with rectangular
windows arise from the frequency-domain convolution between the ideal (target) frequency
response and the DTFT RM+1(ej) of the rectangular window. The gain at the cut-off frequency
will be approximately –6 dB less than the pass-band gain (normally 0 dB) because only half the
main lobe lies within the ideal filter’s pass-band. Ripples also occur in the pass-band but we can
hardly notice them.
Many other types of window sequence exist (e.g. Hamming, Kaiser). Also, slightly different
formulae exist for the Hann window. The formula used by MATLAB is different, so stick to mine
for the moment. Perhaps the best known window is the Hamming window, whose formula is very
similar to that of the Hann window: it is also a 'raised cosine' window, with the constants 0.5 and
0.5 replaced by 0.54 and 0.46, and it has a similar effect except that it is generally considered to
be slightly better. Like the Hann window, the (M+1)th order Hamming window has non-zero values
at M+1 values of n, including n = 0.
Figure: DTFTs of the rectangular (Rect) and Hann windows of the same order (M = 20), plotted against Ω (radians/sample).
To draw this graph it was necessary to derive the DTFT of {wM+1[n]} for a Hann window. Don't
worry about this for now; just look at the graph.
Comparing the DTFT of a rectangular and a Hann window of the same order:
(a) The height of WM+1(e^{jΩ}) is (M+1)/2, which is half that of RM+1(e^{jΩ}).
(b) The main lobe of WM+1(e^{jΩ}) is about twice the width of the main lobe of RM+1(e^{jΩ}).
(c) It may be shown that 1/(2π) times the area of the main lobe of WM+1(e^{jΩ}) remains about 1.
(d) The ripples in WM+1(e^{jΩ}) are greatly reduced in comparison to RM+1(e^{jΩ}). This is
good!
The main lobe of WM+1(e^{jΩ}), being less sharp than that of RM+1(e^{jΩ}), will reduce the sharpness
of the cut-off of the filter designed with the Hann window. This is the price to be paid for reducing
stop-band ripples. Note that the ripples have not been eliminated entirely.
When M = 4, the Hann window {wM+1[n]} = {.., 0, .., 0, 0.25, 0.75, 1, 0.75, 0.25, 0, .., 0, ..}.
Multiplying the ideal impulse-response by this window term by term, and delaying by M/2 = 2 samples,
we obtain the finite impulse-response:
{.., 0, .., 0, 0.04, 0.21, 0.33, 0.21, 0.04, 0, .., 0, ..}
The resulting "Hann-windowed" FIR filter of order 4 is as shown in Figure 4.1 with a0 = 0.04,
a1 = 0.21, etc. Its gain-response is as shown in Figure 4.9.
The Hann window gradually reduces the amplitude of the ideal impulse-response towards zero at
the edges of the window, rather than truncating it as the rectangular window does. As seen in the
graph above, the effect on the gain-response of the FIR filter obtained is:
i) to greatly reduce the stop-band ripples ( good );
ii) to reduce the sharpness of the cut-off ( the price paid ).
The phase-response is not affected in the passband. We can improve the cut-off rate by going to
higher orders. The graphs below are for 10th and 20th order ( Hann windowed ):
MATLAB program to design & graph 10th order FIR lowpass filter with Hann window:
M=10; Fs=8000;
for n = -M/2 : M/2
w = 0.5*(1+cos(pi*n/(1+M/2)));
a(1+M/2+n) = 0.3333*sinc(n/3)*w;
end;
freqz(a,1,1000,Fs);
axis([0,Fs/2,-50,0]);
‘fir1’ uses the ‘windowing method’ we have just seen. By default it uses a Hamming window, which
is very similar to the Hann window.
To design a 10th order FIR lowpass filter with Hamming window & cut-off frequency π/3 (= Fs/6):
c=fir1(10, 0.33);
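If a Hann window is wanted instead of the default Hamming, it can be passed explicitly. A minimal sketch (the sampling rate 8000 Hz is assumed here, matching the earlier examples, and is used only for plotting):
c = fir1(10, 0.33, hann(11));    % 10th order, cut-off Fs/6, explicit Hann window
freqz(c, 1, 1000, 8000);         % plot gain and phase responses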
It may be observed that the gain at the peak of the first stop-band ripple has been reduced from
about −21 dB to about −44 dB. Since 20·log10(1/10) = −20 dB, at the frequency of the first
stop-band ripple the amplitude of the signal being filtered is reduced by a factor of about 10 when
the FIR filter is designed with a rectangular window.
Since 20·log10(1/100) = −40 dB, the same amplitude is reduced by a factor of more than 100 when
the FIR filter is designed with a Hann window.
If this is not good enough we must use a different window (e.g. Hamming or Kaiser).
The Kaiser window offers a range of options, from rectangular, through Hamming, towards even
lower stop-band ripples.
KW = kaiser(M+1,beta)
produces a Kaiser window array of length M+1 for any value of beta > 0.
When beta (β) = 0, this is a rectangular window, and when beta = 5.4414 we get (approximately) a
Hamming window.
Increasing beta further gives further reduced stop-band ripples, with a reduced cut-off sharpness.
A slight complication is that MATLAB arrays must start at index 1, rather than −M/2, so we have
to add 1+M/2 to each value of n to find the right entry in array KW.
See Slides 84-90 for gain responses of FIR low-pass filters designed using Kaiser windows with
different values of beta. These are compared with filters designed with Hamming windows.
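A minimal sketch of a Kaiser-window design using fir1 (the cut-off 0.33 ≈ π/3 and the 8000 Hz plotting rate are assumed, as in the earlier low-pass examples):
beta = 5.4414;                    % approximately Hamming-like, as noted above
KW = kaiser(11, beta);            % 11-point Kaiser window for a 10th order filter
c = fir1(10, 0.33, KW);
freqz(c, 1, 1000, 8000);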
The windowing method extends directly to other filter types. As an example, consider a band-pass
FIR filter whose ideal (target) frequency response is
    H(e^{jΩ}) = 0 : |Ω| < π/4
                1 : π/4 ≤ |Ω| ≤ π/2
                0 : π/2 < |Ω| ≤ π
Applying the inverse DTFT (not forgetting the negative values of Ω) gives
    h[n] = (1/2π) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ
         = (1/2π) ∫_{−π/2}^{−π/4} e^{jΩn} dΩ + (1/2π) ∫_{π/4}^{π/2} e^{jΩn} dΩ
Evaluating the integrals in this expression will produce a nice formula for h[n].
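Sketching the working, the two integrals evaluate to
    h[n] = (1/(πn))·( sin(nπ/2) − sin(nπ/4) )   for n ≠ 0,
and h[0] = (1/2π)·(2 × π/4) = 1/4, i.e. the total passband width divided by 2π.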
Truncating symmetrically, windowing and delaying this sequence, as for the low-pass case earlier,
produces the causal impulse-response of an FIR filter of the required order. This will be linear
phase for the same reasons as in the low-pass case. Its impulse-response will be symmetric about
n=M/2 when M is the order. MATLAB is able to design high-pass, band-pass and band-stop linear
phase FIR digital filters by this method.
c = fir1(10,[0.2 0.4],'bandpass') designs a 10th order band-pass filter with cut-off frequencies
0.2 and 0.4.
c = fir1(20,[0.2 0.4],'stop') designs a 20th order band-stop filter with cut-off frequencies 0.2 and
0.4.
By default ‘fir1’ uses a ‘Hamming’ window (very similar to a Hann) and scales the pass-band gain
to 0 dB. Linear phase response is obtained.
To design an FIR digital filter of even order M, with gain response G(Ω) and linear phase, by the
windowing method:
instead of obtaining H(e^{jΩ}) = G(Ω), we get H(e^{jΩ}) = e^{−jΩM/2}·G~(Ω), with G~(Ω) a distorted
version of G(Ω), the distortion being due to windowing.
The phase-response is therefore φ(Ω) = −ΩM/2, which is a linear phase-response with phase-delay
M/2 samples at all frequencies in the range 0 to π. This is because −φ(Ω)/Ω = M/2 for all Ω.
Notice that the filter coefficients, and hence the impulse-response of each of the digital filters we
have designed so far are symmetric in that h[n] = h[M-n] for all n in the range 0 to M where M is
the order. If M is even, this means that h[M/2 - n] = h[M/2 + n] for all n in the range 0 to M/2.
The impulse response is then said to be 'symmetric' about sample M/2. The following example
illustrates this for M = 6, where there are seven non-zero samples within {h[n]}:
{… 0, …, 0, 2, -3, 5, 7, 5, -3, 2, 0, …,0, … }
The most usual case is where M is even, but, for completeness, we should briefly consider the case
where M is odd. In this case, we can still say that {h[n]} is 'symmetric about M/2' even though
sample M/2 does not exist. The following example illustrates the point for M = 5, where {h[n]}
therefore has six non-zero samples:
{…, 0, …, 0, 1, 3, 5, 5, 3, 1, 0, …, 0, …}
When M is odd, h[(M-1)/2 - n] = h[(M+1)/2 + n] for n = 0, 1, …, (M-1)/2.
It may be shown that FIR digital filters whose impulse-responses are symmetric in this way are
linear phase. We can easily illustrate this for either of the two examples just given. Take the
second: its frequency-response is the DTFT of {h[n]}, which can be written out as follows.
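Sketching the working, with the six non-zero samples taken at n = 0, 1, …, 5:
    H(e^{jΩ}) = 1 + 3e^{−jΩ} + 5e^{−j2Ω} + 5e^{−j3Ω} + 3e^{−j4Ω} + e^{−j5Ω}
              = e^{−j2.5Ω}·( 2cos(2.5Ω) + 6cos(1.5Ω) + 10cos(0.5Ω) ),
which is a purely real function of Ω multiplied by e^{−j2.5Ω}. The phase-response is therefore −2.5Ω
(apart from possible jumps of π where the real factor changes sign), i.e. linear phase with a delay of
M/2 = 2.5 samples.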
It is also possible to design FIR filters which are not linear phase. The technique described in this
section is known as the ‘windowing’ technique or the ‘Fourier series approximation technique’.
The design of linear phase FIR digital filters by the windowing technique discussed above
is readily carried out by the command 'FIR1' provided by the 'signal processing toolbox' in
MATLAB. The filter may be applied to a segment of sampled sound stored in an array by the
command 'filter'. To illustrate the use of these commands we now design and implement a 128th
order FIR band-pass digital filter with lower and upper cut-off frequencies of 300 Hz and 3.4 kHz
respectively, and apply it to a wav file containing mono music sampled at 11.025 kHz. (This makes
the music sound as if it has been transmitted over a wired telephone line.)
Notes:
(1) The FIR cut-off frequencies must be specified relative to fS/2 rather than in
radians/sample. This is achieved by dividing each cut-off frequency in radians/sample by π.
(2) By default FIR1 uses a Hamming window. Other available windows, such as Hann, can
be specified with an optional trailing argument.
(3) By default, the filter is scaled so that the center of the first pass-band has magnitude exactly
one after windowing.
clear all;
[x, fs, nbits] = wavread('music.wav');    % hypothetical input file name
a = fir1(128, [300 3400]/(fs/2));         % 128th order band-pass, cut-offs 300 Hz and 3.4 kHz
y = filter(a, 1, x );
wavwrite(y, fs, nbits, 'capnew.wav');     % write the filtered signal
An FIR digital filter design technique which is better than the windowing technique, but more
complicated, is known as the ‘Remez exchange algorithm’. It was developed by McClellan and
Parks and is available in MATLAB. The following MATLAB program designs a 40th order FIR
low-pass filter whose gain is specified to be unity ( i.e. 0 dB ) in the range 0 to 0.3π radians/sample
and zero in the range 0.4π to π. The gain in the “ transition band ” between 0.3π and 0.4π is not
specified. The 41 coefficients will be found in array ‘a’. Notice that, in contrast to the gain-
responses obtained from the 'windowing' technique, the Remez exchange algorithm produces
'equi-ripple' gain-responses (fig 4.14), where the peaks of the stop-band ripples are equal rather
than decreasing with increasing frequency. The highest peak in the stop-band will be lower than
the highest stop-band peak obtained from the windowing technique for the same filter order.
Fs=8000;                                   % sampling rate assumed, as in the earlier examples
a = remez(40, [0 0.3 0.4 1], [1 1 0 0]);   % band edges specified relative to Fs/2
freqz (a,1,1000,Fs);
Fig 4.14: Gain response of 40th order FIR lowpass filter designed by “ Remez ”
MATLAB programs were presented in Section 3 to illustrate how an FIR digital filter as designed
above may be implemented on a computer or microprocessor. These programs made full use of
the powerful and highly accurate floating point arithmetic operations available in MATLAB.
However FIR digital filters are often implemented in mobile battery powered equipment such as
a mobile phone where a floating point processor would be impractical as it would consume too
much power. Low power fixed point DSP processors are the norm for such equipment, typically
with a basic 16-bit word-length. Such processors must be programmed essentially using only
integer arithmetic, and it is interesting now to consider how an FIR filter could be programmed in
this way.
Take as a simple example the 4th order FIR low-pass digital filter designed earlier with impulse
response: {..,0,..,0, .04, .21,.33,.21,.04, 0,..,0,..}. Rounding each coefficient to the nearest integer
would clearly be a mistake because they would all become zero. The solution is simple: multiply
each coefficient by a suitably large constant (say 100, or preferably a power of two such as 1024),
round the results to the nearest integers, and compensate for the scaling at the filter's output.
The larger the constant, the less the effect of rounding and the more accurate the coefficients.
However we must be careful not to choose too large a constant because the number of bits per
word is limited often to 16. If the integers produced during the calculation get too large, we risk
overflow and very serious non-linear distortion. Where the result of an addition of positive
integers is too large for the available 16-bits, it may become, by 'wrap-around', a large negative
number which may cause very serious distortion. Similarly the addition of two negative numbers
could wrap around to a large positive number if overflow occurs. So here we have a difficult
balancing act to perform between coefficient inaccuracy and overflow. Modern fixed point DSP
processors offer useful facilities and extra bits to help with the design of fixed point DSP programs,
but this remains a difficult area in general.
Fortunately, FIR digital filters are particularly easy to program in fixed point arithmetic and this
is one of their main advantages. For one thing, unlike IIR digital filters, they can never become
unstable as there is no feedback. The effects of rounding and overflow can be a nightmare when
they are fed back recursively as with IIR filters.
In some cases, overflows (with wrap-around) can be allowed to occur repeatedly with an FIR filter
with the sure knowledge that they will eventually be cancelled out by corresponding wrap-around
overflows in the opposite sense. So we can generally risk overflow more readily with FIR digital
filters than with IIR digital filters, and thus have greater coefficient accuracy.
Fixed point DSP programmers often describe the coefficient scaling process used above, i.e.
multiplying by constants which are powers of two, in a slightly different way. Scaling by 1024
is adopting a 'Q-format' of ten: the programmer is effectively assuming a decimal point (or
binary point) to exist ten bit positions from the right within the 16-bit word. The decimal point is
'invisible' and often nothing about it appears in the program, apart from judicious comments. The
programmer must keep track of the Q-formats used in different parts of complex programs and
make sure the correct compensation (e.g. dividing by 1024) is applied when needed to produce a
correct output. The effects of limited word-length fixed point arithmetic may be studied in
MATLAB by restricting programs to use integers and integer operations only. A MATLAB
implementation of the 4th order low-pass filter mentioned above using integer arithmetic only is
now given.
A = round(100*[0.04 0.21 0.33 0.21 0.04]);   % scaled integer coefficients: [4 21 33 21 4]
x = [0 0 0 0 0 ] ;
while 1
x(1) = input('Next input sample: ');         % source of input samples assumed here
Y = A(1)*x(1);
for k = 5 : -1: 2
Y = Y + A(k)*x(k);
x(k) = x(k-1);
end;
Y = round( Y / 100) ;                        % compensate for the coefficient scaling
disp(Y);                                     % output sample
end;
FIR filters are easy to program in fixed point arithmetic. They never become unstable as there is
no feedback. They can be exactly linear phase.
In some cases, overflows can be allowed to occur since, if the overall gain is never greater than 1,
you know that a positive overflow will eventually be cancelled out by a negative overflow or vice versa.
This works only if you do not use ‘saturation mode’ arithmetic, which avoids ‘wrap-around’. We can
therefore risk overflow more readily with FIR digital filters than with IIR digital filters, and thus
have greater coefficient accuracy.
4.1 Design a 10th order FIR low-pass digital filter with cut-off frequency at Fs/4, with and without
a Hann window. Use MATLAB to compare the gain-responses obtained.
4.2 Design a tenth order FIR bandpass digital filter with lower and upper cut-off frequencies at
π/8 and π/3 respectively.
4.3 Write a MATLAB program for one of these filters using integer arithmetic only.
4.4 Design a 4th order FIR high-pass filter with cut-off frequency at π/3.
4.7. Design a sixth order linear phase FIR filter whose gain-response approximates that shown
in fig 4.13. Plot its gain-response using MATLAB.
4.8. Show that a linear phase lead of φ(Ω) = −kΩ corresponds to a delay of k samples.
Answer to 4.6: For linear phase, the impulse-response must be symmetric about some value of n, say
n = M. If it is an IIR, it goes on for ever as n → ∞, so it must also go on for ever backwards as n → −∞.
It would then have to be non-zero for values of n < 0, i.e. non-causal.
But the most common reason is that multirate DSP can greatly increase processing efficiency (even by
orders of magnitude!), which reduces DSP system cost. This makes the subject of multirate DSP vital to all
professional DSP practitioners.
1. Resampling:To combine decimation and interpolation in order to change the sampling rate by a
fractional value that can be expressed as a ratio. For example, to resample by a factor of 1.5, you just
interpolate by a factor of 3 then decimate by a factor of 2 (to change the sampling rate by a factor of
3/2=1.5.)
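A minimal MATLAB sketch of resampling by 3/2 (the 440 Hz test tone and the 8 kHz rate are made-up values; the toolbox function 'resample' combines the interpolation, filtering and decimation steps described here):
Fs = 8000;
t  = 0:1/Fs:0.01;
x  = sin(2*pi*440*t);        % hypothetical test tone
y  = resample(x, 3, 2);      % interpolate by 3, decimate by 2: new rate 12 kHz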
Decimation
2.1 Basics
2.1.1 What are “decimation” and “downsampling”?
Loosely speaking, “decimation” is the process of reducing the sampling rate. In practice, this usually implies
lowpass-filtering a signal, then throwing away some of its samples.
“Downsampling” is a more specific term which refers to just the process of throwing away samples, without
the lowpass filtering operation. Throughout this FAQ, though, we’ll just use the term “decimation” loosely,
sometimes to mean “downsampling”.
Tip: You can remember that “M” is the symbol for decimation factor by thinking of “deci-M-ation”. (Exercise
for the student: which letter is used as the symbol for interpo-L-ation factor?)
Almost anything you do to/with the signal can be done with fewer operations at a lower sample rate, and the
workload is almost always reduced by more than a factor of M.
For example, if you double the sample rate, an equivalent filter will require four times as many operations to
implement. This is because both amount of data (per second) and the length of the filter increase by two, so
convolution goes up by four. Thus, if you can halve the sample rate, you can decrease the work load by a
factor of four. I guess you could say that if you reduce the sample rate by a factor of M, the workload
for a filter goes down to (1/M)^2 of what it was.
In most cases, though, you’ll end up lowpass-filtering your signal prior to downsampling, in order to enforce
the Nyquist criteria at the post-decimation rate. For example, suppose you have a signal sampled at a rate of
30 kHz, whose highest frequency component is 10 kHz (which is less than the Nyquist frequency of 15 kHz).
If you wish to reduce the sampling rate by a factor of three to 10 kHz, you must ensure that you have no
components greater than 5 kHz, which is the Nyquist frequency for the reduced rate. However, since the
original signal has components up to 10 kHz, you must lowpass-filter the signal prior to downsampling to
remove all components above 5 kHz so that no aliasing will occur when downsampling.
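A sketch of this 30 kHz example in MATLAB (the test signal and the filter order of 100 are assumptions, not part of the text):
Fs = 30000; M = 3;
t  = 0:1/Fs:0.05;
x  = sin(2*pi*2000*t) + sin(2*pi*9000*t);  % components at 2 kHz and 9 kHz
b  = fir1(100, 5000/(Fs/2));               % lowpass, cut-off 5 kHz (the new Nyquist)
xf = filter(b, 1, x);                      % remove components above 5 kHz
y  = xf(1:M:end);                          % keep every Mth sample: rate is now 10 kHz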
2.2 Multistage
2.2.1 Can I decimate in multiple stages?
Yes, so long as the decimation factor, M, is not a prime number. For example, to decimate by a factor of 15,
you could decimate by 5, then decimate by 3. The more prime factors M has, the more choices you have. For
example you could decimate by a factor of 24 using:
• one stage: 24
• two stages: 6 and 4, or 8 and 3
• three stages: 4, 3, and 2
• four stages: 3, 2, 2, and 2
2.2.3 OK, so how do I figure out the optimum number of stages, and the decimation factor at each stage?
That’s a tough one. There isn’t a simple answer to this one: the answer varies depending on many things, so
if you really want to find the optimum, you have to evaluate the resource requirements of each possibility.
However, here are a couple of rules of thumb which may help narrow down the choices:
2.3 Implementation
2.3.1 How do I implement decimation?
Decimation consists of the processes of lowpass filtering, followed by downsampling.
To implement the filtering part, you can use either FIR or IIR filters.
To implement the downsampling part (by a downsampling factor of “M”) simply keep every Mth sample, and
throw away the M−1 samples in between. For example, to decimate by 4, keep every fourth sample, and throw
three out of every four samples away.
Beauty, eh?
2.3.3 If I’m going to throw away most of the lowpass filter’s outputs, why bother to calculate them in the first
place?
You may be onto something. In the case of FIR filters, any output is a function only of the past inputs (because
there is no feedback). Therefore, you only have to calculate outputs which will be used.
For IIR filters, you still have to do part or all of the filter calculation for each input, even when the
corresponding output won’t be used. (Depending on the filter topology used, certain feed-forward parts of the
calculation can be omitted.),. The reason is that outputs you do use are affected by the feedback from the
outputs you don’t use.
The fact that only the outputs which will be used have to be calculated explains why decimating filters are
almost always implemented using FIR filters!
A simple way to think of the amount of computation required to implement a FIR decimator is that it is equal
to the computation required for a non-decimating N-tap filter operating at the output rate.
1. The passband lower frequency is zero; the passband upper frequency is whatever information bandwidth
you want to preserve after decimating. The passband ripple is whatever your application can tolerate.
2. The stopband lower frequency is half the output rate minus the passband upper frequency. The stopband
attenuation is set according to whatever aliasing your application can stand. (Note that there will
always be aliasing in a decimator, but you just reduce it to a negligible value with the decimating filter.)
3. As with any FIR, the number of taps is whatever is required to meet the passband and stopband
specifications.
1. A special case of a decimator is an “ordinary” FIR. When given a value of “1” for M, a decimator should
act exactly like an ordinary FIR. You can then do impulse, step, and sine tests on it just like you can on
an ordinary FIR.
2. If you put in a sine whose frequency is within the decimator’s passband, the output should be distortion-
free (once the filter reaches steady-state), and the frequency of the output should be the same as the
frequency of the input, in terms of absolute Hz.
3. You also can extend the “impulse response” test used for ordinary FIRs by using a “fat impulse”,
consisting of M consecutive “1” samples followed by a series of “0” samples. In that case, if the
decimator has been implemented correctly, the output will not be the literal FIR filter coefficients, but
each output will be the sum of a group of M adjacent coefficients.
4. You can use a step response test. Given a unity-valued step input, the output should be the sum of the
FIR coefficients once the filter has reached steady state.
3.1 Basics
3.1.1 What are “upsampling” and “interpolation”?
“Upsampling” is the process of inserting zero-valued samples between original samples to increase the
sampling rate. (This is called “zero-stuffing”.) Upsampling adds to the original signal undesired spectral
images which are centered on multiples of the original sampling rate.
“Interpolation”, in the DSP sense, is the process of upsampling followed by filtering. (The filtering removes
the undesired spectral images.) As a linear process, the DSP sense of interpolation is somewhat different
from the “math” sense of interpolation, but the result is conceptually similar: to create “in-between” samples
from the original samples. The result is as if you had just originally sampled your signal at the higher rate.
Tip: You can remember that “L” is the symbol for interpolation factor by thinking of “interpo-L-ation”.
3.1.6 OK, you know what I mean…do I always need to do interpolation (upsampling followed by filtering)
or can I get by with doing just upsampling?
Upsampling adds undesired spectral images to the signal at multiples of the original sampling rate, so unless
you remove those by filtering, the upsampled signal is not the same as the original: it’s distorted.
Some applications may be able to tolerate that, for example, if the images get removed later by an analog
filter, but in most applications you will have to remove the undesired images via digital filtering. Therefore,
interpolation is far more common than upsampling alone.
3.2 Multistage
3.2.1 Can I interpolate in multiple stages?
Yes, so long as the interpolation ratio, L, is not a prime number. For example, to interpolate by a factor of
15, you could interpolate by 3 then interpolate by 5. The more factors L has, the more choices you have. For
example, you could interpolate by a factor of 16 using:
• one stage: 16
• two stages: 4 and 4
• three stages: 4, 2, and 2
• four stages: 2, 2, 2, and 2
3.2.3 OK, so how do I figure out the optimum number of stages, and the interpolation ratio at each stage?
There isn’t a simple answer to this one: the answer varies depending on many things. However, here are a
couple of rules of thumb:
3.3 Implementation
3.3.1 How do I implement interpolation?
Interpolation always consists of two processes:
1. Inserting L-1 zero-valued samples between each pair of input samples. This operation is called “zero
stuffing”.
2. Lowpass-filtering the result.
The result (assuming an ideal interpolation filter) is a signal at L times the original sampling rate which has
the same spectrum over the input Nyquist (0 to Fs/2) range, and with zero spectral content above the
original Fs/2.
1. The zero-stuffing creates a higher-rate signal whose spectrum is the same as the original over the
original bandwidth, but has images of the original spectrum centered on multiples of the original
sampling rate.
2. The lowpass filtering eliminates the images.
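A minimal sketch of these two steps for L = 4 (the input, filter order and cut-off are assumptions):
L  = 4;
x  = randn(1, 100);                 % hypothetical input samples
xu = zeros(1, L*length(x));
xu(1:L:end) = x;                    % step 1: zero-stuffing (L-1 zeros between samples)
b  = L * fir1(60, 1/L);             % step 2: lowpass at the original Fs/2; the gain of L
y  = filter(b, 1, xu);              %         compensates for the inserted zeros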
3.3.3 Why do interpolation by zero-stuffing? Doesn’t it make more sense to create the additional samples by
just copying the original samples?
This idea is appealing because, intuitively, this “stairstep” output seems more similar to the original than the
zero-stuffed version. But in this case, intuition leads us down the garden path. This process causes a “zero-order
hold” distortion in the original passband, and still creates undesired images (see below).
Although these effects could be un-done by filtering, it turns out that the zero-stuffing approach is not only
more “correct”, it actually reduces the amount of computation required to implement a FIR interpolation
filter. Therefore, interpolation is always done via zero-stuffing.
The net result is that to interpolate by a factor of L, you calculate L outputs for each input using L different
“sub-filters” derived from your original filter.
Delay line contents (zero-stuffed input, 12-tap filter)              Output
x2  0  0  0  x1  0  0  0  x0  0  0  0        x2·h0 + x1·h4 + x0·h8
0  x2  0  0  0  x1  0  0  0  x0  0  0        x2·h1 + x1·h5 + x0·h9
0  0  x2  0  0  0  x1  0  0  0  x0  0        x2·h2 + x1·h6 + x0·h10
0  0  0  x2  0  0  0  x1  0  0  0  x0        x2·h3 + x1·h7 + x0·h11
• Since the interpolation ratio is four (L=4), there are four “sub-filters”, with coefficient sets
(h0, h4, h8), (h1, h5, h9), (h2, h6, h10) and (h3, h7, h11). These sub-filters are officially called
“polyphase filters”.
• For each input, we calculate L outputs by doing L basic FIR calculations, each using a different set of
coefficients.
• The number of taps per polyphase filter is 3, or, expressed as a formula: Npoly=Ntotal / L.
• The coefficients of each polyphase filter can be determined by taking every Lth coefficient of the original
filter, starting at coefficients 0 through L−1, to calculate corresponding outputs 0 through L−1.
• Alternatively, if you rearranged your coefficients in advance in “scrambled” order like this:
h0, h4, h8, h1, h5, h9, h2, h6, h10, h3, h7, h11
then you could just step through them in order.
• We have hinted here at the fact that N should be a multiple of L. This isn’t absolutely necessary, but if
N isn’t a multiple of L, the added complication of using a non-multiple of L often isn’t worth it. So if
the minimum number of taps that your filter specification requires doesn’t happen to be a multiple of
L, your best bet is usually to just increase N to the next multiple of L. You can do this either by adding
some zero-valued coefficients onto the end of the filter, or by re-designing the filter using the larger N
value.
A simple way to think of the amount of computation required to implement a FIR interpolator is that it is
equal to the computation required for a non-interpolating N-tap filter operating at the input rate. In effect,
you have to calculate L filters using N/L taps each, so that’s N total taps calculated per input.
3.5 Implementation
3.5.1 How do I implement a FIR interpolator?
An interpolating FIR is actually the same as a regular FIR, except that, for each input, you calculate L
outputs per input using L polyphase filters, each having N/L taps. More specifically:
1. Store a sample in the delay line. (The size of the delay line is N/L.)
2. For each of L polyphase coefficient sets, calculate an output as the sum-of-products of the delay line
values and the filter coefficients.
3. Shift the delay line by one to make room for the next input.
Also, just as with ordinary FIRs, circular buffers can be used to eliminate the requirement to literally shift
the data in the delay line.
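A sketch of this polyphase arrangement (the prototype filter, L = 4 and N = 12 are assumed values; 'reshape' splits the N coefficients into the L sub-filters described above, and each sub-filter runs at the input rate):
L = 4; N = 12;
h = L * fir1(N-1, 1/L);            % prototype interpolation filter, N taps
x = randn(1, 50);                  % hypothetical input samples
p = reshape(h, L, N/L);            % row k holds sub-filter k: h(k), h(k+L), h(k+2L)
y = zeros(1, L*length(x));
for k = 1:L
    y(k:L:end) = filter(p(k,:), 1, x);   % interleave the L sub-filter outputs
end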
1. A special case of an interpolator is an ordinary FIR. When given a value of 1 for L, an interpolator
should act exactly like an ordinary FIR. You can then do impulse, step, and sine tests on it just like you
can on an ordinary FIR.
2. If you put in a sine whose frequency is within the interpolator’s passband, the output should be
distortion-free (once the filter reaches steady state), and the frequency of the output should be the same
as the frequency of the input, in terms of absolute Hz.
3. You can use a step response test. Given a unity-valued step input, every group of L outputs should be
the same as the sums of the coefficients of the L individual polyphase filters, once the filter has
reached steady state.
• Correlation
• Goertzel algorithm
• FIR least squares design methods
• Multi-stage implementation of sampling rate conversion
Correlation
The concept of correlation can best be presented with an example. Figure 7-13 shows the key
elements of a radar system. A specially designed antenna transmits a short burst of radio wave
energy in a selected direction. If the propagating wave strikes an object, such as the helicopter in
this illustration, a small fraction of the energy is reflected back toward a radio receiver located
near the transmitter. The transmitted pulse is a specific shape that we have selected, such as the
triangle shown in this example. The received signal will consist of two parts: (1) a shifted and
scaled version of the transmitted pulse, and (2) random noise, resulting from interfering radio
waves, thermal noise in the electronics, etc. Since radio signals travel at a known rate, the speed
of light, the shift between the transmitted and received pulse is a direct measure of the distance to
the object being detected. This is the problem: given a signal of some known shape, what is the
best way to determine where (or if) the signal occurs in another signal? Correlation is the answer.
What if the target signal contains samples with a negative value? Nothing changes. Imagine that
the correlation machine is positioned such that the target signal is perfectly aligned with the
matching waveform in the received signal. As samples from the received signal fall into the
correlation machine, they are multiplied by their matching samples in the target signal.
Neglecting noise, a positive sample will be multiplied by itself, resulting in a positive number.
Likewise, a negative sample will be multiplied by itself, also resulting in a positive number.
If there is noise on the received signal, there will also be noise on the cross-correlation signal. It
is an unavoidable fact that random noise looks a certain amount like any target signal you can
choose. The noise on the cross-correlation signal is simply measuring this similarity. Except for
this noise, the peak generated in the cross-correlation signal is symmetrical between its left and
right. This is true even if the target signal isn't symmetrical. In addition, the width of the peak is
twice the width of the target signal. Remember, the cross-correlation is trying to detect the target
signal, not recreate it. There is no reason to expect that the peak will even look like the target
signal.
Correlation is the optimal technique for detecting a known waveform in random noise. That is,
the peak is higher above the noise using correlation than can be produced by any other linear
system. (To be perfectly correct, it is only optimal for random white noise). Using correlation to
detect a known waveform is frequently called matched filtering.
The correlation machine and convolution machine are identical, except for one small difference.
As discussed in the last chapter, the signal inside of the convolution machine is flipped left-for-right.
This means that the sample numbers 1, 2, 3, … run from right to left. In the
correlation machine this flip doesn't take place, and the samples run in the normal direction.
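A minimal MATLAB sketch of this idea (the triangular pulse, the noise level and the echo position are made-up test data):
target = [0 1 2 3 2 1 0];                 % known transmitted pulse shape
rx = 0.3*randn(1, 200);                   % received signal: noise ...
rx(101:107) = rx(101:107) + target;       % ... plus a shifted copy of the pulse
c = xcorr(rx, target);                    % cross-correlation ("correlation machine")
[cmax, idx] = max(c);
shift = idx - length(rx);                 % lag of the peak, i.e. position of the pulse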
The Goertzel algorithm is a digital signal processing (DSP) technique for identifying frequency
components of a signal, published by Gerald Goertzel in 1958. While the general Fast Fourier
transform (FFT) algorithm computes evenly across the bandwidth of the incoming signal, the
Goertzel algorithm looks at specific, predetermined frequencies.
A practical application of this algorithm is recognition of the DTMF tones produced by the
buttons pushed on a telephone keypad.
It can also be used "in reverse" as a sinusoid synthesis function, which requires only 1
multiplication and 1 subtraction per sample.
Explanation of algorithm
The Goertzel algorithm computes a sequence, s(n), given an input sequence, x(n):
    s(n) = x(n) + 2·cos(2πω)·s(n−1) − s(n−2),   with s(−1) = s(−2) = 0,
where ω is the (normalised) frequency of interest, in cycles per sample. Applying the additional,
FIR, transform
    y(n) = s(n) − e^{−2πiω}·s(n−1)
gives y(n) equal to the (n + 1)-sample DFT of x, evaluated for ω and
multiplied by the scale factor e^{+2πiωn}.
Note that applying the additional transform Y(z)/S(z) only requires the
last two samples of the s sequence. Consequently, upon processing N
samples x(0)...x(N − 1), the last two samples from the s sequence can be
used to compute the value of a DFT bin, which corresponds to the chosen
frequency ω, as
    X(ω) = y(N−1)·e^{−2πiω(N−1)} = ( s(N−1) − e^{−2πiω}·s(N−2) )·e^{−2πiω(N−1)}.
For the special case often found when computing DFT bins, where
ωN = k for some integer k, this simplifies to
    X(ω) = e^{+2πiω}·s(N−1) − s(N−2).
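A sketch of the recurrence applied to DTMF detection (the 8 kHz rate, block length N = 205 and the 697 Hz test tone are typical assumed values):
Fs = 8000; N = 205;
k  = round(0.5 + N*697/Fs);               % nearest DFT bin to the 697 Hz row tone
n  = 0:N-1;
x  = sin(2*pi*697*n/Fs);                  % hypothetical input block
coeff = 2*cos(2*pi*k/N);
s1 = 0; s2 = 0;
for m = 1:N
    s0 = x(m) + coeff*s1 - s2;            % s(n) = x(n) + 2cos(2*pi*w)*s(n-1) - s(n-2)
    s2 = s1; s1 = s0;
end
power = s1^2 + s2^2 - coeff*s1*s2;        % |X(k)|^2 for the chosen bin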
FIR least squares design
Let the FIR filter length be L+1 samples, with L even, and suppose we'll initially design it to
be centered about the time origin. Then the frequency response is given on our frequency grid Ωk
by
    H(e^{jΩk}) = Σ_{n=−L/2}^{L/2} h(n)·e^{−jΩk·n}.
Enforcing even symmetry in the impulse response, i.e. h(n) = h(−n), gives a zero-phase (purely real)
frequency response, which can be written in matrix form as
    A·x = b
where x is the vector of unknown filter coefficients, b is the vector of desired frequency-response
samples on the grid, and A is the matrix of the corresponding cosine terms cos(Ωk·n). (Note that
Remez exchange algorithms are also based on this formulation internally.)
In practice, the least-squares solution is found by minimizing the sum of squared errors
    J(x) = ||b − A·x||² = Σ_k [ b(k) − (A·x)(k) ]².
This is quadratic in x, hence it has a global minimum which we can find by taking the derivative,
setting it to zero, and solving for x. Doing this yields the famous normal equations
    A^T·A·x = A^T·b
whose solution is given by
    x = (A^T·A)^{-1}·A^T·b = pinv(A)·b.
The matrix pinv(A) = (A^T·A)^{-1}·A^T is the pseudo-inverse of A.
Thus, the desired vector b is the vector sum of its best least-squares approximation A·x
plus an orthogonal error e = b − A·x. Figure 4.28 suggests that the error vector e is orthogonal to
the column space of the matrix A, hence it must be orthogonal to each column in A:
    A^T·(b − A·x) = 0.
Note that the pseudo-inverse projects the vector b onto the column space of A.
(Note: To obtain the best numerical algorithms for the least-squares solution in Matlab, it is
usually better to use ``x = A \ b'' rather than explicitly computing the pseudo-inverse as in ``x
= pinv(A) * b''.)
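A minimal sketch of such a design (the grid density, the cut-off π/3 and the order L = 20 are assumed), using the backslash solution recommended above:
L  = 20;                           % filter order (L even), length L+1
wk = linspace(0, pi, 400)';        % dense frequency grid
b  = double(wk <= pi/3);           % desired amplitude: ideal lowpass, cut-off pi/3
n  = 0:L/2;
A  = cos(wk*n);                    % zero-phase basis functions cos(wk*n)
A(:,2:end) = 2*A(:,2:end);         % h(n) = h(-n): the n and -n terms combine
c  = A \ b;                        % least-squares solution; c = [h(0) h(1) ... h(L/2)]
h  = [c(end:-1:2); c(1); c(2:end)]';   % rebuild the symmetric impulse response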
Multi-stage implementation of sampling rate conversion
The decimator and interpolator discussed so far are of a single-stage structure. When
large changes in sampling rate are required, multiple stages of sample rate conversion
are found to give a more efficient realization. The decimation in Figure 3.23 can be realized in
two stages if the decimation factor D can be expressed as a product of two integers, D1 and D2.
Referring to Figure 3.24, in the first stage the signal x(n) is decimated by a factor of D1; the
output, v(p), is then decimated by a factor of D2 in the second stage, so that the overall decimation
factor is D = D1·D2. The filters H1(z) and H2(z) are so designed that the aliasing in the band of
interest is below a prescribed level and that the overall passband and stopband tolerances are
met. This multi-stage sampling rate conversion system offers less computation and more
flexibility in filter design. An example is given below to illustrate the idea of multi-stage
decimation.
We have a discrete time signal with a sampling rate of 90 kHz. The signal has the
desired information in the frequency band from 0 to 450 Hz (passband), and the band from
450 to 500 Hz is the transition band. The signal is to be decimated by a factor of ninety.
The required tolerances are a passband ripple of 0.002 and a stopband ripple of 0.001.
According to the formula by Kaiser, the approximate length of an FIR filter is given by
    N ≈ [ −20·log10(√(δp·δs)) − 13 ] / ( 14.6·Δf )                (3.37)
where δp is the peak passband ripple (linear), δs is the peak stopband ripple (linear) and Δf is the
transition width normalised to the sampling rate; here δp = 0.002, δs = 0.001 and Δf = (500 − 450)/90000.
From Equation 3.37, the lowpass FIR filter H(z) has a length of N ≈ 5424. Since only one out of every
ninety filter outputs is actually used, the computation rate is based on the output sampling rate of
1 kHz, so the number of multiplications per second, Msec, needed for this single-stage decimator is
about 5424 × 1000 ≈ 5.4 × 10^6.
Let us now consider the two-stage implementation of the decimation process as shown in
Figure 3.26.
Due to the cascade decomposition, each of the two filters, H1(z) and H2(z), must have
a passband ripple specification (in linear terms) of half that specified for the single-stage filter, H(z).
The stopband ripple specifications for these two filters can be the same as that of H(z),
since the cascade connection will only reduce the stopband ripple.
Stage One
The first stage will decimate the input signal x(n) by a factor of forty-five. The filter H1(z) has its
passband edge at 450 Hz, while its stopband edge need not be at 500 Hz but can be relaxed to a
considerably higher frequency. The reason for allowing this choice of the stopband edge is that, after
decimation by a factor of forty-five, the residual energy of the signal in the band from 1000 to 2000 Hz
will be aliased back to the band from 0 to 1000 Hz. Due to the attenuation in the stopband, the
energy of the signal in the band from 1800 to 2000 Hz is very small compared to that in
1000 to 1800 Hz, so the amount of aliasing in the desired band of interest (0 to 450 Hz) is kept
below the prescribed level.
According to Equation 3.37, the approximate length of the FIR filter H1(z) is N1 = 276. With an
output rate of 90 kHz / 45 = 2 kHz, the number of multiplications per second for the first stage is
about 276 × 2000 = 552,000.
Figure 3.28 shows the characteristics of H2(z). This stage will perform a decimation of factor two
on the output signal of the first stage, so the total decimation of x(n) is by a factor of ninety, as
required. For the second stage, the length of the filter, as calculated from Equation 3.37, is N2 ≈ 129;
with an output rate of 1 kHz, the second stage therefore requires about 129,000 multiplications per
second.
The total number of multiplications per second required for the two-stage implementation is
therefore about 552,000 + 129,000 ≈ 681,000. So, the two-stage implementation requires only about
one-eighth of the operations required of the single-stage implementation.
Stage One
In this stage, decimation by fifteen is performed on the input signal x(n). The
characteristics of the LPF, H1(z), are shown in Figure 3.30.
As in the two-stage case, the choice of stopband edge frequency can be extended to the
point for which negligible aliasing occurs in the passband (band of interest).
The approximate length of the filter as given by Equation 3.37 is N1 = 60. With an output rate of
90 kHz / 15 = 6 kHz, this stage requires about 60 × 6000 = 360,000 multiplications per second.
Stage Two
In this stage, a decimation by a factor of three is done. The specifications of the LPF for this stage
are chosen in the same way; as before, the stopband edge frequency can be stretched out to 1800 Hz.
The length of the filter required for this stage is N2 = 20 and, with an output rate of 2 kHz, the
number of multiplications per second is about 20 × 2000 = 40,000.
Stage Three
The third stage performs a decimation of factor two on the output of the second stage.
From this example, we can see that a significant saving in computation as well as in storage can
be achieved by a multi-stage decimator and interpolator design. These savings depend on the
optimum design of the number of stages and the choice of decimation factor for the individual
stages.
The examples illustrate the many different combinations and ordering possible. One approach is
to determine the sets of I and D factors that satisfy the filtering requirements and then estimate the
storage and computational costs for each set. The lowest cost solution is then selected.
Walsh transform:
1D Walsh transform:
2D Walsh transform:
Hadamard transform:
1. Comparison of received signal with the reference signal (not just by cross correlation but
by using time shift parameter).
3. To differentiate between the signals based on geographical position of the signal source.
9. Determine the system function H(z) of the lowest order Chebyshev digital filter that meets the
following specifications:
b. At least 50 dB attenuation in the stop band 0.35π ≤ |ω| ≤ π, using the Bilinear transformation
technique.
10. Design a Butterworth analog low-pass filter that is required to meet the following specifications:
1.a) Define an LTI system and show that the output of an LTI system is given by the convolution
of the input sequence and the impulse response.
b) Prove that the system defined by the following difference equation is an LTI system:
y(n) = x(n+1) − 3x(n) + x(n−1); n ≥ 0. [8+8]
2.a) Define DFT and IDFT. State any Four properties of DFT.
b) Find the 8-point DFT of the given time domain sequence x(n) = {1, 2, 3, 4}. [8+8]
3.a) Derive the expressions for computing the FFT using DIT algorithm
and hence draw the standard butterfly structure.
b) Compare the computational complexity of FFT and DFT. [8+8]
4. Discuss and draw various IIR realization structures (Direct form-I, Direct form-II, Parallel and
Cascade forms) for the difference equation
y(n) = −(3/8)y(n−1) + (3/32)y(n−2) + (1/64)y(n−3) + x(n) + 3x(n−1) + 2x(n−2).
5.a) Compare Butterworth and Chebyshev approximation techniques.
b) Design a Digital Butterworth LPF using the Bilinear transformation technique for the following
specifications:
0.707 ≤ |H(ω)| ≤ 1 ;  0 ≤ ω ≤ 0.2π
|H(ω)| ≤ 0.08 ;  0.4π ≤ ω ≤ π  [8+8]
6.a) Derive the conditions to achieve linear phase characteristics of FIR filters.
b) Design an FIR digital low pass filter using a Hanning window whose cut-off frequency is
2 rad/s and length of window N = 9. [8+8]
3.a) Develop the DIT-FFT algorithm and draw signal flow graphs for decomposing the DFT for
N = 6 by considering the factors N = 6 = 2·3.
b) Bring out the relationship between the DFT and the Z-transform. [8+8]
whose upper and lower cut-off frequencies are 1 & 2 rad/s and length of window N = 9. [8+8]
2. (a) Design a high pass filter using a Hamming window with a cut-off frequency of
1.2 radians/second and N = 9.
(b) Compare FIR and IIR filters. [10+6]
3. (a) For each of the following systems, determine whether or not the system is
i. stable   ii. causal   iii. linear   iv. shift-invariant.
A. T[x(n)] = x(n − n0)   B. T[x(n)] = e^{x(n)}   C. T[x(n)] = a·x(n) + b.
Justify your answer.
(b) A system is described by the difference equation y(n) − y(n−1) − y(n−2) = x(n−1). Assuming
that the system is initially relaxed, determine its unit sample response h(n). [8+8]
III B.TECH - II SEMESTER EXAMINATIONS, APRIL/MAY, 2011
DIGITAL SIGNAL PROCESSING
(COMMON TO EEE, ECE, EIE, ETM, ICE)
Time: 3 hours                                          Max. Marks: 80
Answer any FIVE questions
All Questions Carry Equal Marks
3. (a) Describe how targets can be decided using RADAR
(b) Give an expression for the following parameters relative to
RADAR
i. Beam width
ii. Maximum unambiguous range
(c) Discuss signal processing in a RADAR system. [6+6+4]
(a) A. T[x(n)] = x(n − n0)   B. T[x(n)] = e^{x(n)}   C. T[x(n)] = a·x(n) + b.
Justify your answer.
(b) A system is described by the difference equation y(n) − y(n−1) − y(n−2) = x(n−1). Assuming
that the system is initially relaxed, determine its unit sample response h(n). [8+8]
7. (a) Design a high pass filter using a Hamming window with a cut-off frequency of
1.2 radians/second and N = 9.
(b) Compare FIR and IIR filters. [10+6]
1. (a) Design a high pass filter using a Hamming window with a cut-off frequency of
1.2 radians/second and N = 9.
(b) Compare FIR and IIR filters. [10+6]
(b) Give an expression for the following parameters relative to RADAR
i. Beam width
ii. Maximum unambiguous range
(c) Discuss signal processing in a RADAR system. [6+6+4]
4. (a) Compute the Discrete Fourier transform of the following finite length sequences considered
to be of length N:
i. x(n) = δ(n + n0), where 0 < n0 < N
ii. x(n) = a^n, where 0 < a < 1.
(b) If x(n) denotes a finite length sequence of length N, show that x((−n))N = x((N − n))N. [8+8]
5. (a) For each of the following systems, determine whether or not the system is
i. stable   ii. causal   iii. linear   iv. shift-invariant. [8+8]
6. (a) Discuss the frequency-domain representation of discrete-time systems and signals by
deriving the necessary relation.
(b) Draw the frequency response of an LSI system with impulse response
h(n) = a^n u(−n), (|a| < 1). [8+8]
8. (a) Discuss the impulse invariance method of deriving an IIR digital filter from the corresponding
analog filter.
(b) Use the Bilinear transformation to convert the analog filter with system function
H(s) = (s + 0.1) / ((s + 0.1)² + 9) into a digital IIR filter. Select T = 0.1 and compare the locations
of the zeros in H(z) with the locations of the zeros obtained by applying the impulse invariance
method in the conversion of H(s). [8+8]
1. (a) Compute the Discrete Fourier transform of the following finite length sequences considered
to be of length N:
i. x(n) = δ(n + n0), where 0 < n0 < N
ii. x(n) = a^n, where 0 < a < 1.
(b) If x(n) denotes a finite length sequence of length N, show that x((−n))N = x((N − n))N. [8+8]
4. (a) Design a high pass filter using a Hamming window with a cut-off frequency of
1.2 radians/second and N = 9.
(b) Compare FIR and IIR filters. [10+6]
5. (a) An LTI system is described by the equation y(n) = x(n) + 0.81x(n−1) − 0.81x(n−2) − 0.45y(n−2).
Determine the transfer function of the system. Sketch the poles and zeros on the Z-plane.
(b) Define stable and unstable systems. Test the condition for stability of the first-order IIR filter
governed by the equation y(n) = x(n) + b·y(n−1). [8+8]
7. (a) Discuss the impulse invariance method of deriving an IIR digital filter from the corresponding
analog filter.
(b) Use the Bilinear transformation to convert the analog filter with system function
H(s) = (s + 0.1) / ((s + 0.1)² + 9) into a digital IIR filter. Select T = 0.1 and compare the locations
of the zeros in H(z) with the locations of the zeros obtained by applying the impulse invariance
method in the conversion of H(s). [8+8]
2) Design an FIR digital high pass filter using a Hamming window whose cutoff frequency is
1.2 rad/s and length of window N = 5. Compare the same using a rectangular window. Draw the
frequency response curves for both cases. (5 M)
3a) Determine the order and poles of a type-I low pass Chebyshev filter that has a 1-dB ripple
in the passband, a cutoff frequency Ωp = 1000π, a stopband frequency of 2000π, and an
attenuation of 40 dB or more for Ω ≥ Ωs. (3 M)
4a) Explain how the aliasing effect can be avoided while performing the decimation process on a
signal by a factor of D. (3 M)
22. References, Journals, Websites and E-Links, if required.
References
Text Books
WEBSITES
1.www.google.com
2.www.dspguru.com
3.www.nptel.ac.in
4.www.nptelonlinecourses.iitm.ac.in
To be attached
To be attached