Advanced Series in Electrical and Computer Engineering — Vol. 17

PRACTICAL SIGNAL
PROCESSING AND
ITS APPLICATIONS
With Solved Homework Problems

Sharad R Laxpati • Vladimir Goncharoff

World Scientific

ADVANCED SERIES IN ELECTRICAL AND COMPUTER ENGINEERING
Editors: W.-K. Chen (University of Illinois, Chicago, USA)
         Y.-F. Huang (University of Notre Dame, USA)
The purpose of this series is to publish work of high quality by authors who are experts
in their respective areas of electrical and computer engineering. Each volume contains the
state-of-the-art coverage of a particular area, with emphasis throughout on practical
applications. Sufficient introductory material will ensure that a graduate student or a professional
engineer with some basic knowledge can benefit from it.
Published:
Vol. 20: Computational Methods with Applications in Bioinformatics Analysis
edited by Jeffrey J. P. Tsai and Ka-Lok Ng
Vol. 18: Broadband Matching: Theory and Implementations (Third Edition)
by Wai-Kai Chen
Vol. 17: Practical Signal Processing and Its Applications:
With Solved Homework Problems
by Sharad R Laxpati and Vladimir Goncharoff
Vol. 16: Design Techniques for Integrated CMOS Class-D Audio Amplifiers
by Adrian I. Colli-Menchi, Miguel A. Rojas-Gonzalez and
Edgar Sanchez-Sinencio
Vol. 15: Active Network Analysis: Feedback Amplifier Theory (Second Edition)
by Wai-Kai Chen (University of Illinois, Chicago, USA)
Vol. 14: Linear Parameter-Varying System Identification:
New Developments and Trends
by Paulo Lopes dos Santos, Teresa Paula Azevedo Perdicoilis,
Carlo Novara, Jose A. Ramos and Daniel E. Rivera
Vol. 13: Semiconductor Manufacturing Technology
by C. S. Yoo
Vol. 12: Protocol Conformance Testing Using Unique Input/Output Sequences
by X. Sun, C. Feng, Y. Shen and F. Lombardi
Vol. 11: Systems and Control: An Introduction to Linear, Sampled and
Nonlinear Systems
by T. Dougherty
Vol. 10: Introduction to High Power Pulse Technology
by S. T. Pai and Q. Zhang
For the complete list of titles in this series, please visit
http://www.worldscientific.com/series/asece

Advanced Series in Electrical and Computer Engineering — Vol. 17

PRACTICAL SIGNAL
PROCESSING AND
ITS APPLICATIONS
With Solved Homework Problems

Sharad R Laxpati
Vladimir Goncharoff
University of Illinois at Chicago, USA

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI

Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
Library of Congress Cataloging-in-Publication Data
Names: Laxpati, S. R., author. | Goncharoff, Vladimir, author.
Title: Practical signal processing and its applications : with solved homework problems /
by Sharad R. Laxpati (University of Illinois at Chicago, USA),
Vladimir Goncharoff (University of Illinois at Chicago, USA).
Description: Hackensack, New Jersey : World Scientific, [2017] |
Series: Advanced series in electrical and computer engineering ; volume 17
Identifiers: LCCN 2017036466 | ISBN 9789813224025 (hc : alk. paper)
Subjects: LCSH: Signal processing--Textbooks.
Classification: LCC TK5102.9 .L39 2017 | DDC 621.382/2--dc23
LC record available at https://lccn.loc.gov/2017036466
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
Copyright © 2018 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy
is not required from the publisher.
For any available supplementary material, please visit
http://www.worldscientific.com/worldscibooks/10.1142/10551#t=suppl
Desk Editor: Suraj Kumar
Typeset by Stallion Press
Email: [email protected]
Printed in Singapore
Dedication
We dedicate this work to our spouses, Maureen Laxpati and
Marta Goncharoff, in sincere appreciation of their love and support.

Preface
The purpose of this book is two-fold: to emphasize the similarities in the
mathematics of continuous and discrete signal processing, and, as the title
suggests, to include practical applications of the theory presented in each
chapter. It is an enlargement of the notes we have developed over four
decades while teaching the course ‘Discrete and Continuous Signals &
Systems’ at the University of Illinois at Chicago (UIC). The textbook is
intended primarily for sophomore and junior-level students in electrical
and computer engineering, but will also be useful to engineering
professionals for its background theory and practical applications.
Students in related majors at UIC may take this course, generally during
their junior year, as a technical elective. Prerequisites are courses on
differential equations and electrical circuits, but most students in other
majors acquire sufficient background in introductory mathematics and
physics courses.
There is a plethora of texts on signal processing; some of them cover
mostly analog signals, some mostly digital signals, and others include
both digital and analog signals within each chapter or in separate
chapters. We have found that we can give students a better understanding
in less time by presenting analog and digital signal processing concepts
in parallel (students like this approach). The mathematics of digital
signal processing is not much different from the mathematics of analog
signal processing: both require an understanding of signal transforms, the
frequency domain, complex number algebra, and other useful operations.
Thus, we wrote most chapters in this textbook to emphasize parallelism
between analog and digital signal processing theories: there is a
topic-by-topic, equation-by-equation match between digital/analog chapter
pairs {2, 3}, {4, 5} and {9, 10}, and a somewhat looser
correspondence between chapter pairs {7, 8} and {11, 12}. We hope that
because of this textbook organization, even when reading only the analog
or only the digital chapters of the textbook, readers will be able to
quickly locate and understand the corresponding parallel-running
descriptions in the other chapters. However, this textbook is designed to
teach students all the material in Chapters 1-10 during a one-semester
course.
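The "complex number algebra" that the preface says both domains share can be illustrated with the phasor addition shown later in Fig. 1.3, where 2 cos(100t + 45°) + 3 sin(100t − 90°) = 2.1 cos(100t + 138°). The book's worked examples use MATLAB®; the following quick numerical check is our own sketch in Python, not material from the text:

```python
import cmath
import math

# Check the identity from Fig. 1.3:
#   2 cos(100t + 45°) + 3 sin(100t − 90°) = 2.1 cos(100t + 138°)
# Each cosine term is replaced by its phasor A·e^(jθ).  Since
# 3 sin(100t − 90°) = 3 cos(100t − 180°), its phasor is 3∠−180°.
p = 2 * cmath.exp(1j * math.radians(45)) + 3 * cmath.exp(1j * math.radians(-180))

A = abs(p)                            # amplitude of the summed cosine
theta = math.degrees(cmath.phase(p))  # phase of the summed cosine, degrees

print(round(A, 2), round(theta, 1))   # prints: 2.12 138.3
```

Adding the phasors as complex numbers and converting the result back to magnitude/angle form reproduces the amplitude 2.1 and phase 138° quoted in the figure caption.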
Sampling theory (Ch. 6) is presented at an early stage to explain the
close relationship between continuous- and discrete-time domains. The
Fourier series is introduced as the special case of Fourier transform
operating on a periodic waveform, and the DFT is introduced as the
special case of discrete-time Fourier transform operating on a periodic
sequence; this is a more satisfactory approach in our opinion. Chapters
{11, 12} provide useful applications of Z and Laplace transform
analysis; as time permits, the instructor may include these when covering
Chapters {9, 10}.
To maintain an uninterrupted flow of concepts, we avoid laborious
derivations without sacrificing mathematical rigor. Readers who desire
mathematical details will find them in the footnotes and in cited
reference texts. For those who wish to immediately apply what they have
learned, plenty of MATLAB® examples are given throughout. And, of
course, students will appreciate the Appendix with its 100 pages of fully-
worked-out homework problems.
This textbook provides a fresh and different approach to a first
course in signal processing at the undergraduate level. We believe its
parallel continuous-time/discrete-time approach will help students
understand and apply signal processing concepts in their further studies.
Overview of material covered:

• Chapter 1: Overview of the goals, topics and tools of signal
  processing.
• Chapters 2, 3: Time-domain signals and their building blocks,
  manipulation of signals with various time-domain operations,
  using these tools to create new signals.
• Chapters 4, 5: Fourier transform to the frequency domain and
  back to the time domain, operations in one domain and their effect in
  the other, justification for using the frequency domain.
• Chapter 6: Relationship between discrete-time and continuous-time
  signals in both time and frequency domains; sampling and
  reconstruction of signals.
• Chapters 7, 8: Time and frequency analysis of linear systems,
  ideal and practical filtering.
• Chapters 9, 10: Generalization of the Fourier transform to the
  Z/Laplace transform, and justification for doing that.
• Chapters 11, 12: Useful applications of Z/S-domain signal and
  system analysis.
• Appendix: Solved sample problems for material in each chapter.
The flowchart in Fig. 1.4, p. 13, shows this textbook's organization
of material, which makes it possible to follow either discrete- or continuous-time
signal processing, or to follow each chapter in numerical sequence.
We recommend the following schedule for teaching a 15-week semester-long
university ECE course on introductory signal processing:
                  Chapter     # lectures
                  1           1     Introductory lecture
Continuous-time   3           4
Continuous-time   5           6
Discrete-time     2           2
Discrete-time     4           4
both              6           4     Expand if necessary
Continuous-time   8           5
Discrete-time     7           5
Continuous-time   10 (& 12)   5     Examples from Ch. 12 as needed
Discrete-time     9 (& 11)    5     Examples from Ch. 11 as needed
                              41 lectures total
We are indebted to our UIC faculty colleagues for their comments
about and use of the manuscript in the classroom, and to the publisher’s
textbook reviewers. We also thank our many students who, over the
years, have made teaching such a rewarding profession for us, with
special thanks to those students who have offered their honest comments
for improving this textbook’s previous editions.
Sharad R. Laxpati and Vladimir Goncharoff

Contents
Dedication
Preface
List of Tables
List of Figures

Chapter 1: Introduction to Signal Processing
1.1 Analog and Digital Signal Processing
1.2 Signals and their Usefulness
    1.2.1 Radio communication
    1.2.2 Data storage
    1.2.3 Naturally-occurring signals
    1.2.4 Other signals
1.3 Applications of Signal Processing
1.4 Signal Processing: Practical Implementation
1.5 Basic Signal Characteristics
1.6 Complex Numbers
    1.6.1 Complex number math refresher
    1.6.2 Complex number operations in MATLAB®
    1.6.3 Practical applications of complex numbers
1.7 Textbook Organization
1.8 Chapter Summary and Comments
1.9 Homework Problems
Chapter 2: Discrete-Time Signals and Operations
2.1 Theory
    2.1.1 Introduction
    2.1.2 Basic discrete-time signals
        2.1.2.1 Impulse function
        2.1.2.2 Periodic impulse train
        2.1.2.3 Sinusoid
        2.1.2.4 Complex exponential
        2.1.2.5 Unit step function
        2.1.2.6 Signum function
        2.1.2.7 Ramp function
        2.1.2.8 Rectangular pulse
        2.1.2.9 Triangular pulse
        2.1.2.10 Exponential decay
        2.1.2.11 Sinc function
    2.1.3 Signal properties
        2.1.3.1 Energy and power sequences
        2.1.3.2 Summable sequences
        2.1.3.3 Periodic sequences
        2.1.3.4 Sum of periodic sequences
        2.1.3.5 Even and odd sequences
        2.1.3.6 Right-sided and left-sided sequences
        2.1.3.7 Causal, anticausal sequences
        2.1.3.8 Finite-length and infinite-length sequences
    2.1.4 Signal operations
        2.1.4.1 Time shift
        2.1.4.2 Time reversal
        2.1.4.3 Time scaling
        2.1.4.4 Cumulative sum and backward difference
        2.1.4.5 Conjugate, magnitude and phase
        2.1.4.6 Equivalent signal expressions
    2.1.5 Discrete convolution
        2.1.5.1 Convolution with an impulse
        2.1.5.2 Convolution of two pulses
    2.1.6 Discrete-time cross-correlation
2.2 Practical Applications
    2.2.1 Discrete convolution to calculate the coefficient values of a polynomial product
    2.2.2 Synthesizing a periodic signal using convolution
    2.2.3 Normalized cross-correlation
    2.2.4 Waveform smoothing by convolving with a pulse
    2.2.5 Discrete convolution to find the Binomial distribution
2.3 Useful MATLAB® Code
    2.3.1 Plotting a sequence
    2.3.2 Calculating power of a periodic sequence
    2.3.3 Discrete convolution
    2.3.4 Moving-average smoothing of a finite-length sequence
    2.3.5 Calculating energy of a finite-length sequence
    2.3.6 Calculating the short-time energy of a finite-length sequence
    2.3.7 Cumulative sum and backward difference operations
    2.3.8 Calculating cross-correlation via convolution
2.4 Chapter Summary and Comments
2.5 Homework Problems
Chapter 3: Continuous-Time Signals and Operations
3.1 Theory
    3.1.1 Introduction
    3.1.2 Basic continuous-time signals
        3.1.2.1 Impulse function
        3.1.2.2 Periodic impulse train
        3.1.2.3 Sinusoid
        3.1.2.4 Complex exponential
        3.1.2.5 Unit step function
        3.1.2.6 Signum function
        3.1.2.7 Ramp function
        3.1.2.8 Rectangular pulse
        3.1.2.9 Triangular pulse
        3.1.2.10 Exponential decay
        3.1.2.11 Sinc function
    3.1.3 Signal properties
        3.1.3.1 Energy and power signals
        3.1.3.2 Integrable signals
        3.1.3.3 Periodic signals
        3.1.3.4 Sum of periodic signals
        3.1.3.5 Even and odd signals
        3.1.3.6 Right-sided and left-sided signals
        3.1.3.7 Causal, anticausal signals
        3.1.3.8 Finite-length and infinite-length signals
    3.1.4 Continuous-time signal operations
        3.1.4.1 Time delay
        3.1.4.2 Time reversal
        3.1.4.3 Time scaling
        3.1.4.4 Cumulative integral and time differential
        3.1.4.5 Conjugate, magnitude and phase
        3.1.4.6 Equivalent signal expressions
    3.1.5 Convolution
        3.1.5.1 Convolution with an impulse
        3.1.5.2 Convolution of two pulses
    3.1.6 Cross-correlation
3.2 Practical Applications
    3.2.1 Synthesizing a periodic signal using convolution
    3.2.2 Waveform smoothing by convolving with a pulse
    3.2.3 Practical analog cross-correlation
    3.2.4 Normalized cross-correlation as a measure of similarity
    3.2.5 Application of convolution to probability theory
3.3 Useful MATLAB® Code
    3.3.1 Plotting basic signals
    3.3.2 Estimating continuous-time convolution
    3.3.3 Estimating energy and power of a signal
    3.3.4 Detecting pulses using normalized correlation
    3.3.5 Plotting estimated probability density functions
3.4 Chapter Summary and Comments
3.5 Homework Problems
Chapter 4: Frequency Analysis of Discrete-Time Signals
4.1 Theory
    4.1.1 Discrete-Time Fourier Transform (DTFT)
    4.1.2 Fourier transforms of basic signals
        4.1.2.1 Exponentially decaying signal
        4.1.2.2 Constant value
        4.1.2.3 Impulse function
        4.1.2.4 Delayed impulse function
        4.1.2.5 Signum function
        4.1.2.6 Unit step function
        4.1.2.7 Complex exponential function
        4.1.2.8 Sinusoid
        4.1.2.9 Rectangular pulse function
    4.1.3 Fourier transform properties
        4.1.3.1 Linearity
        4.1.3.2 Time shifting
        4.1.3.3 Time/frequency duality
        4.1.3.4 Convolution
        4.1.3.5 Modulation
        4.1.3.6 Frequency shifting
        4.1.3.7 Time scaling
        4.1.3.8 Parseval's Theorem
    4.1.4 Graphical representation of the Fourier transform
        4.1.4.1 Rectangular coordinates
        4.1.4.2 Polar coordinates
        4.1.4.3 Graphing the amplitude of F(e^jω)
        4.1.4.4 Logarithmic scales and Bode plots
    4.1.5 Fourier transform of periodic sequences
        4.1.5.1 Comb function
        4.1.5.2 Periodic signals as convolution with a comb function
        4.1.5.3 Discrete Fourier Transform (DFT)
        4.1.5.4 Time-frequency duality of the DFT
        4.1.5.5 Fast Fourier Transform (FFT)
        4.1.5.6 Parseval's Theorem
    4.1.6 Summary of Fourier transformations for discrete-time signals
4.2 Practical Applications
    4.2.1 Spectral analysis using the FFT
        4.2.1.1 Frequency resolution
        4.2.1.2 Periodic sequences
        4.2.1.3 Finite-length sequences
    4.2.2 Convolution using the FFT
    4.2.3 Autocorrelation using the FFT
    4.2.4 Discrete Cosine Transform (DCT)
4.3 Useful MATLAB® Code
    4.3.1 Plotting the spectrum of a discrete-time signal
4.4 Chapter Summary and Comments
4.5 Homework Problems
Chapter 5: Frequency Analysis of Continuous-Time Signals
5.1 Theory
    5.1.1 Continuous-Time Fourier Transform (CTFT)
    5.1.2 Fourier transforms of basic signals
        5.1.2.1 Exponentially decaying signal
        5.1.2.2 Constant value
        5.1.2.3 Impulse function
        5.1.2.4 Delayed impulse function
        5.1.2.5 Signum function
        5.1.2.6 Unit step function
        5.1.2.7 Complex exponential function
        5.1.2.8 Sinusoid
        5.1.2.9 Rectangular pulse function
    5.1.3 Fourier transform properties
        5.1.3.1 Linearity
        5.1.3.2 Time shifting
        5.1.3.3 Time/frequency duality
        5.1.3.4 Convolution
        5.1.3.5 Modulation
        5.1.3.6 Frequency shifting
        5.1.3.7 Time scaling
        5.1.3.8 Parseval's Theorem
    5.1.4 Graphical representation of the Fourier transform
        5.1.4.1 Rectangular coordinates
        5.1.4.2 Polar coordinates
        5.1.4.3 Graphing the amplitude of F(ω)
        5.1.4.4 Logarithmic scales and Bode plots
    5.1.5 Fourier transform of periodic signals
        5.1.5.1 Comb function
        5.1.5.2 Periodic signals as convolution with a comb function
        5.1.5.3 Exponential Fourier Series
        5.1.5.4 Trigonometric Fourier Series
        5.1.5.5 Compact Trigonometric Fourier Series
        5.1.5.6 Parseval's Theorem
    5.1.6 Summary of Fourier transformations for continuous-time signals
5.2 Practical Applications
    5.2.1 Frequency scale of a piano keyboard
    5.2.2 Frequency-domain loudspeaker measurement
    5.2.3 Effects of various time-domain operations on frequency magnitude and phase
    5.2.4 Communication by frequency shifting
    5.2.5 Spectral analysis using time windowing
    5.2.6 Representing an analog signal with frequency-domain samples
5.3 Useful MATLAB® Code
5.4 Chapter Summary and Comments
5.5 Homework Problems
Chapter 6: Sampling Theory and Practice
6.1 Theory
    6.1.1 Sampling a continuous-time signal
    6.1.2 Relation between CTFT and DTFT based on sampling
    6.1.3 Recovering a continuous-time signal from its samples
        6.1.3.1 Filtering basics
        6.1.3.2 Frequency domain perspective
        6.1.3.3 Time domain perspective
    6.1.4 Oversampling to simplify reconstruction filtering
    6.1.5 Eliminating aliasing distortion
        6.1.5.1 Anti-alias post-filtering
        6.1.5.2 Anti-alias pre-filtering
    6.1.6 Sampling bandpass signals
    6.1.7 Approximate reconstruction of a continuous-time signal from its samples
        6.1.7.1 Zero-order hold method
        6.1.7.2 First-order hold method
    6.1.8 Digital-to-analog conversion
    6.1.9 Analog-to-digital conversion
    6.1.10 Amplitude quantization
        6.1.10.1 Definition
        6.1.10.2 Why quantize?
        6.1.10.3 Signal to quantization noise power ratio (SQNR)
        6.1.10.4 Non-uniform quantization
6.2 Practical Applications
    6.2.1 Practical digital-to-analog conversion
    6.2.2 Practical analog-to-digital conversion
        6.2.2.1 Successive approximation ADC
        6.2.2.2 Logarithmic successive approximation ADC
        6.2.2.3 Flash ADC
        6.2.2.4 Delta-Sigma (ΔΣ) ADC
    6.2.3 Useful MATLAB® Code
        6.2.3.1 Amplitude quantization
6.3 Chapter Summary and Comments
6.4 Homework Problems
Chapter 7: Frequency Analysis of Discrete-Time Systems
7.1 Theory
    7.1.1 Introduction
    7.1.2 Linear shift-invariant discrete-time systems
        7.1.2.1 Impulse response
        7.1.2.2 Input/output relations
    7.1.3 Digital filtering concepts
        7.1.3.1 Ideal lowpass filter
        7.1.3.2 Ideal highpass filter
        7.1.3.3 Ideal bandpass filter
        7.1.3.4 Ideal band-elimination filter
    7.1.4 Discrete-time filter networks
        7.1.4.1 Digital filter building blocks
        7.1.4.2 Linear difference equations
        7.1.4.3 Basic feedback network
        7.1.4.4 Generalized feedback network
        7.1.4.5 Generalized feed-forward network
        7.1.4.6 Combined feedback and feed-forward network
7.2 Practical Applications
    7.2.1 First-order digital filters
        7.2.1.1 Lowpass filter
        7.2.1.2 Highpass filter
    7.2.2 Second-order digital filters
        7.2.2.1 Bandpass filter
        7.2.2.2 Notch filter
        7.2.2.3 Allpass filter
    7.2.3 Specialized digital filters
        7.2.3.1 Comb filter
        7.2.3.2 Linear-phase filter
    7.2.4 Interpolation and Decimation
        7.2.4.1 Interpolation by factor a
        7.2.4.2 Decimation by factor b
    7.2.5 Nyquist frequency response plot
7.3 Useful MATLAB® Code
    7.3.1 Plotting frequency response of a filter described by a difference equation
    7.3.2 FIR filter design by windowing the ideal filter's impulse response
    7.3.3 FIR filter design by frequency sampling
7.4 Chapter Summary and Comments
7.5 Homework Problems
Chapter 8: Frequency Analysis of Continuous-Time Systems
8.1 Theory
    8.1.1 Introduction
    8.1.2 Linear Time-Invariant Continuous Systems
        8.1.2.1 Input/output relation
        8.1.2.2 Response to e^(jω₀t)
    8.1.3 Ideal filters
        8.1.3.1 Ideal lowpass filter
        8.1.3.2 Ideal highpass filter
        8.1.3.3 Ideal bandpass filter
        8.1.3.4 Ideal band-elimination filter
8.2 Practical Applications
    8.2.1 RLC circuit impedance analysis
    8.2.2 First order passive filter circuits
        8.2.2.1 Highpass filter
    8.2.3 Second order passive filter circuits
        8.2.3.1 Bandpass filter
        8.2.3.2 Band-elimination filter
    8.2.4 Active filter circuits
        8.2.4.1 Basic feedback network
        8.2.4.2 Operational amplifier
        8.2.4.3 Noninverting topology
        8.2.4.4 Inverting topology
        8.2.4.5 First-order active filters
        8.2.4.6 Second-order active filters
8.3 Useful MATLAB® Code
    8.3.1 Sallen-Key circuit frequency response plot
    8.3.2 Calculating and plotting impedance of a one-port network
8.4 Chapter Summary and Comments
8.5 Homework Problems
Chapter 9: Z-Domain Signal Processing
9.1 Theory
    9.1.1 Introduction
    9.1.2 The Z transform
    9.1.3 Region of convergence
    9.1.4 Z transforms of basic signals
        9.1.4.1 Exponentially decaying signal
        9.1.4.2 Impulse sequence
        9.1.4.3 Delayed impulse sequence
        9.1.4.4 Unit step sequence
        9.1.4.5 Causal complex exponential sequence
        9.1.4.6 Causal sinusoidal sequence
        9.1.4.7 Discrete ramp sequence
    9.1.5 Table of Z transforms
    9.1.6 Z transform properties
        9.1.6.1 Linearity
        9.1.6.2 Time shifting
        9.1.6.3 Convolution
        9.1.6.4 Time multiplication
        9.1.6.5 Conjugation
        9.1.6.6 Multiplication by n in the time domain
        9.1.6.7 Multiplication by a^n in the time domain
        9.1.6.8 Backward difference
        9.1.6.9 Cumulative sum
    9.1.7 Table of Z transform properties
    9.1.8 Z transform of linear difference equations
    9.1.9 Inverse Z transform of rational functions
        9.1.9.1 Inverse Z transform yielding finite-length sequences
        9.1.9.2 Long division method
        9.1.9.3 Partial fraction expansion method
9.2 Chapter Summary and Comments
9.3 Homework Problems

Chapter 10: S-Domain Signal Processing
10.1 Theory
    10.1.1 Introduction
    10.1.2 Laplace transform
    10.1.3 Region of convergence
    10.1.4 Laplace transforms of basic signals
        10.1.4.1 Exponentially decaying signal
        10.1.4.2 Impulse function
        10.1.4.3 Delayed impulse function
        10.1.4.4 Unit step function
        10.1.4.5 Complex exponential function
        10.1.4.6 Sinusoid
        10.1.4.7 Ramp function
    10.1.5 Table of Laplace transforms
    10.1.6 Laplace transform properties
        10.1.6.1 Linearity
        10.1.6.2 Time shifting
        10.1.6.3 Frequency shifting duality
        10.1.6.4 Time scaling
        10.1.6.5 Convolution
        10.1.6.6 Time multiplication
        10.1.6.7 Time differentiation
        10.1.6.8 Time integration
    10.1.7 Table of Laplace transform properties
    10.1.8 Inverse Laplace transform of rational functions
        10.1.8.1 Partial fraction expansion method
10.2 Chapter Summary and Comments
10.3 Homework Problems
Chapter 11: Applications of Z-Domain Signal Processing
11.1 Introduction
11.2 Applications of Pole-Zero Analysis
    11.2.1 Poles and zeros of realizable systems
    11.2.2 Frequency response from H(z)
    11.2.3 Frequency response from pole/zero locations
        11.2.3.1 Magnitude response
        11.2.3.2 Phase response
    11.2.4 Effect on H(e^jω) of reciprocating a pole
    11.2.5 System stability
        11.2.5.1 Causal systems
        11.2.5.2 Anticausal systems
        11.2.5.3 Stabilizing an unstable causal system
    11.2.6 Pole-zero plots of basic digital filters
        11.2.6.1 Lowpass filter
        11.2.6.2 Highpass filter
        11.2.6.3 Bandpass digital filter
        11.2.6.4 Notch filter
        11.2.6.5 Comb filter
        11.2.6.6 Allpass filter (real pole and zero)
        11.2.6.7 Allpass filter (complex conjugate poles and zeros)
    11.2.7 Minimum-phase systems
    11.2.8 Digital filter design based on analog prototypes
        11.2.8.1 Impulse-invariant transformation
        11.2.8.2 Bilinear transformation
11.3 Chapter Summary and Comments
11.4 Homework Problems
Chapter 12: Applications of S-Domain Signal Processing
12.1 Introduction
12.2 Linear System Analysis in the S-Domain
    12.2.1 Linear time-invariant continuous systems
    12.2.2 Frequency response from H(s)
12.3 Applications of Pole-Zero Analysis
    12.3.1 Poles and zeros of realizable systems
    12.3.2 Frequency response from pole/zero locations
        12.3.2.1 Magnitude response
        12.3.2.2 Phase response
    12.3.3 Effect on H(ω) of mirroring a pole about the jω axis
    12.3.4 System stability
        12.3.4.1 Causal systems
        12.3.4.2 Anticausal systems
        12.3.4.3 Stabilizing an unstable causal system
    12.3.5 Pole-zero plots of basic analog filters
        12.3.5.1 Lowpass filter
        12.3.5.2 Highpass filter
        12.3.5.3 Bandpass filter
        12.3.5.4 Notch (band-elimination) filter
    12.3.6 Minimum-phase systems
12.4 Circuit Analysis in the S-Domain
    12.4.1 Transient Circuit Analysis
    12.4.2 Passive ladder analysis using T matrices
12.5 Solution of Linear Differential Equations
12.6 Relation Between Transfer Function, Differential Equation, and State Equation
    12.6.1 Differential equation from H(s)
    12.6.2 State equations from H(s)
12.7 Chapter Summary and Comments
12.8 Homework Problems
Appendix: Solved Homework Problems
Bibliography
Index

List of Tables

Table 4.1.  Table of discrete-time Fourier transform pairs
Table 4.2.  Table of discrete-time Fourier transform properties
Table 4.3.  Summary of Fourier transformations for discrete-time signals
Table 5.1.  Table of continuous-time Fourier transform pairs
Table 5.2.  Table of continuous-time Fourier transform properties
Table 5.3.  Summary of Fourier transformations for continuous-time signals
Table 8.1.  Voltage-current characteristics of R, L, C components in time and frequency domains
Table 9.1.  Regions of convergence for the Z transforms of various types of sequences
Table 9.2.  Table of Z transform pairs (region of convergence is for a causal time signal)
Table 9.3.  Table of Z transform properties
Table 10.1. Table of Laplace transform pairs (region of convergence is for a causal time signal)
Table 10.2. Table of Laplace transform properties
Table 12.1. V-I characteristic of R, L, and C in time and s-domains
List of Figures

Fig. 1.1.  Shown is a 50-millisecond span of a continuous-time speech signal. Its nearly-periodic nature is the result of vocal cord vibrations during vowel sounds
Fig. 1.2.  A discrete-time signal obtained by sampling a sine wave
Fig. 1.3.  Phasor diagram graphical solution for 2 cos(100t + 45°) + 3 sin(100t − 90°) = 2.1 cos(100t + 138°)
Fig. 1.4.  Textbook chapter organization, showing the parallelism between discrete-time and continuous-time domains
Fig. 2.1.  Example of a sequence x(n) as a function of its index variable n
Fig. 2.2.  Impulse sequence δ(n)
Fig. 2.3.  Delayed impulse sequence δ(n − 4)
Fig. 2.4.  Impulse train δ₄(n)
Fig. 2.5.  Impulse train δ₃(n)
Fig. 2.6.  Impulse train δ₂(n)
Fig. 2.7.  A sinusoidal sequence (A = 1, ω = 1, θ = π/3)
Fig. 2.8.  Unit step function sequence u(n)
Fig. 2.9.  Signum function sequence sgn(n)
Fig. 2.10. Ramp function sequence r(n)
Fig. 2.11. Rectangular pulse sequence rect₅(n)
Fig. 2.12. Triangular pulse sequence Δ₅(n)
Fig. 2.13. Exponentially decaying sequence u(n)(0.8)^n
Fig. 2.14. Discrete-time sequence sinc(n)
Fig. 2.15. Pulse rect₅(n + 1), a time-shifted version of sequence rect₅(n)
Fig. 2.16. Pulse rect₅(n − 2), a time-shifted version of sequence rect₅(n)
Fig. 2.17. Delayed impulse sequence δ(n − 4)
Fig. 2.18. Delayed exponentially decaying sequence
Fig. 2.19. Sequence u(−n − 2)
Fig. 2.20. Rectangular pulse sequence x(n)
Fig. 2.21. Signal y(n), composed of noise plus rectangular pulses at various delays and amplitudes
Fig. 2.22. Normalized cross-correlation Cxy(n) between x(n) and y(n). Notice that rectangular pulses in y(n) (Fig. 2.20) were detected as peaks of the triangular pulses
Fig. 2.23. Short-time normalized cross-correlation STCxy(n) between x(n) and y(n). Rectangular pulses in y(n) were detected as locations where |STCxy(n)| = 1
Fig. 2.24. MATLAB® plot of a Binomial(50, 0.5) distribution
Fig. 2.25. An example of using MATLAB's stem function to plot a sequence
Fig. 2.26. A noisy sinusoidal sequence before smoothing
Fig. 2.27. A noisy sinusoidal sequence after smoothing
Fig. 2.28. Sequence x(n)
Fig. 2.29. Calculated short-time energy of the sequence x(n) in Fig. 2.27
Fig. 2.30. Autocorrelation of random noise
Fig. 3.1.  Impulse function δ(t)
Fig. 3.2.  Shifted impulse function δ(t + 7)
Fig. 3.3.  Multiplying δ(t − t₀) by signal x(t) gives the same product as does multiplying δ(t − t₀) by the constant c = x(t₀)
Fig. 3.4.  Impulse train δ₂(t). (When not specified, assume each impulse area = 1.)
Fig. 3.5.  A sinusoidal signal (A = 1, ω = 1, θ = π/3)
Fig. 3.6.  Unit step function u(t)
Fig. 3.7.  Signum function sgn(t)
Fig. 3.8.  Ramp function r(t)
Fig. 3.9.  Rectangular pulse function rect(t)
Fig. 3.10. Triangular pulse function Δ(t)
Fig. 3.11. Exponentially decaying signal u(t)e^(−0.2t)
Fig. 3.12. Function sinc(t)
Fig. 3.13. Signal rect(t − 1), which is rect(t) after 1-sec delay
Fig. 3.14. Signal rect(t + 1/2), which is rect(t) after 1/2-sec advance
Fig. 3.15. Delayed impulse function δ(t − 4)
Fig. 3.16. Delayed exponentially decaying signal
Fig. 3.17. Signal u(−t − 2)
Fig. 3.18. Triangular pulse function Δ(t), before (dotted line) and after (solid line) smoothing via convolution with pulse 5rect(5t)
Fig. 3.19. Practical analog cross-correlation technique
Fig. 3.20. MATLAB® plot of y(t) = sin(2πt)
Fig. 3.21. MATLAB® plot of y(t) = u(t + 1.5) + rect(t/2) + Δ(t) − u(t − 1.5)
Fig. 3.22. MATLAB® plot of 2rect(t) convolved with Δ(t − 1)
Fig. 3.23. Original signal y(t) that is composed of three triangular pulses
Fig. 3.24. Triangular pulse x(t) used for waveform matching
Fig. 3.25. Signal z(t) = y(t) + noise added
Fig. 3.26. Normalized cross-correlation result Cxy(t), showing locations and polarities of triangular pulses that were detected in the noise waveform z(t)
Fig. 3.27. Estimated PDF of r.v. X
Fig. 3.28. Estimated PDF of r.v. Y
Fig. 3.29. Estimated PDF of random variable Z = X + Y, demonstrating the fact that f_Z(a) = f_X(a) * f_Y(a)
Fig. 4.1.  Sequence x(n) to be transformed to the frequency domain in Example 4.1
Fig. 4.2.  From Example 4.1: F{x(n)} = X(e^jω)
Fig. 4.3.  The spectrum X(e^jω) = F{x(n)} in Example 4.2
Fig. 4.4.  Plot of F{rect₁₀(n)} = 1 + 2Σ_{k=1}^{10} cos(kω)
Fig. 4.5.  Plot of F{rect₁₀(n)cos((π/6)n)} = 1 + Σ_{k=1}^{10} (cos(k(ω − π/6)) + cos(k(ω + π/6)))
Fig. 4.6.  A graph of F{sinc(3.5ω) * δ₂π(ω)} vs. ω/π
Fig. 4.7.  A 3-D graph of complex-valued F(e^jω)
Fig. 4.8.  Re{F(e^jω)} vs. ω, corresponding to Fig. 4.7
Fig. 4.9.  Im{F(e^jω)} vs. ω, corresponding to Fig. 4.7
Fig. 4.10. A graph of |(1 + 2Σ_{k=1}^{2} cos(kω))e^(−jω)| vs. ω
Fig. 4.11. A graph of ∠{(1 + 2Σ_{k=1}^{2} cos(kω))e^(−jω)} vs. ω
Fig. 4.12. A graph of (1 + 2Σ_{k=1}^{2} cos(kω)) vs. ω
Fig. 4.13. A graph of 1 + 2Σ_{k=1}^{R} cos(kω) vs. ω
Fig. 4.14. Impulse train δ₄(n)
Fig. 4.15. Impulse train (π/2)δ_{π/2}(ω) = F{δ₄(n)}
Fig. 4.16. Rectangular pulse rect₅(n)
Fig. 4.17. Impulse train δ₂₀(n)
Fig. 4.18. Periodic signal f_p(n) = rect₅(n) * δ₂₀(n)
Fig. 4.19. Discrete Fourier Transform spectrum for periodic signal f_p(n) = rect₅(n) * δ₂₀(n) = (1/20)Σ_{k=0}^{19} F_p(k)e^{j(2πk/20)n}
Fig. 4.20. A plot of periodic discrete-time sequence x(n) = cos(2πn/10)
Fig. 4.21. A plot of the spectrum of x(n) = cos(2πn/10), calculated using the Fast Fourier Transform (FFT)
Fig. 4.22. Samples of |X(e^jω)| = |F{x(n)}| found using the FFT method, when x(n) = {0.0975, 0.2785, 0.5469, 0.9575, 0.9649} for 0 ≤ n ≤ 4, 0 …
…
Fig. 12.24. S-domain description of the circuit in Fig. 12.23 (Example 12.9)
Fig. 12.25. Circuit in Example 12.10
Fig. 12.26. Circuit in Example 12.10, transformed to the s-domain
Fig. 12.27. Two-port network. Note that, by convention, currents are considered positive when entering the (+) terminal at each port
Chapter 1
Introduction to Signal Processing
1.1 Analog and Digital Signal Processing
Welcome to your first course on signal processing! Electromagnetic signal
waveforms are the communication medium of today’s fast-paced
interconnected world, and the information they convey may be represented
in either analog or digital form. In analog form, a signal is a continuously
varying waveform, usually a function of time. The name analog is used
because the waveform is analogous to some physical parameter: e.g.,
instantaneous pressure, velocity, light intensity or temperature. Analog
signals are usually the most direct measure of an actual event in nature.
Signals in analog form may be processed using analog electronic circuitry,
and for this reason analog signal processing is a topic firmly rooted in the
electrical engineering field. The term digital refers to another form of
signals — those that may be represented using lists of numbers. Advances
in computer engineering have revolutionized signal processing by making
it possible to replace analog circuitry with hardware that performs
calculations in real-time. Digital signal processing is accomplished by a
computer program whose algorithm operates on one list of numbers to
produce another. However, the mathematics of digital signal processing
is not radically different from the mathematics of analog signal processing:
they both require an understanding of signal transforms, the frequency
domain, complex number algebra and other useful operations. For that
reason, most chapters in this textbook emphasize the parallelism between
analog and digital signal processing theories: there is a topic-by-topic,
equation-by-equation correspondence between digital-analog chapter
pairs {2, 3}, {4, 5} and {9, 10}, and somewhat looser correspondence
between chapter pairs {7, 8} and {11, 12}. Thus, when reading an analog
chapter, you will be able to quickly locate and understand a parallel-
running description in the corresponding digital chapter (and vice versa).
1.2 Signals and their Usefulness
Signals convey information. Just as sailors once signaled between ships
using flags, a time-varying electrical signal may be used to represent
information and transfer it between electronic devices (and people).
Consider the following applications of using signals:
1.2.1 Radio communications
Radio waves are perturbations in the electromagnetic field that can travel
through some materials at nearly the speed of light. These signals make
possible today’s vital wireless communications technologies: emergency
services, personal mobile telephone and data use, broadcast radio and
television, RFID* and Bluetooth®, to name a few.
1.2.2 Data storage
At times, we wish to communicate via signals that do not require real-time
transmission. Such signals are designed to be efficiently and reliably
stored until they are needed. On-demand communications signals include
audio and video recordings, electronic textbooks, photographs, web pages,
banking records, etc.
1.2.3 Naturally-occurring signals
Not all signals are man-made. For example, electrical signals generated
by the human heart are used by doctors to diagnose cardiac diseases. Light
from distant stars, or the vibration of the ground during an earthquake, are
* RFID: Radio Frequency Identification, such as that embedded in some credit cards for
data and power transfer over short distances.
examples of naturally-generated signals. Weather reports often include
the analysis and prediction of temperature, humidity and wind speed
signals.
1.2.4 Other signals
As electrical and computer engineering students, you are aware of circuit
voltage and current waveforms as descriptions of a circuit’s operation.
The prices of stocks at a stock exchange vs. time are another example of
signals that are useful in our daily lives. Security video cameras generate
surveillance images that are displayed, stored, and perhaps later analyzed.
1.3 Applications of Signal Processing
Now that you know what signals are, and how these signals may be useful
for us, we will mention a few operations that are commonly done on
signals; that is, what the applications are of signal processing:
© Modulation/demodulation — placing/removing information onto/from a
carrier signal when communicating over longer distances, typically
using wireless signals;
e Telemetry and navigation — sensing specialized reference signals to
determine one’s location, and determining what path to take to a desired
destination;
© Compression — reducing the amount of redundant information, with or
without degradation, so that a signal may be more efficiently stored or
transmitted;
e Enhancement — operating on a signal to improve its perceived quality
(as in audiovisual signals) or another characteristic that is deemed
desirable;
e Filtering — blocking/attenuating some signal components while pass-
ing/boosting other signal components, to achieve some useful purpose;
¢ Coding — converting a signal to a different format so that it may be
more immune to interference, or better suited for storage or trans-
mission;
e Encryption — converting a signal to a different format so that the
information it conveys may be hidden from those not authorized to
receive it;
e Feature Extraction — identifying or estimating a desired representative
signal component;
¢ Control — generating and injecting a signal to properly guide a system’s
operation.
1.4 Signal Processing: Practical Implementation
Analog signals are processed using analog circuits, which include these
basic building blocks: input transducer to detect a physical parameter and
convert it to a representative electrical signal, amplifier to boost the signal
level, adder/subtractor, multiplier, filter to alter the signal’s frequency
content, output transducer to convert the electrical signal to a physical
parameter, and a power supply to provide the electrical energy required by
these blocks to properly function. Within each analog circuit building
block one may find operational amplifiers, transistors, diodes, resistors,
capacitors and inductors. Usually the simplest signal processing tasks are
done using analog electronic circuits, because that approach is least
expensive to implement.
Digital signals are either in the form of binary control signals, such as
the output of an on/off switch, or in the form of binary symbol groups
representing numerical codes. In the case of numerical codes, these are
processed using programs running on digital computers designed
specifically for the task at hand. For example: a mobile phone digitizes
the speaker’s voice to produce a stream of binary numbers; specialized
hardware then extracts perceptually-important parameters from this
stream that are compressed and coded for transmission. Finally, the coded
signal modulates a radio frequency carrier wave so that it is efficiently and
reliably transmitted to the nearest cell tower. The entire process requires
both digital signal processing algorithms executed on a computer, as well
as radio-frequency analog circuitry. Thus, analog and digital processing
both play a role in mobile telephone communications.
1.5 Basic Signal Characteristics
When we analyze signals that are functions of time, then time is the
independent variable. The analog waveform in Fig. 1.1, for example,
represents pressure variations in air when a person is speaking. This type
of signal is defined at all instants of time, and will from now on be referred
to as a continuous-time signal. In the previous paragraph, we had referred
to digital signals representing numerical codes. These may be thought of
as sequences of numbers that are operated on by digital computers. To
make it possible for these lists of numbers to be stored in digital computer
Figure 1.1. Shown is a 50-millisecond span of a continuous-time speech signal. Its nearly-
periodic nature is the result of vocal cord vibrations during vowel sounds.
These are usually represented in base 2 (binary), composed of symbols 1 and 0.
memory, the lists must be of finite length and each number in the list must
have a finite number of bits (having finite precision). For academic
purposes, when analyzing digital signals, it is convenient to remove the
finite-list-length and finite-precision restrictions. When such a signal is a
function of time, then it is called a discrete-time signal. For example, Fig.
1.2 shows the discrete-time signal points obtained by sampling a sine wave
at uniform increments of time.
Figure 1.2. A discrete-time signal obtained by sampling a sine wave.
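A discrete-time signal like the one in Fig. 1.2 can be produced by evaluating a sine wave only at the sampling instants t = nT. The sketch below does this in Python rather than the MATLAB® used elsewhere in this book; the sampling interval T and the 5 Hz frequency are illustrative choices, not values taken from the figure.

```python
import math

T = 0.01   # sample spacing in seconds (illustrative choice)
f = 5.0    # sine frequency in Hz (illustrative choice)

# x(n) = sin(2*pi*f*t), evaluated only at the instants t = n*T
x = [math.sin(2 * math.pi * f * n * T) for n in range(31)]

assert len(x) == 31 and x[0] == 0.0
# One full period of a 5 Hz sine spans 1/5 s = 20 samples at T = 0.01 s
assert abs(x[20] - x[0]) < 1e-12
```

Only the list of sample values x(0), x(1), ... is stored; the waveform between sampling instants is not represented at all.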
Of course, time is not always the independent variable. In image signal
processing, each color image pixel is represented by intensity values of
red, green and blue light components as a function of spatial location.
Thus, the independent variables may be {x, y} in rectangular coordinates,
or {r, θ} using polar coordinates. Finally, the signal values themselves may
be either real or complex. Complex-valued signals are extremely useful
as a mathematical tool for compactly representing a transformed version
of a signal in an intermediate domain. For this reason, the remainder of
this introductory chapter provides the reader with a refresher of complex
numbers and their operations.
bit = binary digit
Discrete-time signals are discrete in time, but continuous in amplitude (having infinite
precision). When the amplitudes of discrete-time signals are finite-length binary
codewords, then these are called digital signals.
1.6 Complex Numbers
1.6.1 Complex number math refresher
Define imaginary number j = √−1 (electrical engineers use j instead of
i, since the symbol i is used to represent current). Let c = a + jb, where
{a, b} are real numbers. This is called the rectangular or Cartesian form
of c. Then:
(a) the real part of c is: Re{c} =a
(b) the imaginary part of c is: Im{c} = b
(c) the complex conjugate of c is: c* = Re{c} — jIm{c} = a — jb
(d) the magnitude squared of c is: |c|² = c c* = a² + b²
(e) the magnitude of c is: |c| = √(c c*) = √(a² + b²)
(f) the phase angle of c is: ∠c or arg(c) = tan⁻¹(b/a) (see footnote below)
(g) the polar form of c is: c = |c|e^{j∠c}
(Engineers often write c = |c|∠c as shorthand for c = |c|e^{j∠c})
Polar form of e^{jθ}: |e^{jθ}| = 1, ∠e^{jθ} = θ; that is, e^{jθ} = 1∠θ
Rectangular form of e^{jθ}: e^{±jθ} = cos(θ) ± j sin(θ)
(Euler's formula)
Cosine and sine expressions: cos(θ) = ½(e^{jθ} + e^{−jθ})
sin(θ) = (1/2j)(e^{jθ} − e^{−jθ})
Define complex numbers c₁ = a₁ + jb₁ = r₁e^{jθ₁} = r₁∠θ₁ and c₂ = a₂ +
jb₂ = r₂e^{jθ₂} = r₂∠θ₂. Offsets of π are added to phase angles θ₁ and θ₂
to ensure that r₁ and r₂ are positive real values. Then:
(a) c₁c₂ = r₁r₂e^{j(θ₁+θ₂)} = r₁r₂∠(θ₁ + θ₂)
(b) c₁ + c₂ = (a₁ + a₂) + j(b₁ + b₂)
(c) |c₁c₂| = |c₁||c₂| = r₁r₂ (in general, |c₁ + c₂| ≠ |c₁| + |c₂|)
(d) ∠(c₁c₂) = ∠c₁ + ∠c₂ = θ₁ + θ₂ (in general, ∠(c₁ + c₂) ≠ ∠c₁ + ∠c₂)
(e) |c₁/c₂| = |c₁|/|c₂| = r₁/r₂
(f) ∠(c₁/c₂) = ∠c₁ − ∠c₂ = θ₁ − θ₂
(g) (c₁c₂)* = c₁*c₂*
(h) (c₁ + c₂)* = c₁* + c₂*
In order for |c|e^{j∠c} = c, ∠c = tan⁻¹(b/a) must be calculated to give results over
the range (−π, π] (or some other range spanning 2π radians). To do this the sign of each of a
and b must be considered.
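These identities are easy to spot-check numerically. The sketch below uses Python's built-in complex type (rather than the MATLAB® used elsewhere in this book) to verify the product, magnitude, and conjugation rules for two arbitrarily chosen complex numbers.

```python
import cmath

# Two arbitrary complex numbers in rectangular form
c1 = 3 + 4j    # r1 = 5
c2 = 1 - 1j    # r2 = sqrt(2)

r1, th1 = abs(c1), cmath.phase(c1)   # polar form of c1
r2, th2 = abs(c2), cmath.phase(c2)   # polar form of c2

# (a), (c), (d): magnitudes multiply and angles add under multiplication
prod = c1 * c2
assert abs(abs(prod) - r1 * r2) < 1e-12
assert abs(cmath.phase(prod) - (th1 + th2)) < 1e-12  # th1 + th2 already lies in (-pi, pi] here

# (c) note: |c1 + c2| is not |c1| + |c2| in general (triangle inequality)
assert abs(c1 + c2) <= abs(c1) + abs(c2)

# (g), (h): conjugation distributes over products and sums
assert (c1 * c2).conjugate() == c1.conjugate() * c2.conjugate()
assert (c1 + c2).conjugate() == c1.conjugate() + c2.conjugate()
```

Note the phase-addition check only works directly when θ₁ + θ₂ stays within the principal range (−π, π]; otherwise an offset of 2π must be accounted for, as discussed in the footnote above.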
1.6.2 Complex number operations in MATLAB®
In most chapters, we provide examples of MATLAB® code that are useful
for demonstrating the theory being presented. MATLAB® (MATrix
LABoratory) has become the world’s de facto standard programming
environment for engineering education, and it is especially well-suited for
signal processing computations. Numerous online tutorials and textbooks
are available for students who are new to MATLAB®. Here are
MATLAB® operations that implement calculations on complex numbers:
a) finding the real part of c: real(c)
b) finding the imaginary part of c: imag(c)
c) finding the complex conjugate of c: conj(c)
d) finding the complex conjugate transpose of c: c' (see footnote below)
e) finding the magnitude of c: abs(c)
f) finding the phase angle of c: angle(c) (see footnote below)
g) finding e^{jθ}: exp(j*theta)
h) finding A∠θ: A*exp(j*theta)
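For readers working outside MATLAB®, the same operations exist in Python's standard cmath module; the correspondence below is a sketch (the MATLAB® names in the comments come from the list above, the Python calls from the standard library).

```python
import cmath
import math

c = -3 + 4j
theta = math.pi / 3
A = 2.0

re = c.real                       # MATLAB: real(c)
im = c.imag                       # MATLAB: imag(c)
cc = c.conjugate()                # MATLAB: conj(c)
mag = abs(c)                      # MATLAB: abs(c)
ang = cmath.phase(c)              # MATLAB: angle(c), principal value in (-pi, pi]
u = cmath.exp(1j * theta)         # MATLAB: exp(j*theta)
p = A * cmath.exp(1j * theta)     # MATLAB: A*exp(j*theta), the phasor A∠theta

assert (re, im) == (-3.0, 4.0) and mag == 5.0
assert abs(abs(u) - 1.0) < 1e-12      # e^{j*theta} always has unit magnitude
assert abs(abs(p) - A) < 1e-12        # A∠theta has magnitude A
```

As in MATLAB®, the phase returned is the principal value, so identities involving sums of angles may differ from the computed phase by a multiple of 2π.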
1.6.3 Practical applications of complex numbers
Complex numbers are often used as a means to an end; for example, when
factoring a real-valued cubic polynomial one may need to perform some
MATLAB® is a proprietary product from MathWorks, Inc. Many universities purchase
site licenses for their students, and a relatively inexpensive student version is also available.
Or, one may use a free clone such as FreeMat for most operations.
This is useful for vectors and matrices. When c is a scalar, c' = conj(c).
Function angle returns the principal value of the phase angle (within the interval (−π, π]).
intermediate calculations in the complex domain before arriving at the
roots (even when they are purely real!). However, by introducing some
extra special operations, any numerical calculations done with complex
numbers may also be done with two sets of real numbers. For example,
the product of two complex numbers (a + jb) and (c + jd) is the complex
result (e + jf), where the two real numbers {e, f} at the output are related
to the four real numbers {a, b,c, d} at the input using the operations: e =
ac — bd, f = ad + bc (this is how digital computers perform complex
multiplication). Using complex numbers makes the notation more
compact than with real numbers, while keeping basically the same
algebraic rules and operations.
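The real-arithmetic recipe e = ac − bd, f = ad + bc can be written out directly and checked against a language's native complex multiplication. The helper below is a sketch in Python (cmul is a made-up name, not a library routine).

```python
def cmul(a, b, c, d):
    """Multiply (a + jb)(c + jd) using only real arithmetic,
    as a digital computer does internally: e = ac - bd, f = ad + bc."""
    return (a * c - b * d, a * d + b * c)

# Check against Python's built-in complex multiplication:
# (3 + 4j)(1 - 2j) = 11 - 2j
e, f = cmul(3.0, 4.0, 1.0, -2.0)
assert complex(e, f) == (3 + 4j) * (1 - 2j)
```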
Because a complex number has real and imaginary parts, complex
number notation is also useful for compactly representing signals having
two independent components. For example, representing the location of
point A on a plane at coordinates (x4,y4) as the complex number c4 =
xX, + jy, makes the representation of location both more compact and
simpler to manipulate; this is evident when calculating the distance
between points A and B as |c, — cg|. However, the main application of
complex numbers in this textbook is dealing with sinusoidal signals. To
show this we begin with the Taylor series for e*, which is valid for any x
(real or complex):
e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + x⁵/5! + x⁶/6! + x⁷/7! + x⁸/8! + ⋯ (1.1)
Next, for some real constant θ, we let x = jθ so that it is imaginary:
e^{jθ} = 1 + jθ − θ²/2! − jθ³/3! + θ⁴/4! + jθ⁵/5! − θ⁶/6! − jθ⁷/7! + ⋯
= (1 − θ²/2! + θ⁴/4! − θ⁶/6! + ⋯) + j(θ − θ³/3! + θ⁵/5! − θ⁷/7! + ⋯)
e^{jθ} = cos(θ) + j sin(θ), (1.2)
as expressed by the Maclaurin series for cos(θ) and sin(θ). As θ
increases, e^{jθ} describes a circular path about the origin in a counter-
clockwise direction when plotted on a complex plane: Im{e^{jθ}} vs.
Re{e^{jθ}}. The real component of e^{jθ} is cos(θ) and the imaginary
component of e^{jθ} is sin(θ); this relationship between circular motion and
sinusoidal waveforms is at the heart of trigonometry.
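The series argument can be verified numerically: summing enough terms of the Maclaurin series for e^x at x = jθ reproduces cos(θ) + j sin(θ) to machine precision. A short Python sketch (exp_series is a hypothetical helper, not a library function):

```python
import cmath
import math

def exp_series(x, terms=30):
    """Truncated Maclaurin series for e^x (works for complex x)."""
    total = 0.0 + 0.0j
    term = 1.0 + 0.0j          # k-th term x^k / k!, starting with k = 0
    for k in range(terms):
        total += term
        term *= x / (k + 1)    # next term: x^(k+1) / (k+1)!
    return total

theta = 0.7
z = exp_series(1j * theta)

# Real and imaginary parts match cos and sin, per Euler's formula
assert abs(z.real - math.cos(theta)) < 1e-12
assert abs(z.imag - math.sin(theta)) < 1e-12
```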
The expression e^{jθ} = cos(θ) + j sin(θ) is a special case of Euler’s
Formula, which relates trigonometric functions cos(θ) and sin(θ) to
complex numbers. For example, with this formula it is easy to show that:
cos(θ) = ½e^{jθ} + ½e^{−jθ} (1.3)
sin(θ) = (1/2j)e^{jθ} − (1/2j)e^{−jθ}. (1.4)
To represent a real sinusoidal signal in time having constant amplitude
A, constant frequency ω, and constant phase φ, one may multiply by A
and replace θ with ωt + φ:
A cos(ωt + φ) = (A/2)e^{j(ωt+φ)} + (A/2)e^{−j(ωt+φ)}
= (A/2)e^{jωt}e^{jφ} + (A/2)e^{−jωt}e^{−jφ}
= Ce^{jωt} + (Ce^{jωt})* = 2Re{Ce^{jωt}}, (1.5)
where complex constant C = Ae^{jφ}/2. We see that once frequency ω is
known, only the complex constant C need be specified to completely
describe any real sinusoidal function of time. More commonly we use
2C = Ae^{jφ}, which is called a “phasor” (phase-vector), to compactly
describe A cos(ωt + φ). Given phasor Ae^{jφ}, the corresponding
sinusoidal signal in time may be found as:
Re{Ae^{jφ} · e^{jωt}} = A cos(ωt + φ). (1.6)
With this background, we are ready to see the benefits of using
complex phasor notation to represent real sinusoidal time signals. First,
consider the sum of two such sinusoids having the same frequency ω:
Recommended book on the topic: An Imaginary Tale: The Story of √−1 by Paul J. Nahin,
Princeton University Press, 2010.
Together with knowledge of the constant frequency value ω.
x(t) = A₁ cos(ωt + φ₁) + A₂ cos(ωt + φ₂). (1.7)
It is known that the sum of two sinusoids at the same frequency yields
a pure sinusoid at that frequency, albeit having different amplitude and
phase: x(t) = A₃ cos(ωt + φ₃). The values of parameters A₃ and φ₃
may be found using various trigonometric identities and tedious
manipulations. However, the calculations are much simpler when using
complex numbers and phasors:
x(t) = Re{A₁e^{jφ₁}e^{jωt}} + Re{A₂e^{jφ₂}e^{jωt}}
= Re{A₁e^{jφ₁}e^{jωt} + A₂e^{jφ₂}e^{jωt}}
= Re{(A₁e^{jφ₁} + A₂e^{jφ₂})e^{jωt}}
= Re{(A₃e^{jφ₃})e^{jωt}}, or
A₃e^{jφ₃} = A₁e^{jφ₁} + A₂e^{jφ₂}. (1.8)
Therefore: A₃ = |A₃e^{jφ₃}| = |A₁e^{jφ₁} + A₂e^{jφ₂}|,
φ₃ = ∠{A₁e^{jφ₁} + A₂e^{jφ₂}}, and
x(t) = |A₁e^{jφ₁} + A₂e^{jφ₂}| cos(ωt + ∠{A₁e^{jφ₁} + A₂e^{jφ₂}}). (1.9)
These solutions for A₃ and φ₃ are easily done graphically on a phasor
diagram, as shown in Fig. 1.3. Another application of complex numbers to
sinusoidal signal analysis is this: as we will see in later chapters, when a
time-invariant linear system has an input signal x(t) that is a sinusoid, the
output signal y(t) will be a sinusoid at the same frequency (but with
possibly different amplitude and phase). How may we model this process?
As it turns out, the transformation from A_x cos(ωt + φ_x) to A_y cos(ωt +
φ_y) is easily done using multiplication in the phasor domain:
A_y e^{jφ_y} = A_x e^{jφ_x} · (A_y e^{jφ_y})/(A_x e^{jφ_x}) = A_x e^{jφ_x} · {(A_y/A_x) e^{j(φ_y − φ_x)}}
= A_x e^{jφ_x} · {H}, (1.10)
Figure 1.3. Phasor diagram graphical solution for 2 cos(100t + 45°) + 3 sin(100t −
90°) = 2.1 cos(100t + 138°) (ω = 100 rad/sec).
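The graphical solution of Fig. 1.3 can be confirmed numerically: writing 3 sin(100t − 90°) as 3 cos(100t − 180°), the two phasors are 2∠45° and 3∠−180°, and their sum gives the amplitude and phase of the result. A sketch in Python:

```python
import cmath
import math

deg = math.pi / 180

# Phasors of the two terms; 3*sin(100t - 90°) = 3*cos(100t - 180°)
p1 = 2 * cmath.exp(1j * 45 * deg)
p2 = 3 * cmath.exp(1j * (-180) * deg)

p3 = p1 + p2                     # phasor of the sum, A3∠phi3
A3 = abs(p3)
phi3 = cmath.phase(p3) / deg     # phase in degrees

assert abs(A3 - 2.125) < 5e-3    # ~2.1, as stated in Fig. 1.3
assert abs(phi3 - 138.3) < 0.5   # ~138°, as stated in Fig. 1.3
```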
where complex constant H = (A_y/A_x) e^{j(φ_y − φ_x)}. The output signal y(t)
may then be found as Re{A_x e^{jφ_x} · H · e^{jωt}}.
1.7. Textbook Organization
This textbook presents its information by chapter pairs: one is digital and
the other is analog. Once a concept is understood in one domain then it is
much easier to comprehend in the other domain. We recommend that the
reader begin study of signals in the continuous-time domain with Chapters
3 & 5, followed by Chapters 2 & 4 for a review of the same operations and
transforms on discrete-time signals. Chapter 6 relates the continuous- and
discrete-time domains using the theory of sampling, which explains the
parallelism between analog and digital signal processing. The analog-
digital learning approach continues by studying continuous-time systems
and the Laplace transform in Chapters 8 & 10, followed by discrete-time
systems and the Z transform in Chapters 7 & 9. Finally, to whet the
reader’s appetite for further study in the field of signal processing,
Chapters 11 & 12 describe some practical applications of the Z and
Laplace transforms.
Discrete Time Domain | Continuous Time Domain
Chapter 1: Introduction to Signal Processing
Chapter 2: Discrete-Time Signals and Operations — Chapter 3: Continuous-Time Signals and Operations (synchronized topics)
Chapter 4: Frequency Analysis of Discrete-Time Signals — Chapter 5: Frequency Analysis of Continuous-Time Signals (synchronized topics)
Chapter 6: Sampling Theory and Practice
Chapter 7: Frequency Analysis of Discrete-Time Systems — Chapter 8: Frequency Analysis of Continuous-Time Systems (related topics)
Chapter 9: Z-Domain Signal Processing — Chapter 10: S-Domain Signal Processing (synchronized topics)
Chapter 11: Applications of Z-Domain Signal Processing — Chapter 12: Applications of S-Domain Signal Processing (related topics)
Appendix: Solved Homework Problems
Figure 1.4. Textbook chapter organization, showing the parallelism between discrete-
time and continuous-time domains.
1.8 Chapter Summary and Comments
Signals are waveforms that represent information, and are normally
expressed as functions of time in electrical and computer engineering
applications.
Analog signals are electrical representations of (are analogous to)
waveforms originally found in other forms, such as pressure or
temperature. They fall in the category of “continuous-time” signals,
which are mathematical functions of time having a unique value at
each time instant. The continuous-time signal amplitudes are infinite-
precision real or complex.
Digital signals are electrical representations of number sequences.
Often the code used for this purpose is binary. Digital signals fall in
the category of “discrete-time” signals, which are mathematical
functions of time defined only at specific, typically uniformly-spaced,
instants of time. At each of those times their amplitude is not
constrained to the binary number system; as for continuous-time
signals, discrete-time signal values may be infinite-precision real or
complex.
Among the many useful applications of signal processing are com-
municating and storing information. Wireless transmission of signals
is possible using electromagnetic waves. These waves travel at the
speed of light in the vacuum of free space.
Analog electronic circuits are used to process analog signals, whereas
digital computers and their programs (algorithms) are used to process
digital signals.
One way to obtain a digital signal is to sample an analog signal. This
is called analog-to-digital conversion. Digital-to-analog conversion
does the opposite.
Analog circuits are often the simplest and least expensive signal
processing tools. This is because most signals that we need to process
naturally occur in analog form, and the result of signal processing is
typically also analog. Circuits process electrical waveforms directly,
without the need for the intermediate steps of analog-to-digital and/or
digital-to-analog conversion.
The digital revolution has replaced many analog circuits with digital
computers. Digital computers are advantageous because of their
potentially smaller size, lower sensitivity to noise, greater function-
ality, and capability to easily change function (programmability).
Analog and digital signal processing are based on similar theories and
mathematics; these include calculus, differential and difference equa-
tions, and complex numbers.
This textbook teaches introductory analog and digital signal
processing concepts in parallel; except for Chapters 1 and 6, all chapters
come in pairs to present similar concepts for discrete-time and
continuous-time signal processing.
1.9 Homework Problems
P1.1 Label each of the following signals as being “continuous-time” or
“discrete-time”:
a) the current temperature in downtown Chicago
b) the world’s population at any given time
c) the average number of automobiles using the interstate
highway system on any day of the year
d) the number of automobiles using the interstate highway system
in the time period: (one hour ago) < t < (now)
e) a list describing the results of flipping a coin 100 times
f) a thermostat output signal to turn an air conditioner on/off
g) voltage across an inductor being measured 200 times per sec
P1.2 List four applications of signal processing that affect you daily.
P1.3 What is an advantage of processing signals in analog format
(instead of digital)?
P1.4 What is an advantage of processing signals in digital format
(instead of analog)?
P1.5 Solve the following without using a calculator:
a) Find the real part of e^{−jπ/2}
b) Find the imaginary part of −e^{jπ/2}
c) Find the magnitude of 1 + 3e^{jπ/2}
d) Find the phase of −5 in radians
e) Find the phase of 1 − j in degrees
f) Express −1 + 2j in polar form
g) Express −√2 e^{jπ/4} in rectangular form
h) Express 2e^{jπ/3} − 1 in rectangular form
i) Find (e^{j2.34}/2)(e^{j2.34}/2)*
j) Express e^c in polar form, in terms of a = Re{c} and
b = Im{c}
Solve the following using MATLAB® (show your code and the
result):
a) Find the real part of πe^{jπ/6}
b) Find the imaginary part of 7.12∠22°
c) Find the magnitude of —4.5 + j1.52
d) Find the magnitude of e@?
e) Find the phase of 1 + 3j in radians
f) Find the phase of 1 + 3j in degrees
g) Express 2—-V—2 in polar form
h) Express —2e~/°™ in rectangular form
i) Express e^{−jπ/7} − e^{jπ/7} in rectangular form
j) Express (2 + 3j)*(1 − 5j)* in rectangular form
Chapter 2
Discrete-Time Signals and
Operations
2.1 Theory
2.1.1 Introduction
Consider x(n), defined to be an ordered sequence of numbers, where
index n is an integer in the range −∞ to +∞. The purpose of n is to keep
track of the relative ordering of values in sequence x. If we associate a
specific time value with n, such as nT seconds, then sequence x(n)
becomes a discrete-time signal: x(n) represents a number that is
associated with the time instant nT sec. The word discrete is used
because x is defined at only discrete values of time.
n:    ...  −2    −1     0     1     2     3     4
x(n): ...  6.34  3.21  −23.3  35.8  0.95  −1.83  19.7
Figure 2.1. Example of a sequence x(n) as a function of its index variable n.
In some cases, a sequence such as x(n) is obtained by sampling a
continuous-time signal x(t) at uniformly-spaced increments of time:
x(n) = x(t)|t=nT sec. For this reason, the properties of continuous-time
signal x(t) are closely reflected by the properties of sequence x(n). In
fact, it may even be possible to recover x(t), at all values of t, from its
discrete-time samples; this is the topic of Ch. 6. But even when x(n) is
not derived from sampling a continuous-time signal, similarities in
continuous-time and discrete-time signal processing exist and should not
be ignored. Chapter 3 may be referred to for a topic-by-topic, equation-
by-equation presentation of material in the continuous time domain that is
synchronized to this chapter.
2.1.2 Basic discrete-time signals
2.1.2.1 Impulse function
The most basic discrete-time signal is the impulse, or Kronecker Delta
function:
δ(n) = { 1, n = 0;
         0, n ≠ 0.   (2.1)
Whenever the argument of function δ(·) is equal to zero, the result is
1; otherwise, the function produces 0. As an example, δ(n − 2) is equal
to 1 at n = 2; it is equal to 0 at n ≠ 2. We say that δ(n − 2) is an impulse
function shifted right by two samples. Figure 2.2 shows the impulse
sequence δ(n) without any shift, and Fig. 2.3 shows δ(n − 4), which is δ(n)
shifted to the right by 4 samples:
Figure 2.2. Impulse sequence δ(n).
Figure 2.3. Delayed impulse sequence δ(n − 4).
A more general definition of the Kronecker Delta is as a function of two integers:
δᵢⱼ = 0 (i ≠ j) or 1 (i = j). Our δ(n) is therefore equal to δ_{n0}.
Now we introduce the sifting, or sampling property of a Kronecker
impulse function:
Σ_{n=−∞}^{∞} x(n)δ(n − n₀) = x(n₀). (2.2)
The sifting property concept is this: because δ(n − n₀) = 0
everywhere except at n = n₀, only the value of x(n) at n = n₀ is what
matters in the product term x(n)δ(n − n₀). Therefore, x(n) is sampled
at n = n₀ to give x(n)δ(n − n₀) = x(n₀)δ(n − n₀), which is an impulse
function scaled by the constant x(n₀). The resulting scaling factor of the
impulse function has been obtained by sifting the value x(n)|_{n=n₀} out
from the entire sequence of numbers that is called “x(n)”.
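The sifting property is straightforward to demonstrate on a short sequence. The Python sketch below uses a small made-up signal x(n), stored as a dictionary over a finite index range, and confirms that the sum in Eq. (2.2) returns x(n₀).

```python
def delta(n):
    """Kronecker delta: 1 at n = 0, else 0."""
    return 1 if n == 0 else 0

# A hypothetical finite-length example signal x(n)
x = {-2: 6.34, -1: 3.21, 0: -23.3, 1: 35.8, 2: 0.95}

n0 = 1
# Sifting property of Eq. (2.2): only the n = n0 term survives the sum
sifted = sum(x[n] * delta(n - n0) for n in x)
assert sifted == x[n0]
```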
2.1.2.2 Periodic impulse train
It is useful to form a periodic signal by repeating an impulse every M
samples:
δ_M(n) = Σ_{k=−∞}^{∞} δ(n − kM). (2.3)
Because the impulses in the plot look like the teeth of a comb, an
impulse train is also referred to as a comb function. The shortest period
that δ_M(n) can practically have is one sample, but δ₁(n) = 1 for all n, which is the
same as a constant sequence.
Figure 2.5. Impulse train δ₃(n).
We will use the terms discrete-time signal, function and sequence interchangeably in this
text.
Figure 2.6. Impulse train δ₂(n).
2.1.2.3 Sinusoid
The discrete-time version of a sinusoid is defined as:
f(n) = A cos(ωn + θ). (2.4)
Figure 2.7. A sinusoidal sequence (A = 1, ω = 1, θ = 1/3).
This cosine sequence has amplitude A, radian frequency ω and phase
θ. Note that although the underlying continuous sinusoid (dashed line in
Fig. 2.7) is periodic in time with period 2π/ω sec, the discrete-time
sinusoid f(n) in Eq. (2.4) may not be periodic! As will be discussed later,
a property of a periodic sequence f_p(n) is that
f_p(n) = f_p(n + N), (2.5)
where N is some integer. For the discrete-time sinusoid A cos(ωn + θ)
to be periodic, 2π/ω must be a rational number.
The period is then the smallest integer N > 0 that is a multiple of 2π/ω:
A cos(ωn + θ) = A cos(ω(n + N) + θ) = A cos(ωn + k2π + θ), or N = k(2π/ω).
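This rationality condition is easy to test numerically. In the Python sketch below (is_periodic is a hypothetical helper), ω = 2π/10 gives 2π/ω = 10, so the sequence repeats every N = 10 samples; for ω = 1 rad/sample, 2π/ω = 2π is irrational, and no integer period exists.

```python
import math

def is_periodic(omega, N, n_range=200, tol=1e-9):
    """True if cos(omega*n) repeats with period N over the tested index range."""
    return all(abs(math.cos(omega * n) - math.cos(omega * (n + N))) < tol
               for n in range(n_range))

# omega = 2*pi/10: 2*pi/omega = 10 is rational, so N = 10 is a period
assert is_periodic(2 * math.pi / 10, 10)

# omega = 1 rad/sample: 2*pi/omega = 2*pi is irrational, so no integer N
# up to 1000 gives an exact period (some come close, e.g. N = 710)
assert not any(is_periodic(1.0, N) for N in range(1, 1000))
```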
2.1.2.4 Complex exponential
The complex exponential sequence is useful as an eigenfunction® for linear
system analysis:
g(n) = e^{jωn}. (2.6)
By invoking Euler’s formula, e^{jθ} = cos(θ) + j sin(θ), we see that
the real and imaginary components of the complex exponential sequence
are discrete-time sinusoids: Re{e^{jωn}} = cos(ωn), Im{e^{jωn}} = sin(ωn).
2.1.2.5 Unit step function
The discrete-time unit step function u(n) is defined as:
u(n) = { 0, n < 0;
         1, n ≥ 0.   (2.7)
Figure 2.8. Unit step function sequence u(n).
2.1.2.6 Signum function
The discrete-time signum function sgn(n) is defined as:
sgn(n) = { −1, n < 0;
            0, n = 0;   (2.8)
            1, n > 0.
When the input to a linear system is an eigenfunction, the system output is the same signal,
only multiplied by a constant.
signum is the word for “sign” in Latin.
Figure 2.9. Signum function sequence sgn(n).
2.1.2.7 Ramp function
The discrete-time ramp function r(n) is defined as:
r(n) = { 0, n < 0;
         n, n ≥ 0.   (2.9)
Figure 2.10. Ramp function sequence r(n).
2.1.2.8 Rectangular pulse
The discrete-time rectangular pulse function rect_K(n) is defined as:
rect_K(n) = { 1, |n| ≤ K;
              0, elsewhere.   (2.10)
Using this definition, rect_K(n) always has an odd number (= 2K + 1)
of nonzero samples, and it is centered at index n = 0.
To obtain a rectangular pulse that is an even number of samples wide, one may add an
impulse function on one side or the other of rect_K(n).
Figure 2.11. Rectangular pulse sequence rect_K(n).
2.1.2.9 Triangular pulse
Define the discrete-time triangular pulse function Λ_K(n) to be:
Λ_K(n) = { 1 − |n/K|, |n| ≤ K;
           0, elsewhere.   (2.11)
2.1.2.10 Exponentially decaying signal
For a constant 0 < a < 1, the sequence aⁿ decays toward zero as n → ∞ but
grows without bound as n → −∞. Define the
exponentially decaying sequence u(n)aⁿ, which is shown in Fig. 2.13.
8 An equivalent expression for the decaying exponential signal u(n)a” is u(n)e~8", where
b= -In(a) > 024 Practical Signal Processing and its Applications
Figure 2.13. Exponentially decaying sequence u(n)(0.8)^n.
2.1.2.11 Sinc function
The sinc function appears often in signal processing, and we will define
it as sinc(φ) = sin(φ)/φ. Note that sinc(0) = 1. The discrete-time sinc
function is a sampling of sinc(t) at uniform increments of time: t = nΔt
(n is an integer, Δt is real). For example, when Δt = 1.0 second:

sinc(n) = sin(n)/n. (2.12)
Figure 2.14. Discrete-time sequence sinc(n).
Some authors define sinc(x) = sin(πx)/(πx), which equals zero at integer values of x;
MATLAB® has a built-in sinc function defined that way.
Because one cannot directly evaluate sinc(0) = sin(0)/0 = 0/0, apply l'Hospital's rule:
lim_{φ→0} sin(φ)/φ = lim_{φ→0} (d sin(φ)/dφ)/(dφ/dφ) = lim_{φ→0} cos(φ)/1 = 1.
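The difference between the two sinc conventions is a frequent source of bugs. The book's computing examples use MATLAB; as an illustrative cross-check only, here is a short Python sketch (the function names are ours, not the text's) contrasting the unnormalized definition of Eq. (2.12) with the normalized convention used by MATLAB's built-in sinc:

```python
import math

def sinc_unnormalized(n):
    """The book's definition: sinc(phi) = sin(phi)/phi, with sinc(0) = 1."""
    return 1.0 if n == 0 else math.sin(n) / n

def sinc_normalized(x):
    """MATLAB/NumPy convention: sin(pi*x)/(pi*x); zero at nonzero integers."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# The two conventions agree only at the origin:
assert sinc_unnormalized(0) == 1.0 == sinc_normalized(0)
assert abs(sinc_normalized(1)) < 1e-12        # zero at integers
assert abs(sinc_unnormalized(1) - math.sin(1)) < 1e-12
```

Whichever convention a library uses, the value at zero is always defined to be 1, consistent with the l'Hospital argument in the footnote above.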
2.1.3 Signal properties
2.1.3.1 Energy and power sequences
The energy of a discrete-time signal is defined as the sum of its magnitude-
squared values:
E_x = Σ_{n=−∞}^{∞} |x(n)|² = Σ_{n=−∞}^{∞} x(n)x*(n). (2.13)
Clearly, E_x is always real and non-negative. The energy value is a
measure of how much work this signal would do if applied as a voltage
across a 1 Ω resistor. Signals having finite energy are called energy
signals. Examples of discrete-time energy signals are δ(n), rect_K(n),
Δ_K(n), u(n)a^n (with |a| < 1), and sinc(n).
For periodic and some other infinite-duration sequences the summation
result of Eq. (2.13) will be infinite: E_x = ∞. In that case a more
meaningful measure may be power, which is the average energy per
sample of the sequence x(n):

P_x = lim_{M→∞} ( 1/(2M+1) Σ_{n=−M}^{M} |x(n)|² )
    = lim_{M→∞} ( 1/(2M+1) Σ_{n=−M}^{M} x(n)x*(n) ). (2.14)
Signals having finite, nonzero power are called power signals.
Examples of discrete-time power signals are u(n), sgn(n) and e^(jωn),
whose power values are {1/2,1,1} respectively. Power signals have
infinite energy, and energy signals have zero power. Some discrete signals
fall in neither category, such as the ramp sequence r(n) that has both
infinite energy and infinite power.' Periodic sequences (Sec. 2.1.3.3) have
power = (Energy in one period) / (number of samples in one period). This
may be expressed as P_x = (1/N) Σ_{n=0}^{N−1} |x_p(n)|², where periodic x_p(n) has N
In this case, the signal x(t) that is represented by its samples x(n) = x(t)|_{t=nTs}.
Proof that u(n)a^n is an energy signal: Energy{u(n)a^n} = Σ_{n=0}^{∞} |a^n|² = Σ_{n=0}^{∞} (a²)^n =
1/(1 − a²) < ∞, when |a| < 1.
Such sequences may still play a useful role in signal processing, as we shall see in Ch. 9.
samples per period. Based on this relation one may show that the power of
periodic sinusoidal sequence A cos(2πn/N + θ) is |A|²/2 when N > 2.
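These two closed forms are easy to check numerically. The following Python sketch (illustrative only; the book's examples use MATLAB, and the variable names are ours) verifies the energy of u(n)(0.8)^n against the footnote's formula 1/(1 − a²) and the power of a periodic sinusoid against |A|²/2:

```python
import math

# Energy of u(n)a^n with a = 0.8: closed form is 1/(1 - a^2), per the footnote.
a = 0.8
energy = sum((a ** n) ** 2 for n in range(10_000))   # truncated Eq. (2.13)
assert abs(energy - 1.0 / (1.0 - a * a)) < 1e-9

# Power of the periodic sequence cos(2*pi*n/N): average of |x(n)|^2 over one
# period, which equals |A|^2/2 = 0.5 for N > 2 (here A = 1).
N = 16
power = sum(math.cos(2 * math.pi * n / N) ** 2 for n in range(N)) / N
assert abs(power - 0.5) < 1e-12
```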
2.1.3.2 Summable sequences
A sequence x(n) is said to be absolutely summable when:
Σ_{n=−∞}^{∞} |x(n)| = Σ_{n=−∞}^{∞} √(x(n)x*(n)) < ∞. (2.15)
A sequence x(n) is said to be square-summable when:
Σ_{n=−∞}^{∞} |x(n)|² = Σ_{n=−∞}^{∞} x(n)x*(n) = E_x < ∞. (2.16)
We see, therefore, that square-summable sequences are energy signals.
2.1.3.3 Periodic sequences
A sequence x(n) is periodic when there are nonzero integers m for which
the following is true:
x_p(n) = x_p(n + m). (2.17)

The fundamental period of x_p(n) is the smallest positive integer
m = m₀ that makes Eq. (2.17) true. A periodic sequence has the property
that x_p(n) = x_p(n + km₀), for any integer k.
The fundamental frequency, in repetitions or cycles per second (Hertz),
of a periodic discrete-time signal is the reciprocal of its fundamental
period: f₀ = 1/m₀ Hz (assuming samples in sequence x_p(n) are spaced
at 1 sample/sec). Frequency may also be measured in radians per second,
as we do in this textbook, which is defined as ω = 2πf. Thus, the
fundamental frequency ω₀ = 2π/m₀ rad/sec.
This also holds for sinusoidal sequences that are not periodic: A cos(ωn + θ) for any
ω ≠ 2π/k (k ∈ ℤ).
The term radians is derived from radius of a circle. When a wheel having radius r = 1
rolls along the ground for one revolution, the horizontal distance travelled is equal to the
wheel's circumference c = 2πr (or 2π radii). Thus f = 1 cycle per second corresponds
2.1.3.4 Sum of periodic sequences
The sum of two or more periodic sequences is also periodic due to their
discrete-time nature. These periodic additive components are harmonically
related: that is, the frequency of each component is an integer multiple
of fundamental frequency ω₀. The fundamental period of the sum of
harmonically-related signals having fundamental frequency ω₀ is m₀ =
2π/ω₀ samples.

For example: x(n) = cos(5πn) + sin(1.5πn) is periodic, having
fundamental frequency ω₀ = 0.5π rad/sec and period m₀ = 4 samples,
whereas y(n) = cos(5n) + sin(πn) is not periodic because one of its
components (cos(5n)) is not periodic.

Each additive term in a harmonically-related sum of sinusoids having
frequency ω = kω₀ is identified by its harmonic index: k = ω/ω₀. Thus,
in the example above, the term cos(5πn) in x(n) is at the 10th harmonic
of ω₀ = 0.5π rad/sec.
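The claimed 4-sample period of x(n) can be confirmed by brute force. A small Python sketch (names ours; illustrative, not from the text) checks that x(n) = x(n + 4) for every tested n and that no smaller positive integer works:

```python
import math

# x(n) = cos(5*pi*n) + sin(1.5*pi*n); fundamental frequency 0.5*pi rad/sec
def x(n):
    return math.cos(5 * math.pi * n) + math.sin(1.5 * math.pi * n)

# x repeats every 4 samples ...
assert all(abs(x(n) - x(n + 4)) < 1e-9 for n in range(-20, 20))

# ... and 4 is fundamental: no m in {1, 2, 3} shifts x onto itself
for m in (1, 2, 3):
    assert any(abs(x(n) - x(n + m)) > 1e-6 for n in range(-20, 20))
```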
2.1.3.5 Even and odd sequences
A sequence x(n) is even if it is unchanged after time reversal (replacing n
with —n):
to ω = 2π(1) = 2π radians/sec. We use rad/sec measure to describe frequency since it
simplifies notation somewhat (e.g., cos(ωn) as compared to cos(2πfn)), but in practice
engineers almost always specify frequency as f in cycles per second, or Hertz (Hz).
For example, the sum of sequence x_p1(n) having period m₁ samples, and sequence
x_p2(n) having period m₂ samples, will be a periodic sequence having period m₁m₂
samples or less.
Notice that even though the fundamental frequency of x(n) is 0.5π rad/sec there is no
individual component having that frequency.
The fundamental frequency is the 1st harmonic of a periodic signal, although it is rarely
referred to as such.
x_e(n) = x_e(−n). (2.18)

A sequence x(n) is odd if it is negated by time reversal:

x_o(n) = −x_o(−n). (2.19)

Every sequence may be expressed as a sum of its even and odd
components:

y(n) = y_e(n) + y_o(n), (2.20)
where y_e(n) = ½(y(n) + y(−n)), (2.21)
and y_o(n) = ½(y(n) − y(−n)). (2.22)

Even signals described in this chapter include δ(n), δ_N(n),
cos(ωn), rect_K(n), Δ_K(n) and sinc(n). Some odd signals are sin(ωn)
and sgn(n). Signals u(n), r(n), e^(jωn), and u(n)a^n are neither even nor
odd.
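Equations (2.20)-(2.22) are directly computable. The following Python sketch (illustrative; the helper names are ours) decomposes a segment of the unit step, which is neither even nor odd, and checks the three defining properties:

```python
# Even/odd decomposition of a sequence segment over n = -N..N (Eqs. 2.20-2.22)
def even_odd_parts(y, N):
    """y maps an integer n in [-N, N] to a value; returns (y_e, y_o) as dicts."""
    ye = {n: 0.5 * (y(n) + y(-n)) for n in range(-N, N + 1)}
    yo = {n: 0.5 * (y(n) - y(-n)) for n in range(-N, N + 1)}
    return ye, yo

u = lambda n: 1.0 if n >= 0 else 0.0     # unit step: neither even nor odd
ye, yo = even_odd_parts(u, 5)

for n in range(-5, 6):
    assert abs(ye[n] + yo[n] - u(n)) < 1e-12   # parts sum back to y(n)
    assert abs(ye[n] - ye[-n]) < 1e-12         # even symmetry (Eq. 2.18)
    assert abs(yo[n] + yo[-n]) < 1e-12         # odd symmetry (Eq. 2.19)
```

For the unit step the even part works out to 0.5 + 0.5δ(n) and the odd part to 0.5 sgn(n).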
2.1.3.6 Right-sided and left-sided sequences
If x(n) = 0 for n < m, where m is some finite index value, then x(n) is
a right-sided sequence. For example, u(n + 3) is right-sided. If x(n) =
0 for n > m, where m is some finite integer, then x(n) is a left-sided
sequence. Sequence cos(n)u(—n) is left-sided.
2.1.3.7 Causal, anticausal sequences
A sequence x(n) is causal if x(n) = 0 for all n < 0. A sequence x(n) is
anticausal if x(n) = 0 for all n > 0. Both causal and anticausal
sequences may have a nonzero value at n = 0, hence only one sequence is both
A consequence of Eq. (2.19) is that every odd sequence has x_o(0) = 0, because only zero
has the property that 0 = −0.
5 The term causal comes from the behavior of real-world systems. If in response to an
impulse at n = 0 a system outputted a signal prior to n = 0, it would be basing its output
on the knowledge of a future input. Nature's cause-and-effect relationship is that a
causal and anticausal: δ(n) × constant. Every sequence may be expressed
as the sum of causal and anticausal components, as for example: x(n) =
x(n)u(n) + x(n)(1 − u(n)). Causal sequences are a subclass of right-
sided sequences, and anticausal sequences are a subclass of left-sided
sequences. Causal sequences described in this chapter are δ(n), u(n),
r(n), and u(n)a^n. Time-reversing a causal sequence makes it anticausal,
and vice versa, so that δ(−n) = δ(n), u(−n), r(−n), and u(−n)a^(−n) are
all anticausal sequences.
2.1.3.8 Finite-length and infinite-length sequences
If x(n) = 0 for |n| > M, where M is some finite positive integer, then
x(n) is a finite-length sequence. When no such value of M exists then
x(n) is an infinite-length sequence. Finite-length sequences are both
right-sided and left-sided. Finite-length signals described in this chapter
are δ(n), rect_K(n) and Δ_K(n). The signals δ_N(n), A cos(ωn + θ),
sinc(n), sgn(n), u(n), r(n), e^(jωn), and u(n)a^n all have infinite length.
2.1.4 Signal operations
2.1.4.1 Time shift
When sequence x(n) is shifted to the right by m samples to give
x(n —m), we call this an m-sample time delay (since the index values n
and m are most often associated with time). On the other hand, a shift to
the left is called a time advance.
Figures 2.15 and 2.16 demonstrate these concepts. Note that the
endpoints of rect₄(n) are at sample indexes n = {−4, 4}, while the endpoints
of rect₄(n − 2) are at n − 2 = {−4, 4} and the endpoints of rect₄(n + 1)
are at n + 1 = {−4, 4}.
response at present time can only be caused by past and present stimuli. When driven by
a causal input sequence (such as an impulse located at n = 0), real-world systems will
produce a causal output sequence.
Figure 2.15. Pulse rect,(n + 1), a time-shifted version of sequence rect,(n).
Figure 2.16. Pulse rect₄(n − 2), a time-shifted version of sequence rect₄(n).
In Fig. 2.17, note that time-delayed impulse δ(n − 4) is located at the
index n = 4 (δ(n) is located at n = 0, δ(n − 4) is located at n − 4 = 0):

Figure 2.17. Delayed impulse sequence δ(n − 4).
When, in a functional expression for sequence x(n), every occurrence
of index n is replaced by n — m, the result is that x(n) gets delayed by m
samples. This is demonstrated in Fig. 2.18:
Figure 2.18. Delayed exponentially decaying sequence u(n − 1)(0.8)^(n−1).
2.1.4.2 Time reversal
Replacing every occurrence of n with −n in an expression for x(n) causes
the sequence to flip about n = 0 (only x(0) stays where it originally was).
Time-reversing a causal sequence makes it anticausal, and vice-versa.
Time-reversing an even sequence has no effect, while time-reversing an
odd sequence negates the original. Here is what happens when the
sequence u(n — 2) is reversed in time:
Figure 2.19. Sequence u(—n — 2).
Figure 2.19 demonstrates an interesting (and often confusing) concept:
the sequence shown could either be a delayed unit step function that was
then time-reversed, or a time-reversed unit step function that was then
time-advanced:
u((−n) − 2) = u(−(n + 2)). (2.23)
2.1.4.3. Time scaling
Time scaling of discrete-time signals is done either by keeping every kth
sample in a sequence and discarding the rest (called "down-sampling by
factor k”), or by inserting k — 1 zeros between samples in a sequence
(called “up-sampling by factor k”). The two operations have the effect of
compressing the signal toward index 0, or expanding the signal away from
index 0, respectively (k > 0). There is usually loss of information when
down-sampling a sequence, so that x(n) cannot be reconstructed from its
down-sampled version x(kn). On the other hand, a sequence that had
been upsampled by factor k can be exactly recovered using the operation
of down-sampling by factor k. To summarize, down-sampling x(n) by
factor k to obtain sequence y(n) is:
y(n) = x(nk). (2.24)32 Practical Signal Processing and its Applications
Up-sampling x(n) by factor k to obtain sequence y(n) is:
y(n) = { x(n/k), n/k ∈ ℤ;
         0, n/k ∉ ℤ. (2.25)
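Both operations of Eqs. (2.24) and (2.25) are one-liners on causal finite-length sequences stored as lists (index 0 corresponds to n = 0). This Python sketch (function names ours, for illustration) also confirms the asymmetry noted above: up-sampling is exactly invertible, down-sampling is not:

```python
# Down-sampling (Eq. 2.24) and up-sampling (Eq. 2.25) of causal lists
def downsample(x, k):
    return x[::k]                      # keep every k-th sample: y(n) = x(nk)

def upsample(x, k):
    y = []
    for s in x:                        # insert k - 1 zeros after each sample
        y.append(s)
        y.extend([0] * (k - 1))
    return y

x = [1, 2, 3, 4]
up = upsample(x, 3)                    # [1,0,0, 2,0,0, 3,0,0, 4,0,0]
assert downsample(up, 3) == x          # up-sampling is exactly recoverable
assert downsample(x, 2) == [1, 3]      # down-sampling discards information
```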
2.1.4.4 Cumulative sum and backward difference
The cumulative sum of discrete-time signal x(n) is itself a function of
independent time index n:
Cumulative sum of x(n) = Σ_{k=−∞}^{n} x(k). (2.26)

Here are some sequences stated in terms of the cumulative sum
operation:

u(n) = Σ_{k=−∞}^{n} δ(k), (2.27)
r(n + 1) = Σ_{k=−∞}^{n} u(k), (2.28)
rect_K(n) = Σ_{k=−∞}^{n} {δ(k + K) − δ(k − (K + 1))}. (2.29)

Related to the cumulative sum is the backward difference operator:

Δ{x(n)} = x(n) − x(n − 1). (2.30)

Here are sequences stated in terms of the backward difference
operation:

δ(n) = Δ{u(n)}, (2.31)
u(n − 1) = Δ{r(n)}, (2.32)
δ(n + K) − δ(n − (K + 1)) = Δ{rect_K(n)}. (2.33)
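The cumulative sum and backward difference are inverse operations, which makes identities such as Eqs. (2.27) and (2.31) easy to verify on finite causal segments. A small Python sketch (illustrative; names ours):

```python
# Cumulative sum (Eq. 2.26) and backward difference (Eq. 2.30) on causal lists
def cumulative_sum(x):
    out, total = [], 0
    for v in x:
        total += v
        out.append(total)
    return out

def backward_difference(x):
    # Delta{x(n)} = x(n) - x(n-1), taking x(-1) = 0 for a causal list
    return [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]

delta = [1, 0, 0, 0, 0]                 # delta(n), n = 0..4
u = cumulative_sum(delta)               # unit step, per Eq. (2.27)
assert u == [1, 1, 1, 1, 1]
assert backward_difference(u) == delta  # delta(n) = Delta{u(n)}, Eq. (2.31)
```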
2.1.4.5 Conjugate, magnitude and phase
The conjugate, magnitude and phase of signal g(n) are found according
to the normal rules of complex number algebra:'
g*(n) = Re{g(n)} − j Im{g(n)}, (2.34)
|g(n)| = √(Re²{g(n)} + Im²{g(n)}) = √(g(n)g*(n)), (2.35)
∠g(n) = Tan⁻¹(Im{g(n)} / Re{g(n)}). (2.36)

Note that the operation is performed on the entire sequence g(n).

2.1.4.6 Equivalent signal expressions
Basic signals presented in the previous section may be related to one another
via algebraic operations, or combined to give other signals of interest. For
example, here are different ways of describing the rectangular pulse and
signum functions:

rect_K(n) = u(n + K) − u(n − (K + 1)), (2.37)
rect_K(n) = u(n + K)u(−n + K), (2.38)
sgn(n) = 2u(n) − 1 − δ(n), (2.39)
sgn(n) = u(n) − u(−n). (2.40)
In fact, scaled and delayed versions of sequence δ(n) will add to form
any sequence x(n):
x(n) = ⋯ + x(−2)δ(n − (−2))
       + x(−1)δ(n − (−1))
       + x(0)δ(n − (0))
       + x(1)δ(n − (1))
       + x(2)δ(n − (2)) + ⋯,

or x(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k). (2.41)
Evaluate the Tan⁻¹ function to produce a phase angle in all four quadrants; this is done
by noting the signs of imaginary and real components of g(n) prior to division.
Plot these to convince yourself that the expressions are equivalent!
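Besides plotting, Eqs. (2.37)-(2.40) can be confirmed sample by sample. A Python sketch in the same spirit (lambda names are ours, chosen for brevity):

```python
# Check Eqs. (2.37)-(2.40) pointwise over a range of n
u = lambda n: 1 if n >= 0 else 0                      # unit step
d = lambda n: 1 if n == 0 else 0                      # impulse delta(n)
sgn = lambda n: 0 if n == 0 else (1 if n > 0 else -1)
rect = lambda n, K: 1 if abs(n) <= K else 0

K = 4
for n in range(-10, 11):
    assert rect(n, K) == u(n + K) - u(n - (K + 1))    # (2.37)
    assert rect(n, K) == u(n + K) * u(-n + K)         # (2.38)
    assert sgn(n) == 2 * u(n) - 1 - d(n)              # (2.39)
    assert sgn(n) == u(n) - u(-n)                     # (2.40)
```

Note the role of u(0) = 1 here: it is what makes the δ(n) correction in Eq. (2.39) necessary, and what makes Eq. (2.40) come out to sgn(0) = 0.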
2.1.5 Discrete convolution
One of the most useful operations for signal processing and linear system
analysis is the convolution operation, defined here for discrete-time
signals:

x(n) * y(n) = Σ_{k=−∞}^{∞} x(k)y(n − k). (2.42)

By replacing n − k with m, we see that an equivalent form of this
summation is

Σ_{m=−∞}^{∞} y(m)x(n − m) = y(n) * x(n). (2.43)

Thus, the convolution operation is commutative:

x(n) * y(n) = y(n) * x(n). (2.44)
2.1.5.1 Convolution with an impulse
Let’s see what happens when we convolve signal x(n) with the impulse
function δ(n):
x(n) * δ(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k)
            = Σ_{k=−∞}^{∞} x(n)δ(n − k)
            = x(n) Σ_{k=−∞}^{∞} δ(n − k) = x(n). (2.45)
This result, x(n) * δ(n) = x(n), shows us that convolving with an
impulse function is an identity operation in the realm of convolution.
More interesting and useful for signal processing, however, are the
following relations that are also easily derived:
x(n) * δ(n − n₁) = x(n − n₁). (2.46)
™ The convolution operation is also called a convolution product because of its similarity
to multiplicative product notation.
x(n) * δ(n + n₂) = x(n + n₂). (2.47)
Thus, a signal may be delayed or advanced in time by convolving it
with a time-shifted impulse function. As will be discussed in Ch. 7,
convolution plays a critical role in describing the behavior of linear shift-
invariant systems.
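Equation (2.46) is easy to see in a direct implementation of the convolution sum for finite-length sequences. A Python sketch (names ours, illustrative; lists are indexed from n = 0):

```python
# Finite-length convolution per Eq. (2.42): out has len(x)+len(y)-1 samples
def convolve(x, y):
    out = [0] * (len(x) + len(y) - 1)
    for i, xv in enumerate(x):
        for j, yv in enumerate(y):
            out[i + j] += xv * yv
    return out

x = [3, 1, 4, 1, 5]            # x(n) supported on n = 0..4
delayed_impulse = [0, 0, 1]    # delta(n - 2)
y = convolve(x, delayed_impulse)
assert y == [0, 0] + x         # x delayed by 2 samples, per Eq. (2.46)
assert convolve(delayed_impulse, x) == y   # commutativity, Eq. (2.44)
```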
2.1.5.2 Convolution of two pulses
Define "pulse" p(n) to be a finite-length sequence whose time span is
n₁ to n₂; thus p(n) = 0 outside of n₁ ≤ n ≤ n₂.

E_y → fy = sum(abs(y).^2). (2.64)
2.3.6 Calculating the short-time energy of a finite-length sequence
A useful measure of a signal’s average energy over a short time span, as it
varies depending on what time span is selected, is found by moving-
average-smoothing |y(n)|²:

STE_y(n) = |y(n)|² * (1/(2K+1)) rect_K(n). (2.65)
In MATLAB® this operation is implemented as follows:

M = 2*K+1;   % M is an odd integer
STEy = conv(abs(y).^2, ones(1,M)/M);
Example 2.5
% MATLAB code example Ch2-5
% Estimating short-time energy of a sequence:Discrete-Time Signals and Operations 47
n = -30:30;
x = cos(n*pi/15) + rand(size(n));
figure
stem(n,x,'filled')
xlabel('time index value n')
ylabel('amplitude')
title('Original sequence x(n):')
Figure 2.28. Sequence x(n).
% MATLAB code example Ch2-5, cont.
K = 5; M = 2*K + 1;
STEx = conv(abs(x).^2, ones(1,M)/M);
figure; stem(n, STEx(K+(1:length(n))), 'filled')
xlabel('time index value n')
ylabel('energy')
title('Short-time energy of x(n):')
Figure 2.29. Calculated short-time energy of the sequence x(n) in Fig. 2.28.
2.3.7 Cumulative sum and backward difference operations
MATLAB® has built-in functions to calculate both the cumulative sum
and backward difference operations:48 Practical Signal Processing and its Applications
Cumulative sum of x(n):
Σ_{k=−∞}^{n} x(k) → cumsum(x). (2.66)
Backward difference of y(n):
y(n) − y(n − 1) → diff(y). (2.67)
2.3.8 Calculating cross-correlation via convolution
Discrete-time cross-correlation is useful for pattern matching and signal
detection:
φ_xy(n) = x*(−n) * y(n). (2.68)
In MATLAB® the cross-correlation between two finite-length signals
is efficiently calculated via convolution:
phi_xy = conv(conj(flipud(x(:))), y);
Similarly, the autocorrelation of finite-length sequence x(n) is found
as:
phi_xx = conv(conj(flipud(x(:))), x);
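The same "time-reverse, conjugate, then convolve" recipe carries over to any language. As an illustrative cross-check in Python (names ours; conjugation is omitted because the test data is real), reusing a direct convolution:

```python
# phi_xy(n) = x*(-n) * y(n), Eq. (2.68), for real finite-length lists
def convolve(x, y):
    out = [0] * (len(x) + len(y) - 1)
    for i, xv in enumerate(x):
        for j, yv in enumerate(y):
            out[i + j] += xv * yv
    return out

def cross_correlation(x, y):
    # time-reverse x; first output sample corresponds to lag -(len(x)-1)
    return convolve(x[::-1], y)

x = [1, 2, 3]
phi_xx = cross_correlation(x, x)     # autocorrelation
assert phi_xx == phi_xx[::-1]        # even symmetry for real data
assert max(phi_xx) == sum(v * v for v in x)   # peak = energy, at zero lag
```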
Example 2.6
In this MATLAB® example we generate sequence x(n) whose samples
are uniformly randomly distributed over amplitude range [−0.5, 0.5], and
calculate its autocorrelation φ_xx(n). The result is a scaled delta function
located at n = 0, along with some noise that disappears as N → ∞:
% MATLAB code example Ch2-6
% Calculating autocorrelation of a sequence:
N = 1e4;
x = rand(1,N+1) - 0.5;
phi_xx = conv(conj(flipud(x(:))), x);
plot(-N:N, phi_xx)
Figure 2.30. Autocorrelation of random noise.
2.4 Chapter Summary and Comments
e Discrete-time signals, also known as sequences, are defined as lists of
numbers that are ordered by time index values. In this sense a discrete-
time signal is only defined at specific time instants. One way to think
of it is to imagine that a discrete-time signal is a list of amplitude values
that are obtained by sampling a continuous-time signal at uniformly-
spaced times (e.g., at integer values of t).
e In this chapter, we define some basic sequences that may be used as
building blocks to synthesize more complicated sequences.
¢ Operations such as addition, multiplication, convolution, time shift,
time reversal, backward time difference, etc., are useful for
manipulating and combining discrete-time signals. In later chapters,
we will make use of these operations for “signal processing.”
e Almost every discrete-time signal and operation has a continuous-time
domain equivalent. These relationships are evident in the parallel
presentations in Chapters 2 and 3, and may be explained using the
theory of sampling (Ch. 6).
2.5 Homework Problems
P2.1 Simplify: Σ_{n=−∞}^{∞} δ(n) =

P2.2 Simplify: Σ_{n=−∞}^{∞} δ(n)x(n) =

P2.3 Simplify: δ(n)x(n) =

P2.4 Simplify: δ(n − 1)x(n) =

P2.5 Simplify: δ(n)x(n − 1) =

P2.6 Simplify: Σ_{n=−∞}^{∞} δ(n + 1)x(n) =

P2.7 Simplify: Σ_{n=−∞}^{∞} δ(n)x(n + 1) =

P2.8 Simplify: Σ_{n=−∞}^{∞} δ(n − 1)x(n + 1) =

P2.9 Simplify: Σ_{k=−∞}^{∞} δ(k − n)x(n) =

P2.10 Given sequence x(n) = n rect₃(n − 3):
a) Find x(4)
b) Find x(−4)
c) Find w(10) when w(n) = x(n) * δ_N(n)
d) Find y(−10) when y(n) = Σ_k x(k)δ_N(n − k)

P2.11 Is x(n) = sin(5n) periodic? If so, what is its period?

P2.12 Is y(n) = cos(πn/8) periodic? If so, what is its period?

P2.13 Express y(n) = 4 cos(5n) as a sum of complex exponential
sequences.

P2.14 State in terms of an impulse function: u(n)u(−n) =

P2.15 State as a sum of one or more impulse functions: u(n + 1)u(−n − 1) =

P2.16 State in terms of the sum of one or more impulse functions:
u(n − 1) − u(n − 2) =

P2.17 State in terms of the sum of two unit step functions: sgn(n) =

P2.18 State in terms of the sum of two unit step functions: rect_K(n) =

P2.19 State in terms of a rectangular pulse function: u(n − 5) − u(n) =

P2.20 State in terms of a triangular pulse function:
r(n − 3) − 2r(n) + r(n + 3) =

P2.21 State in terms of a triangular pulse function:
r(n) − 2r(n + 2) + r(n + 4) =

P2.22 Calculate the energy of f(n) = 4(2^(−n))u(n).

P2.23 Calculate the energy of f(n) = 4(2^(−n))(u(n) − u(n − 100)).

P2.24 Simplify: sinc(n) =

P2.25 Given: g(n) = sinc(πn/2). Calculate g(n) for −5 ≤ n ≤ 5.

P2.26

P2.27

P2.28
Figure 3.1. Impulse function δ(t).
Figure 3.2. Shifted impulse function δ(t + 2).
In terms of the rectangular pulse rect(t) that is defined later in this chapter, δ(t) =
lim_{T→0} (1/T) rect(t/T). Note that (1/T) rect(t/T) has area equal to one, regardless of the
value of positive real constant T.
Strictly speaking, the Dirac impulse is not a function (for the reason that, in spite of being
equal to zero ∀t except at t = 0, it has nonzero area). Instead δ(t) is a mathematical object
called a "generalized function," since it has function-like behavior when interacting with
other signals.
The location of an impulse function in time is the value of t that makes
the argument of δ(·) equal zero. Thus, δ(t − t₀) is located at t = t₀.
Figure 3.2 shows δ(t + 2), which is δ(t) shifted to the left by 2 seconds.

In Figs. 3.1 and 3.2, both impulses have area equal to 1. Time shift
does not change an impulse function's area (∫_{−∞}^{∞} δ(t − t₀) dt = 1), it
only moves it to a different point in time. As with any function, to change
the area of an impulse we multiply it by a constant. Thus the area of cδ(t)
is equal to c: ∫_{−∞}^{∞} cδ(t) dt = c ∫_{−∞}^{∞} δ(t) dt = c.
Now we introduce the most important property of a Dirac impulse
function, called the “sifting” or “sampling” property:
∫_{−∞}^{∞} x(t)δ(t − t₀) dt = x(t₀). (3.2)
The sifting property concept is this: because δ(t − t₀) = 0 everywhere
except at t = t₀, only the value of x(t) at t = t₀ is what matters in the
product term x(t)δ(t − t₀). Therefore x(t) is sampled at t = t₀ to give
x(t)δ(t − t₀) = x(t₀)δ(t − t₀), which is an impulse function with area =
x(t₀). The constant x(t₀) has been obtained by sampling, or sifting it out
from the waveform that is called "x(t)". Figure 3.3 illustrates why
x(t)δ(t − t₀) = x(t₀)δ(t − t₀):
Figure 3.3. Multiplying δ(t − t₀) by signal x(t) gives the same product as does
multiplying δ(t − t₀) by the constant c = x(t₀).
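The sifting property can also be seen numerically by replacing δ(t) with the narrow unit-area pulse (1/T) rect((t − t₀)/T) from the earlier footnote and letting T be small; the integral then approaches x(t₀). An illustrative Python sketch (names and step counts are ours):

```python
import math

# Approximate Eq. (3.2): integrate x(t)*(1/T) over [t0 - T/2, t0 + T/2]
# using the midpoint rule; as T -> 0 the result tends to x(t0).
def sift(x, t0, T=1e-3, steps=1000):
    dt = T / steps
    return sum(x(t0 - T / 2 + (i + 0.5) * dt) * dt for i in range(steps)) / T

x = lambda t: math.cos(t) + 0.5 * t
assert abs(sift(x, 1.0) - x(1.0)) < 1e-6   # recovers x(t0)
```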
3.1.2.2 Periodic impulse train
It is useful to form a periodic signal by repeating an impulse every T
seconds:
δ_T(t) = Σ_{k=−∞}^{∞} δ(t − kT). (3.3)
Because the impulses in the plot look like the teeth of a comb, an
impulse train is also referred to as a comb function. Time span T, the
period of δ_T(t), may be any positive real number.
Figure 3.4. Impulse train δ_4.2(t). (When not specified, assume each impulse area = 1.)
3.1.2.3 Sinusoid
The sinusoidal signal is defined as:
f(t) = A cos(ωt + θ). (3.4)

Figure 3.5. A sinusoidal signal (A = 1, ω = 1, θ = π/3).

This cosine signal has amplitude A, radian frequency ω and phase θ. The
sinusoidal signal is periodic in time with period T₀ = 2π/ω seconds. As
will be discussed later, periodic signal f_p(t) with period T₀ has the
property:

f_p(t) = f_p(t ± T₀). (3.5)

Thus, a sinusoidal signal may be shifted in time by T₀ seconds, or integer
multiples thereof, without any change.
3.1.2.4 Complex exponential
The complex exponential signal is useful as an eigenfunction for linear
system analysis:

g(t) = e^(jωt). (3.6)

By invoking Euler's formula, e^(jφ) = cos(φ) + j sin(φ), we see that
the real and imaginary components of the complex exponential signal are
sinusoids: Re{e^(jωt)} = cos(ωt), Im{e^(jωt)} = sin(ωt).
3.1.2.5 Unit step function
The unit step function u(t) is defined as:
u(t) = { 0, t < 0;
         1/2, t = 0;
         1, t > 0. (3.7)
Figure 3.6. Unit step function u(t).
3.1.2.6 Signum function
The signum! function sgn(t) is defined as:
sgn(t) = { −1, t < 0;
            0, t = 0;
            1, t > 0. (3.8)
© When the input to a linear system is an eigenfunction, the system output is the same signal
only multiplied by a constant.
4 “Signum” is the word for “sign” in Latin.60 Practical Signal Processing and its Applications
Figure 3.7. Signum function sgn(t).
3.1.2.7 Ramp function
The ramp function r(t) is defined as:
r(t) = { 0, t < 0;
         t, t ≥ 0. (3.9)
Figure 3.8. Ramp function r(¢).
3.1.2.8 Rectangular pulse
The rectangular pulse function rect(t) is defined as:
rect(t) = { 1, |t| < 1/2;
            1/2, |t| = 1/2; (3.10)
            0, |t| > 1/2.
The rectangular pulse function has a width of 1 and a height of 1; some
refer to it as a “box” function.Continuous-Time Signals and Operations 61
Figure 3.9. Rectangular pulse function rect(t).
3.1.2.9 Triangular pulse
Define the triangular pulse function A(t) to be:
Δ(t) = { 1 − 2|t|, |t| ≤ 1/2;
         0, elsewhere. (3.11)
Figure 3.10. Triangular pulse function A(t).
3.1.2.10 Exponential decay
The waveform e^(−bt), where b is a real constant and b > 0, decays to zero
as t → ∞ and grows without bound as t → −∞. Define the exponentially
decaying signal u(t)e^(−bt), which is shown in Fig. 3.11.
Figure 3.11. Exponentially decaying signal u(t)e^(−0.223t).
An equivalent expression for the decaying exponential signal u(t)e^(−bt) is u(t)a^t, where
a = e^(−b) (0 < a < 1).

Figure 3.12. Function sinc(t).
3.1.3 Signal properties
3.1.3.1 Energy and power signals
The energy of a continuous-time signal is defined as the area of its
magnitude-squared value:
E_x = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} x(t)x*(t) dt. (3.13)
Clearly, E_x is always real and non-negative. The energy value is a
measure of how much work this signal would do if applied as a voltage
across a 1 Ω resistor. Signals having finite energy are called energy
signals. Examples of energy signals are rect(t), Δ(t), u(t)e^(−bt) and
sinc(t).
Because one cannot directly evaluate sinc(0) = sin(0)/0 = 0/0, apply l'Hospital's rule:
lim_{φ→0} sin(φ)/φ = lim_{φ→0} (d sin(φ)/dφ)/(dφ/dφ) = lim_{φ→0} cos(φ)/1 = 1.
For periodic and some other signals the integration result of Eq. (3.13)
will be infinite: E_x = ∞. In that case a more meaningful measure may be
the power of x(t), its average energy per unit time:

P_x = lim_{T→∞} ( (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt )
    = lim_{T→∞} ( (1/T) ∫_{−T/2}^{T/2} x(t)x*(t) dt ). (3.14)
Signals having finite, nonzero power are called power signals.
Examples of continuous-time power signals are u(t), sgn(t) and e^(jωt),
whose power values are {1/2, 1, 1} respectively. Power signals have
infinite energy, and energy signals have zero power. Some signals fall in
neither category: for example, r(t), δ(t) and δ_T(t) all have both infinite
energy and infinite power.
Periodic signals have power = (Energy in one period) / (time span of
one period). This may be expressed as P_x = (1/T₀) ∫_{T₀} |x_p(t)|² dt, where
periodic signal x_p(t) has period equal to T₀ sec. Based on this relation it
is easy to show that the power of sinusoid A cos(ωt + θ) is |A|²/2.
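A numeric check of this last claim, averaging |x(t)|² over exactly one period per Eq. (3.14): the result is |A|²/2 regardless of ω and θ. The Python sketch below is illustrative only (the particular A, ω, θ and step count are arbitrary choices of ours):

```python
import math

# Power of A*cos(w*t + theta), averaged over one fundamental period T0
A, w, theta = 2.0, 3.0, 0.7
T0 = 2 * math.pi / w
steps = 100_000
dt = T0 / steps
power = sum((A * math.cos(w * ((i + 0.5) * dt) + theta)) ** 2
            for i in range(steps)) * dt / T0      # midpoint-rule integral / T0
assert abs(power - A * A / 2) < 1e-6              # |A|^2 / 2 = 2.0 here
```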
3.1.3.2 Integrable signals
A signal x(t) is said to be absolutely integrable when:
∫_{−∞}^{∞} |x(t)| dt = ∫_{−∞}^{∞} √(x(t)x*(t)) dt < ∞. (3.15)
A signal x(t) is said to be square-integrable when:
∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} x(t)x*(t) dt = E_x < ∞. (3.16)
We see, therefore, that square-integrable waveforms are energy signals.
® These signals are still very useful as mathematical tools, however, as will be shown in
later chapters.
" The energy in one period of a sinusoid having amplitude A and period Ty is |A|?To/2.64 Practical Signal Processing and its Applications
3.1.3.3 Periodic signals
A signal x(t) is periodic when there are some nonzero time shifts T for
which the following is true:

x_p(t) = x_p(t + T). (3.17)

The fundamental period of x_p(t) is the smallest positive value T = T₀
that makes Eq. (3.17) true. A periodic signal has the property x_p(t) =
x_p(t + kT₀), for any integer k.
The fundamental frequency, in repetitions or cycles per second (Hertz),
of a periodic signal is the reciprocal of its fundamental period: f₀ = 1/T₀
Hz. Frequency may also be measured in radians per second, as we do in
this textbook, which is defined as ω = 2πf. Thus, fundamental frequency
ω₀ = 2π/T₀ rad/sec.
3.1.3.4 Sum of periodic signals
The sum of two or more periodic signals is only periodic if the frequencies
of these additive components are harmonically related: that is, the
frequency of each component is an integer multiple of a fundamental
frequency ω₀. As a result, for any two harmonically-related components
in the sum having frequencies ω₁ = k₁ω₀ and ω₂ = k₂ω₀, the ratio
ω₁/ω₂ = k₁/k₂ is a rational number. The fundamental period of the sum
of harmonically-related signals having fundamental frequency ω₀ is T₀ =
2π/ω₀ sec.
The term radians is derived from radius of a circle. When a wheel having radius r = 1
rolls along the ground for one revolution, the horizontal distance travelled is equal to the
wheel's circumference c = 2πr (or 2π radii). Thus f = 1 cycle per second corresponds
to ω = 2π(1) = 2π radians/sec. We use rad/sec measure to describe frequency since it
simplifies notation somewhat (e.g. cos(ωt) as compared to cos(2πft)), but in practice
engineers almost always specify frequency as f in Hertz (Hz).
For example: x(t) = cos(5t) + sin(1.5t) is periodic, having
fundamental frequency ω₀ = 0.5 rad/sec and period T₀ = 4π sec, whereas
y(t) = cos(5t) + sin(πt) is not periodic because ω₁/ω₂ = 5/π is
irrational.

Each additive term in a harmonically-related sum of sinusoids having
frequency ω = kω₀ is identified by its harmonic index: k = ω/ω₀. Thus,
in the example above, the term sin(1.5t) in x(t) has harmonic index 3 (is
"the 3rd harmonic" of ω₀ = 0.5 rad/sec).
3.1.3.5 Even and odd signals
A signal x(t) is even if it is unchanged after time reversal (replacing t
with —t):
x_e(t) = x_e(−t). (3.18)

A signal x(t) is odd if it is negated by time reversal:

x_o(t) = −x_o(−t). (3.19)

Every signal may be expressed as a sum of its even and odd components:

y(t) = y_e(t) + y_o(t), (3.20)
where y_e(t) = ½(y(t) + y(−t)), (3.21)
and y_o(t) = ½(y(t) − y(−t)). (3.22)

Signals that are even and are described in this chapter are δ(t), δ_T(t),
cos(ωt), rect(t), Δ(t) and sinc(t). Some odd signals are sin(ωt) and
sgn(t). Signals u(t), r(t), e^(jωt), and u(t)e^(−bt) are neither even nor odd.
Notice that even though the fundamental frequency of x(t) is 0.5 rad/sec there is no
individual component having that frequency.
The fundamental frequency is the 1st harmonic of a periodic signal, although it is rarely
referred to as such.
A consequence of Eq. (3.19) is that every odd signal has x_o(0) = 0, because only value zero
has the property that 0 = −0.
3.1.3.6 Right-sided and left-sided signals
If x(t) = 0 for t < T, where T is some finite time value, then x(t) is a
right-sided signal. For example, u(t + 3) is right-sided because it equals
zero for t < —3. Ifx(t) = 0 fort > T, where T is some finite time value,
then x(t) is a left-sided signal. The signal cos(t)u(—t) is left-sided
because it equals zero for t > 0.
3.1.3.7 Causal, anticausal signals
A signal x(t) is causal if x(t) = 0 for all t < 0. A signal x(t) is
anticausal if x(t) = 0 for all t > 0. Both causal and anticausal signals
may have a nonzero value at t = 0, hence only one signal is both causal
and anticausal: aδ(t), where a is a constant. Every signal may be
expressed as the sum of causal and anticausal components, as for example:
x(t) = x(t)u(t) + x(t)(1 − u(t)). Causal signals are a subclass of right-
sided signals, and anticausal signals are a subclass of left-sided signals.
Causal signals described in this chapter are δ(t), u(t), r(t), and u(t)e^(−bt).
Reversing a causal signal in time makes it anticausal, and vice versa.
Therefore δ(−t) = δ(t), u(−t), r(−t), and u(−t)e^(bt) are all anticausal
signals.
3.1.3.8 Finite-length and infinite-length signals
If x(t) = 0 for |t| > T, where T is some finite time value, then x(t) is a
finite-length signal. When no such value of T exists then x(t) is an
infinite-length signal. Finite-length signals are both right-sided and
left-sided. Finite-length signals described in this chapter are δ(t), rect(t) and
Δ(t). The signals δ_T(t), A cos(ωt + θ), sinc(t), sgn(t), u(t), r(t),
e^{jωt}, and u(t)e^{−at} all have infinite length.
ᵐ The term “causal” comes from the behavior of real-world systems. If, in response to an
impulse at t = 0, a system produced a signal prior to t = 0, it would be basing its output
on knowledge of a future input. Nature’s cause-and-effect relationship is that a
response at the present time can only be caused by past and present stimuli. When driven by a
causal input signal (such as an impulse at t = 0), real-world systems will produce a causal
output signal.
Continuous-Time Signals and Operations
3.1.4 Continuous-time signal operations
3.1.4.1 Time delay
When signal x(t) is shifted to the right by t₀ seconds to give x(t − t₀),
we call this a t₀-second time delay. On the other hand, a shift to the left is
called a time advance. Figures 3.13 and 3.14 demonstrate these concepts:ⁿ
Figure 3.13. Signal rect(t − 1), which is rect(t) after a 1-sec delay.
Figure 3.14. Signal rect(t + 1/2), which is rect(t) after a 1/2-sec advance.
Note that the endpoints of rect(t) are at time values t = {−1/2, 1/2},
while the endpoints of rect(t − 1) are at t − 1 = {−1/2, 1/2} and the
endpoints of rect(t + 1/2) are at t + 1/2 = {−1/2, 1/2}. In Fig. 3.15,
the time-delayed impulse δ(t − 4) is located at t = 4 (since δ(t) is located
at t = 0, δ(t − 4) is located at t − 4 = 0):
Figure 3.15. Delayed impulse function δ(t − 4).
ⁿ For ease of graphing, when the exact value of rect(t) at a discontinuity is not important
we will draw it as shown.
When, in a functional expression for signal x(t), every occurrence of
t is replaced by t − t₀, the result is that x(t) gets delayed by t₀ seconds.
This is demonstrated in Fig. 3.16:
Figure 3.16. Delayed exponentially decaying signal.
3.1.4.2 Time reversal
Replacing every occurrence of t with −t in an expression for x(t) causes
the signal to flip about t = 0 (only point x(0) stays where it originally
was). Time-reversing a causal signal makes it anticausal, and vice-versa.
Time-reversing an even signal has no effect, while time-reversing an odd
signal negates the original. Here is what happens when signal u(t − 2) is
reversed in time:
Figure 3.17. Signal u((−t) − 2).
Fig. 3.17 demonstrates an interesting (and often confusing) concept:
the signal shown could either be a delayed unit step function that was then
time-reversed, or a time-reversed unit step function that was then time-
advanced:
u((−t) − 2) = u(−(t + 2)). (3.23)
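Equation (3.23) can be spot-checked numerically. The Python/NumPy sketch below is an illustrative aside (the sampled step helper u() is an assumption, not code from the text); it compares the two expressions over many sample times:

```python
import numpy as np

# Numerical check of Eq. (3.23): u((-t) - 2) = u(-(t + 2)).
# u() is a hypothetical sampled unit step with u(0) = 1/2, matching the
# plotting convention used later in this chapter.
def u(t):
    return np.where(t > 0, 1.0, np.where(t == 0, 0.5, 0.0))

t = np.linspace(-5.0, 5.0, 1001)
lhs = u(-t - 2.0)        # delayed step, then time-reversed
rhs = u(-(t + 2.0))      # time-reversed step, then advanced
assert np.allclose(lhs, rhs)
```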
3.1.4.3 Time scaling
Time scaling a continuous-time signal is done either by replacing time
variable t with at (called “time-compression by factor a”), or by replacing
time variable t with t/a (called “time-expansion by factor a”). The two
operations have the effect of compressing the signal toward t = 0, or
expanding the signal away from t = 0, respectively (a > 0).
Regarding compressing a signal in time: one may be tempted to think
that more and more information may be compressed in time, ultimately
reaching the situation that all of the world’s knowledge is represented by
a near-zero-length signal. Theoretically this is possible, but it comes at a
price: when a signal is compressed in time the rapidity of changes in its
wave shape is increased by the same factor. Practical communications
systems are limited in how rapidly signals they pass may vary vs. time, so
that the dream of transmitting an infinite amount of information in zero
time cannot be achieved in practice.ᵒ
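The effect of time compression on a signal's extent is easy to see numerically: compressing rect(t) to rect(2t) halves the area (and hence the width) of the pulse. The Python/NumPy sketch below is an illustrative aside with an assumed sampling grid, not material from the text:

```python
import numpy as np

# Illustrative check: time-compression by factor 2 halves the pulse width.
def rect(t):
    return np.where(np.abs(t) < 0.5, 1.0, 0.0)

t = np.linspace(-2.0, 2.0, 40001)
dt = t[1] - t[0]
width_original = np.sum(rect(t)) * dt           # approximately 1.0
width_compressed = np.sum(rect(2.0 * t)) * dt   # approximately 0.5
```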
3.1.4.4 Cumulative integral and time differential
The cumulative integral of continuous-time signal x(t) is itself a function
of independent time index t:
Cumulative Integral{x(t)} = ∫_{−∞}^{t} x(τ) dτ. (3.24)
Here are some signals stated in terms of the cumulative integral operation:
u(t) = ∫_{−∞}^{t} δ(τ) dτ, (3.25)
r(t) = ∫_{−∞}^{t} u(τ) dτ, (3.26)
rect(t) = ∫_{−∞}^{t} {δ(τ + 1/2) − δ(τ − 1/2)} dτ. (3.27)
Related to the cumulative integral is the differential operator:
Differential of x(t) = dx(t)/dt. (3.28)
Here are some signals stated in terms of the differential operation:
ᵒ As we will see in Chapters 6–8, the rapidity of signal change vs. time is determined by a
system’s frequency bandwidth.
δ(t) = d/dt{u(t)}, (3.29)
u(t) = d/dt{r(t)}, (3.30)
δ(t + 1/2) − δ(t − 1/2) = d/dt{rect(t)}. (3.31)
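The cumulative-integral relations can be approximated from samples with a running sum. Here is a Python/NumPy sketch of Eq. (3.26), the cumulative integral of u(t) giving r(t); it is an illustrative aside with an assumed sampling grid, not code from the text:

```python
import numpy as np

# Sampled sketch of Eq. (3.26): the cumulative integral of u(t) is r(t).
# A running sum of step samples, scaled by dt, approximates the ramp.
t = np.linspace(-2.0, 2.0, 4001)
dt = t[1] - t[0]
u = np.where(t >= 0, 1.0, 0.0)

r_est = np.cumsum(u) * dt          # cumulative-integral estimate
r_true = np.maximum(t, 0.0)        # r(t) = t u(t)
assert np.max(np.abs(r_est - r_true)) < 2 * dt
```

The approximation error is on the order of one sample spacing, and shrinks as the grid is refined.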
3.1.4.5 Conjugate, magnitude and phase
The conjugate, magnitude and phase of signal g(t) are found according to
the normal rules of complex algebra:
g*(t) = Re{g(t)} − j Im{g(t)}, (3.32)
|g(t)|² = Re{g(t)}² + Im{g(t)}² = g(t)g*(t), (3.33)
∠g(t) = Tan⁻¹(Im{g(t)}/Re{g(t)}). (3.34)ʳ
3.1.4.6 Equivalent signal expressions
Basic signals presented in the previous section may be related to one
another via algebraic operations, or combined to give other signals of
interest. For example, here are different ways of describing the
rectangular, signum and unit step functions in terms of one another:ˢ
rect(t) = u(t + 1/2) − u(t − 1/2), (3.35)
rect(t) = u(t + 1/2) u(−t + 1/2), (3.36)
sgn(t) = 2u(t) − 1, (3.37)
sgn(t) = u(t) − u(−t), (3.38)
u(t) = 1 − u(−t). (3.39)
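These identities are easy to spot-check numerically away from the discontinuity points, where value conventions differ. The Python/NumPy sketch below is an illustrative aside (the helper definitions are assumptions, not code from the text):

```python
import numpy as np

# Numerical spot-check of Eqs. (3.35)-(3.39), sampled away from the
# discontinuity points so the value conventions there do not matter.
t = np.linspace(-3.0, 3.0, 1200) + 1e-4   # offset avoids t = 0, +/-1/2 exactly

u = lambda x: np.where(x > 0, 1.0, 0.0)
rect = lambda x: np.where(np.abs(x) < 0.5, 1.0, 0.0)
sgn = lambda x: np.sign(x)

assert np.allclose(rect(t), u(t + 0.5) - u(t - 0.5))   # Eq. (3.35)
assert np.allclose(rect(t), u(t + 0.5) * u(-t + 0.5))  # Eq. (3.36)
assert np.allclose(sgn(t), 2 * u(t) - 1)               # Eq. (3.37)
assert np.allclose(sgn(t), u(t) - u(-t))               # Eq. (3.38)
assert np.allclose(u(t), 1 - u(-t))                    # Eq. (3.39)
```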
ᵖ Note that the operation is performed on the entire signal g(t).
ʳ Evaluate the Tan⁻¹ function to produce a phase angle in all four quadrants; this is done
by noting the signs of the imaginary and real components of g(t) prior to division.
ˢ Plot these to convince yourself that the expressions are equivalent!
3.1.5 Convolution
One of the most useful operations for signal processing and linear system
analysis is the convolution operation,ᵗ defined here for continuous-time
signals:
Convolution: x(t) * y(t) = ∫_{−∞}^{∞} x(τ) y(t − τ) dτ. (3.40)
By replacing t − τ with φ, we obtain an equivalent form of this integral:
∫_{−∞}^{∞} y(φ) x(t − φ) dφ = y(t) * x(t). (3.41)
Thus the convolution operation is commutative:
x(t) * y(t) = y(t) * x(t). (3.42)
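Commutativity also holds for the sampled (discrete) convolution used later in this chapter. A quick Python/NumPy spot-check (an illustrative aside with assumed random test sequences):

```python
import numpy as np

# Discrete check of the commutativity in Eq. (3.42): sampled convolution
# gives the same result regardless of operand order.
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
y = rng.standard_normal(80)
assert np.allclose(np.convolve(x, y), np.convolve(y, x))
```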
3.1.5.1 Convolution with an impulse
Let’s see what happens when we convolve signal x(t) with the impulse
function δ(t):
x(t) * δ(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
= ∫_{−∞}^{∞} x(t) δ(t − τ) dτ
= x(t) ∫_{−∞}^{∞} δ(t − τ) dτ = x(t). (3.43)
This result, x(t) * δ(t) = x(t), shows us that convolving with an
impulse function is an identity operation in the realm of convolution.
More interesting and useful for signal processing, however, are the
following relations that are also easily derived:
x(t) * δ(t − t₁) = x(t − t₁), (3.44)
x(t) * δ(t + t₂) = x(t + t₂). (3.45)
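The discrete-time counterpart of Eq. (3.44) can be seen with a short Python/NumPy sketch (an illustrative aside, not code from the text): convolving a sequence with a shifted unit impulse delays it by the same number of samples.

```python
import numpy as np

# Discrete-time analogue of Eq. (3.44): convolving with a shifted unit
# impulse delays the sequence by the shift amount.
x = np.array([1.0, 2.0, 3.0, 4.0])
delta_shifted = np.array([0.0, 0.0, 1.0])   # unit impulse at index 2

z = np.convolve(x, delta_shifted)
# z is x delayed by two samples: [0, 0, 1, 2, 3, 4]
assert np.allclose(z, [0.0, 0.0, 1.0, 2.0, 3.0, 4.0])
```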
ᵗ The convolution operation is also called a “convolution product” because of its similarity
to multiplicative product notation.
Thus, we see that a signal may be delayed or advanced in time by
convolving it with a time-shifted impulse function. As will be discussed
in Ch. 8, convolution plays a critical role in describing the behavior of
linear time-invariant systems.
3.1.5.2 Convolution of two pulses
Define “pulse” p(t) to be a signal that is zero outside of a finite time span
t₁ ≤ t ≤ t₂, having pulse width w = t₂ − t₁. Given two pulses p_a(t) and
p_b(t), we can state the following about their convolution product x(t) =
p_a(t) * p_b(t):
• x(t) will also be a pulse; the pulse width of x equals the sum of the pulse
widths of p_a and p_b: w_x = w_pa + w_pb;
• The left edge of x(t) will lie at a time that is the sum of {time at left
edge of p_a(t)} and {time at left edge of p_b(t)};ᵘ
• The right edge of x(t) will lie at a time that is the sum of {time at right
edge of p_a(t)} and {time at right edge of p_b(t)}.ᵘ
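These pulse-width and edge-location rules can be verified on sampled pulses. Below is a Python/NumPy sketch with assumed example pulses (widths 1 and 2, both starting at t = 0); it is an illustrative aside, not code from the text:

```python
import numpy as np

# Sampled check that the convolution of two pulses is a pulse whose width
# is the sum of the individual pulse widths.
dt = 0.001
t = np.arange(0.0, 4.0, dt)
pa = (t <= 1.0).astype(float)      # pulse supported on 0 <= t <= 1 (width 1)
pb = (t <= 2.0).astype(float)      # pulse supported on 0 <= t <= 2 (width 2)

x = np.convolve(pa, pb) * dt       # samples of pa * pb at t = 0, dt, 2dt, ...
support = np.nonzero(x > 1e-6)[0]
width = (support[-1] - support[0]) * dt   # close to 1 + 2 = 3
```

Both pulses have their left edge at t = 0, so the convolution's support begins at 0 + 0 = 0 and ends near 1 + 2 = 3, as the rules above predict.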
3.1.6 Cross-correlation
Cross-correlation is used to measure the similarity between two signals,
and for energy signals this operation is defined by an expression similar to
convolution:
Cross-Correlation: ρ_xy(t) = ∫_{−∞}^{∞} x*(τ) y(τ + t) dτ. (3.46)ʷ
ᵘ Assume p_a(t) = 0 outside of t₁ ≤ t ≤ t₂ and that p_b(t) = 0 outside of t₃ ≤ t ≤ t₄.
Then x(t) = p_a(t) * p_b(t) = ∫_{−∞}^{∞} p_a(τ)p_b(t − τ) dτ = ∫_{t₁}^{t₂} p_a(τ)p_b(t − τ) dτ due to the
limited time span of p_a(t). By considering the limited time span of p_b(t) we note that
x(t) = 0 when t − t₁ < t₃ (t < t₁ + t₃) and when t − t₂ > t₄ (t > t₂ + t₄). Thus x(t) = 0
outside of time range t₁ + t₃ ≤ t ≤ t₂ + t₄.
(constant a > 0) or C_xy(t₀) = −1
(constant a < 0). C_xy(t₀) = 0 occurs for the case that x(t − t₀) and y(t)
ᵛ Goncharoff, V., J. E. Jacobs, and D. W. Cugell, “Wideband acoustic transmission of
human lungs,” Medical and Biological Engineering and Computing 27.5 (1989): 513–519.
ʷ Because both x(t) and y(t) are real, ρ_xy(t) = ρ_yx(−t), C_xy(t) = C_yx(−t), and thus
only time reversal distinguishes the similarity between x(t) and y(t) from the similarity
between y(t) and x(t).
are orthogonalˣ to one another; that is, they cannot be made any more
similar-looking by multiplying one of them by a nonzero constant. The
beauty of this normalization is that a threshold value may be set (e.g., 0.75)
above which two continuous-time signals are judged to be similar in
appearance, and this threshold is independent of the amplitude scaling
factors of either x(t) or y(t).
3.2.5 Application of convolution to probability theory
In probability theory, the probability density function f_X(x) is used to describe
the probabilistic nature of continuous random variable X. (The probability
that a sample of X will fall in amplitude range x₁ ≤ x ≤ x₂ is the area of
f_X(x) over the range x₁ ≤ x ≤ x₂.) Given two independent continuous
random variables X and Y having arbitrary probability density functions
f_X(x) and f_Y(y), respectively, what is the probability density function of
their sum X + Y? The answer is a convolution product:
f_{X+Y}(x) = f_X(x) * f_Y(x). (3.55)
3.3 Useful MATLAB® Code
A continuous-time signal is defined at every instant of time, so that even
over finite time spans there are infinitely many points of data. To represent
an analog signal using a digital computer having a finite amount of storage
memory, one must either reduce these signal data to a few parameters (e.g.,
for a sinusoid: amplitude A, frequency ω, and phase θ), which is a
parametric representation, or sample the signal at certain times over a
limited time span to give a list of numbers, which is a sampled
representation. Parametric representations of signals are the most accurate
and compact, but are only possible for certain idealized waveforms.
Typically, real-world signals cannot be represented parametrically without
some approximation error. Therefore, signals discussed in this chapter
ˣ The definition of orthogonal signals is this: x(t) and y(t) are orthogonal over all time
when ∫_{−∞}^{∞} x*(t)y(t) dt = ∫_{−∞}^{∞} x(t)y*(t) dt = 0.
will be manipulated and plotted in a sampled representation using
MATLAB?®. In the following demonstration code, we chose the number
of sample points, uniformly spaced across the time range being analyzed,
large enough so that sampling effects are not noticeable.ʸ
3.3.1 Plotting basic signals
The basic plotting command in MATLAB® is plot(x,y), which plots
a waveform by connecting points (x₁,y₁), (x₂,y₂), …, (x_N,y_N) using
straight line segments. Arrays x and y store the ordered data rectangular
components {x₁, x₂, …, x_N} and {y₁, y₂, …, y_N}, respectively:
% Example_Ch3_1
% Plotting ten cycles of a sine wave
Npts = 1000;
x = linspace(0,10,Npts);
omega = 2*pi;
y = sin(omega*x);
plot(x,y)
xlabel('time in seconds')
ylabel('amplitude')
title('sin(\omegat), \omega = 2\pi rad/sec')
Figure 3.20. MATLAB® plot of y(t) = sin(2πt).
MATLAB® permits the user to define functions, which can simplify
the plotting of combinations of building-block signals. The following is
an example of using this method to plot the signal y(t) = u(t + 1.5) +
rect(t/2) + Δ(t) − u(t − 1.5):
function Example_Ch3_2
% Plotting y(t) = u(t+1.5)+rect(t/2)+Delta(t)-u(t-1.5)
ʸ The effects of sampling an analog signal are discussed in detail in Chapter 6.
Npts = 10000;
t = linspace(-2,2,Npts);
y = u(t+1.5) + rect(t/2) + Delta(t) - u(t-1.5);
plot(t,y)
axis([-2 2 -1 3])
xlabel('time in seconds')
ylabel('amplitude')
title(['A plot of y(t) = u(t+1.5) + ', ...
    'rect(t/2) + \Delta(t) - u(t-1.5)'])
grid on
end
function f = u(t)
% define the unit step function
f = zeros(size(t));
f(t>0) = 1;
f(t==0) = 1/2;
end
function f = rect(t)
% define the rectangular pulse function
f = zeros(size(t));
f(abs(t)<1/2) = 1;
f(abs(t)==1/2) = 1/2;
end
function f = Delta(t)
% define the triangular pulse function
f = zeros(size(t));
indexes = find(abs(t)<1/2);
f(indexes) = 1 - 2*abs(t(indexes));
end
Figure 3.21. MATLAB® plot of y(t) = u(t + 1.5) + rect(t/2) + Δ(t) − u(t − 1.5).
3.3.2 Estimating continuous-time convolution
Continuous-time convolution product x(t) * y(t) = ∫_{−∞}^{∞} x(τ) y(t − τ) dτ
may be expressed as:
x(t) * y(t) = lim_{Δτ→0} Σ_{k=−∞}^{∞} x(kΔτ) y(t − kΔτ) Δτ. (3.56)
Even if Δτ > 0, this summation will give reasonably good results when
the two signals are relatively slowly-varying in time (x(t) ≈ x(t + Δτ) and
y(t) ≈ y(t + Δτ)). Let z(t) = x(t) * y(t). Then, for a suitably small time
increment Δτ,
z(nΔτ) ≈ Σ_{k=−∞}^{∞} x(kΔτ) y(nΔτ − kΔτ) Δτ
= Δτ Σ_{k=−∞}^{∞} x(kΔτ) y((n − k)Δτ). (3.57)
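Equation (3.57) translates directly into code: sample both signals, apply discrete convolution, and scale by Δτ. Ahead of the MATLAB version given in the text, here is an equivalent Python/NumPy sketch (an illustrative aside with assumed sampling parameters) that checks the estimate against the known result rect(t) * rect(t) = Δ(t/2):

```python
import numpy as np

# Sketch of Eq. (3.57): dtau * (discrete convolution of the samples).
dtau = 0.005
t = np.arange(-1.0, 1.0, dtau)
x = (np.abs(t) < 0.5).astype(float)          # rect(t) samples
y = (np.abs(t) < 0.5).astype(float)          # rect(t) samples

z = dtau * np.convolve(x, y)                 # estimates z(t) = rect(t) * rect(t)
t_z = np.arange(len(z)) * dtau + 2 * t[0]    # conv output starts at t = -2

# rect(t) * rect(t) is a triangle of height 1 centered at t = 0:
z_true = np.maximum(1.0 - np.abs(t_z), 0.0)
assert np.max(np.abs(z - z_true)) < 0.02
```

The error is on the order of the sample spacing Δτ, consistent with the Riemann-sum interpretation above.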
In MATLAB® the conv function calculates discrete-time convolution,ᶻ
as defined by the following summation:
Σ_{k=−∞}^{∞} x(k) y(n − k). (3.58)
Therefore, an approximation to z(t) = x(t) * y(t) at times t = nΔτ
may be obtained by sampling signals x(t) and y(t) at every t = nΔτ, then
using MATLAB’s built-in discrete convolution function to process the
two sequences of numbers, and finally scaling the result by Δτ. Implied is
the requirement that the signals x(t) and y(t) be of finite length (zero
outside of some range) so that their sampled representations may be stored
in finite-length arrays without any omissions. In the following example
we demonstrate this method to convolve 2rect(t) with Δ(t − 1):
function Example_Ch3_3
% Estimating the convolution product of 2rect(t)
% and Delta(t-1)
Npts = 1000;
t = linspace(-2,2,Npts);
dTau = t(2)-t(1);
z = dTau * conv(2*rect(t),Delta(t-1));
t1 = linspace(-4,4,length(z));
plot(t1,z)
xlabel('time in seconds')
ylabel('amplitude')
title('2rect(t) convolved with \Delta(t-1)')
axis([-3 3 0 1])
grid on
end
ᶻ Discrete convolution is presented in Chapter 2.
function f = rect(t)
% define the rectangular pulse function
f = zeros(size(t));
f(abs(t)<1/2) = 1;
f(abs(t)==1/2) = 1/2;
end
function f = Delta(t)
% define the triangular pulse function
f = zeros(size(t));
indexes = find(abs(t)<1/2);
f(indexes) = 1 - 2*abs(t(indexes));
end
Figure 3.22. MATLAB® plot of 2rect(t) convolved with Δ(t − 1).
3.3.3 Estimating energy and power of a signal
The energy of finite-length signal x(t) may be estimated from its samples
using the relation:
E_x = ∫_{−∞}^{∞} |x(t)|² dt = lim_{Δτ→0} Σ_{k=−∞}^{∞} |x(kΔτ)|² Δτ
≈ Δτ Σ_{k=−∞}^{∞} |x(kΔτ)|². (3.59)
In the following MATLAB® example we verify that the energy of the
triangular pulse Δ(t) is indeed = 1/3 as the theoryᵃᵃ predicts:
function Example_Ch3_4
% Estimating energy of the triangular pulse
Npts = 1e6;
ᵃᵃ Energy{Δ(t)} = ∫_{−∞}^{∞} |Δ(t)|² dt = ∫_{−1/2}^{1/2} (1 − 2|t|)² dt
= 2∫_{0}^{1/2} (1 − 2t)² dt = 2∫_{0}^{1/2} (1 − 4t + 4t²) dt
= 2(t − 2t² + (4/3)t³)|₀^{1/2} = 2(1/2 − 1/2 + 1/6) = 1/3.
t = linspace(-1/2,1/2,Npts);
dTau = t(2)-t(1);
E = dTau * sum(Delta(t).^2);
disp(['Est. energy of triangular pulse is: ', ...
    num2str(E)])
end
function f = Delta(t)
% define the triangular pulse function
f = zeros(size(t));
indexes = find(abs(t)<1/2);
f(indexes) = 1 - 2*abs(t(indexes));
end
>> Example_Ch3_4
Est. energy of triangular pulse is: 0.33333
3.3.4 Detecting pulses using normalized correlation
In this example, we synthesize a signal containing a sum of triangular
pulses at various delays and scaling factors, plus noise. Then, using the
approximation to C_xz(t) as shown below, we calculate the similarity
between this synthesized waveform z(t) and a noise-free triangular pulse
x(t). The resulting normalized cross-correlation similarity measure
C_xz(t) is plotted vs. t (which tells us at what times the synthesized
waveform shape is most similar to a triangular pulse):
C_xz(nΔτ) = ρ_xz(nΔτ)/√(E_x E_z)
≈ Σ_{k=−∞}^{∞} x*(kΔτ) z((k + n)Δτ) / √((Σ_{k=−∞}^{∞} |x(kΔτ)|²)(Σ_{k=−∞}^{∞} |z(kΔτ)|²)). (3.60)
function Example_Ch3_5
% Detecting the presence of triangular pulses in a
% noisy waveform
% Note: the convolution implementation of cross-
% correlation is used to take advantage of MATLAB's
% fast discrete convolution algorithm.
Npts = 1e5;
t = linspace(-3,3,Npts);
x = Delta(t);
figure(1); plot(t,x); grid on
xlabel('time in seconds');
title('Triangular pulse function x(t):')
y = -10*Delta(t+2) - 5*Delta(t) + 10*Delta(t-2);
figure(2); plot(t,y); grid on
xlabel('time in seconds'); title('Signal y(t):')
z = y + 10*randn(size(t));
figure(3); plot(t,z); grid on
xlabel('time in seconds');
title('Signal z(t) = y(t) + noise:')
Cxz = conv(fliplr(conj(x)),z) / ...
    sqrt(sum(abs(x).^2)*sum(abs(z).^2));
t1 = linspace(-6,6,length(Cxz));
figure(4); plot(t1,Cxz); grid on
v = axis; axis([-3 3 v(3:4)])
xlabel('time in seconds');
title('Normalized cross correlation C_x_z(t):')
end
function f = Delta(t)
% define the triangular pulse function
f = zeros(size(t));
indexes = find(abs(t)<1/2);
f(indexes) = 1 - 2*abs(t(indexes));
end
Figure 3.23. Original signal y(t) that is composed of three triangular pulses.
Figure 3.24. Triangular pulse x(t) used for waveform matching.
Figure 3.25. Signal z(t) = y(t) + noise added.
Figure 3.26. Normalized cross-correlation result C_xz(t), showing locations and polarities
of the triangular pulses that were detected in the noise waveform z(t).
3.3.5 Plotting estimated probability density functions
One application of signal processing is dealing with random signals.
Randomness describes many aspects of real-world behavior, especially in
signal communications: unwanted noise interference, communication path
delay, number of users trying to send messages at the same time, etc.
When a random waveform is sampled to give a random variable, this
random variable may be described by its probability density function
(PDF). The following MATLAB® function estimates and plots the PDF
from a histogram of the random samples:
function [] = plot_pdf(A, range)
% This function plots an estimate of the PDF
% of a random variable from the samples in array A
if nargin == 1
    % in case user does not specify range
    % of the PDF plot:
    Nbins = 100;
    A = A(:);
    Hmin = min(A);
    Hmax = max(A);
    Span = Hmax - Hmin;
    range = linspace(Hmin- ...
        Span/10,Hmax+Span/10,Nbins);
end
dr = range(2) - range(1);
PDF = hist(A(:),range)/dr/length(A);
plot(range,PDF)
end
In Section 3.2.5 we learned that when two independent continuous random
variables are added together, the PDF of their sum is the convolution
product of their individual PDFs. To demonstrate this concept: the
following MATLAB® code generates 10⁶ samples of random variable
X whose PDF f_X(x) = rect(x), independently generates 10⁶ samples of
random variable Y whose PDF f_Y(y) = rect(y), and then adds together
these two lists of sample values to give random variable Z whose PDF
f_Z(z) should equal the convolution product of the other two PDFs:
Δ(a/2) = rect(a) * rect(a).ᵇᵇ
% Example_Ch3_6
% Demonstrate that the PDF of the sum of two
% independent random variables is the convolution
% product of their individual PDF's
Npts = 1e6;
X = rand(1,Npts)-1/2; % PDF of X is rect(x)
Y = rand(1,Npts)-1/2; % PDF of Y is rect(y)
% (Note that Y is independent of X)
% create new random variable as the sum of 2 others
Z = X + Y;
% range of amplitude values & plot resolution
range = linspace(-2,2,200);
figure(1); plot_pdf(X,range)
axis([-1.5,1.5,0,1.5]); grid on;
xlabel('amplitude value')
title('Estimated PDF of random variable X')
figure(2); plot_pdf(Y,range)
axis([-1.5,1.5,0,1.5]); grid on;
xlabel('amplitude value')
title('Estimated PDF of random variable Y')
figure(3); plot_pdf(Z,range)
axis([-1.5,1.5,0,1.5]); grid on;
xlabel('amplitude value')
title('Estimated PDF of random variable Z = X+Y')
ᵇᵇ Independent variable a is used to avoid the confusing notation Δ(x/2) = rect(x) * rect(y).
The resulting plots confirm the premise f_Z(a) = f_X(a) * f_Y(a). It is
as if nature performed the convolution!
Figure 3.27. Estimated PDF of r.v. X. Figure 3.28. Estimated PDF of r.v. Y.
Figure 3.29. Estimated PDF of random variable Z = X + Y, demonstrating the fact that
f_Z(a) = f_X(a) * f_Y(a).
3.4 Chapter Summary and Comments
• This chapter introduces basic continuous-time signal types that are
often used to present the concepts of signal processing. Even though
most of them cannot be generated (for example, even a simple sine
wave has infinite time duration that makes it impossible to observe in
our lifetime), such signals are useful models for waveforms that are
used in practice.
• The Dirac delta function is not well-defined by what it is (zero width,
infinite height, unity area located at the single point t = 0), but is well-defined
by what it does when convolved with other signals. In signal
processing the main purposes of a delta function are to represent nonzero
area at a single point, to achieve waveform time-shift via convolution,
to determine a signal’s value at a specific time (Ch. 6), and to
serve as input signal for linear system analysis (Ch. 8).
• The basic signals that we define in this chapter may be used as building
blocks to synthesize more complicated signals.
• Operations such as addition, multiplication, convolution, time shift,
time reversal, differentiation, etc., are useful for manipulating and
combining continuous-time signals. In later chapters, we will make use
of these operations for “signal processing.”
• Almost every continuous-time signal and operation has a discrete-time
domain equivalent. These relationships are evident in the parallel
presentations in Chapters 2 and 3, and may be explained using the
theory of sampling (the topic of Ch. 6).
3.5 Homework Problems
P3.1 Simplify: ∫_{−∞}^{∞} δ(t) dt =
P3.2 Simplify: ∫_{−∞}^{∞} δ(t) x(t) dt =
P3.3 Simplify: δ(t) x(t) =
P3.4 Simplify: δ(t − 1) x(t) =
P3.5 Simplify: δ(t) x(t − 1) =
P3.6 Simplify: ∫_{−∞}^{∞} δ(t + 1) x(t) dt =
P3.7 Simplify: ∫_{−∞}^{∞} δ(t) x(t + 1) dt =
P3.8 Simplify: ∫_{−∞}^{∞} δ(t − 1) x(t + 1) dt =
P3.9 Simplify: ∫_{−∞}^{∞} δ(t − β) x(β) dβ =
P3.10 Given waveform x(t) = t rect(t/3 − 1/2).
a) Find the value of x(4).
b) Find the value of x(1).
c) When w(t) = x(t) * δ₆(t), find the value of w(10).
d) When y(t) = x(t + 3/2) * δ₆(t), find the value of y(−10).
P3.11 Is x(t) = sin(5t) + cos(4.9t) periodic? If so, what is its period?
P3.12 Is y(t) = cos(πt/8) + cos(3t) periodic? If so, what is its
period?
P3.13 Express y(t) = 4 cos(5t) as a sum of complex exponential
signals.
P3.14 Simplify: u(t) + u(−t) =
P3.15 State in terms of a rectangular pulse function:
u(t + 1) u(−t − 1) =
P3.16 State in terms of a rectangular pulse function:
u(t − 1) − u(t − 2) =
P3.17 State in terms of two unit step functions: −sgn(t) =
P3.18 State in terms of two unit step functions: rect(t/2) =
P3.19 State in terms of a rectangular pulse function:
u(t − 5) − u(t) =
P3.20 State in terms of a triangular pulse function:
r(t + 3) − 2r(t) + r(t − 3) =
P3.21 State in terms of a triangular pulse function:
r(t/2) − 2r(t/2 + 2) + r(t/2 + 4) =
P3.22 Calculate the area of f(t) = 2e^{−t}u(t).
P3.23 Calculate the energy of f(t) = 2e^{−t}(u(t) − u(t − 3)).
P3.24 Given: g(t) = sinc(πt/2). Calculate g(t) at times
t = {0, 1, 2, 3, 4} seconds.
P3.25 Find an expression for E_{sinc(t)}, the energy of sinc(t).
P3.26 What is the energy of Δ(t)?
P3.27 What is the power of sgn(t)?
P3.28 What is the energy of rect(t/4)?
P3.29 What is the power of Δ(5t)?
P3.30 Label each of the following signals as {odd, even, or neither}:
a) rect(t)
b) rect(t + 1/2)
c) δ(t − 1) − δ(t + 1)
d) δ(t − 1)
e) sgn(t)
f) sgn(t) rect(t)
P3.31 Find the even part of each of the following signals:
a) rect(t)
b) rect(t + 1/2)
c) δ(t − 1) − δ(t + 1)
d) δ(t − 1)
e) sgn(t)
f) sgn(t) rect(t)
P3.32 Find the odd part of each of the following signals:
a) rect(t)
b) rect(t + 1/2)
c) δ(t − 1) − δ(t + 1)
d) δ(t − 1)
e) sgn(t)
f) sgn(t) rect(t)
P3.33 The result of multiplying x(t) by u(t + 2) will be:
(check all that apply)
a) causal
b) right-sided
c) anticausal
d) left-sided
P3.34 The result of multiplying x(t) by rect(t + 3) will be:
(check all that apply)
a) causal
b) right-sided
c) anticausal
d) left-sided
P3.35 List three signals that are both causal and anticausal.
P3.36 The product of x(t) and u(t) is guaranteed to be causal:
(True or False?)
P3.37 The product of x(−t) and u(t) is guaranteed to be anticausal:
(True or False?)
P3.38 The product of x(t) and 1 − u(t) is guaranteed to be anticausal:
(True or False?)
P3.39 Time-shifting x(t)u(t) to the left by 5 seconds guarantees that it
is not causal: (True or False?)
P3.40 Every causal signal is right-sided: (True or False?)
P3.41 Every left-sided signal is anticausal: (True or False?)
P3.42 The product of a right-sided signal and a left-sided signal is a
finite-width signal: (True or False?)
P3.43 Find the cumulative integral of the signal δ(t + 4).
P3.44 Find the cumulative integral of the signal δ(t + 1) − δ(t − 2).
P3.45 Find the time derivative of sgn(t).
P3.46 Find the time derivative of rect(t).
P3.47 Simplify: u(2t) =
P3.48 Simplify: u(2(t − 4)) =
P3.49 Express the following as a rectangular pulse signal:
u(t + 3) − u(t) =
P3.50 Express the following as a sum of unit step functions:
rect(t − 5) =
P3.51 Express the following as a product of unit step functions:
rect(t − 5) =
P3.52 Simplify: δ(t) * sinc(t) =
P3.53 Simplify: 2δ(t − 2) * 2 sinc(t + 3) =
P3.54 Simplify: δ(t − 4) * x(t) =
P3.55 Simplify: δ₄(t) + δ₄(t − 2) =
P3.56 Given: periodic signal x_p(t) has period 2 seconds, and every 2
seconds it toggles between the values 0 and 1. At time t = 0 sec
there is a 0-to-1 transition. Express x_p(t) as a convolution
product of an impulse train with a rectangular pulse function.
P3.57 Given: periodic signal x_p(t) has period 2 seconds, and every 2
seconds it toggles between the values 0 and 1. At time t = 1/2
sec there is a 0-to-1 transition. Express x_p(t) as a convolution
product of an impulse train with a rectangular pulse function.
P3.58 Given: finite-width signal x₁(t) = 0 except over −2 ≤ t ≤ 5 sec,
and finite-width signal x₂(t) = 0 except over
ᵃ The integration may be done over any frequency interval of width 2π rad/sec, which is
the period of periodic spectrum F(e^{jω}).
ᵇ Identifying the class of signals whose Fourier transforms exist and are invertible is an
advanced mathematical topic.
Frequency Analysis of Discrete-Time Signals
summation non-convergence may be overcome, and the Fourier transform
made invertible, by permitting the use of generalized functions such as the
Diracᶜ impulse δ(ω) in the frequency domain.
Note that, although we discuss Fourier analysis in terms of time variation
and frequency content, n and ω are two independent variables that
may have different interpretations in other fields of science, engineering
and mathematics where Fourier analysis is used. As a matter of fact, in
some applications such as digital image processing, Fourier analysis is
even applied to multidimensional functions; e.g., a function of {n, m} is
mapped to a function of {e^{jω₁}, e^{jω₂}}.
4.1.2 Fourier transforms of basic signals
Let us now derive, step-by-step, the Fourier transforms of a few of the
basic sequences introduced previously in Ch. 2. For many other functions,
it is straightforward to find the Fourier transform by evaluating Eq. (4.1),
which may be simplified with the help of a table of summations.ᵈ In other
instances, application of Fourier transform properties is an excellent tool
available to us. Examples worked out in the following section will provide
experience in such techniques.
4.1.2.1 Exponentially decaying signal
Let x(n) = u(n)aⁿ (assuming |a| < 1). Its discrete-time Fourier transform
X(e^{jω}) is then found from the definition in Eq. (4.1):
X(e^{jω}) = F{u(n)aⁿ} = Σ_{n=−∞}^{∞} u(n) aⁿ e^{−jωn}
= Σ_{n=0}^{∞} aⁿ e^{−jωn} = Σ_{n=0}^{∞} (ae^{−jω})ⁿ = 1/(1 − ae^{−jω}),ᵉ (4.6)
or u(n)aⁿ and 1/(1 − ae^{−jω}) comprise a “Fourier transform pair”:
ᶜ The Dirac impulse function and its properties are covered in Chapter 3.
ᵈ Dwight; Abramowitz et al.
u(n)aⁿ ⇔ 1/(1 − ae^{−jω}) (|a| < 1) (4.7)
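The pair in Eq. (4.7) can be checked numerically by truncating the DTFT sum, since aⁿ decays quickly for |a| < 1. The Python/NumPy sketch below is an illustrative aside (the value of a and the frequency grid are assumptions, not material from the text):

```python
import numpy as np

# Numerical check of Eq. (4.7): the truncated DTFT sum of u(n) a^n
# matches the closed form 1/(1 - a e^{-j omega}) when |a| < 1.
a = 0.8
omega = np.linspace(-np.pi, np.pi, 257)

n = np.arange(0, 200)              # 0.8^200 ~ 4e-20, so 200 terms suffice
X_sum = (a ** n) @ np.exp(-1j * np.outer(n, omega))
X_closed = 1.0 / (1.0 - a * np.exp(-1j * omega))
assert np.max(np.abs(X_sum - X_closed)) < 1e-10
```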
Note that X(e^{jω}) = 1/(1 − ae^{−jω}) is periodic in ω, with period 2π
rad/sec, as are all spectra produced by the discrete-time Fourier transform
(Eq. (4.1)). This periodicity is the direct result of the discrete nature of
x(n).ᶠ Only one period of a periodic signal is needed to describe the entire
signal, and thus the inverse discrete-time Fourier transform’s (Eq. (4.2))
range of integration is one period of X(e^{jω}). This causal signal, x(n) =
u(n)aⁿ, when time reversed becomes the anticausal x(−n) = u(−n)a⁻ⁿ,
whose discrete-time Fourier transform is:ᵍ
a⁻ⁿu(−n) ⇔ 1/(1 − ae^{jω}) (|a| < 1) (4.8)
In preparation for the next derivation, we will calculate the Fourier
transform of the following sequence that is symmetric about n = 0:
F{u(n)aⁿ + u(−n)a⁻ⁿ − δ(n)} = F{u(n)aⁿ} + F{u(−n)a⁻ⁿ} − F{δ(n)}
= 1/(1 − ae^{−jω}) + 1/(1 − ae^{jω}) − 1 (|a| < 1), (4.9)
which simplifies to give another useful Fourier transform pair:
a^{|n|} ⇔ (1 − a²)/(1 − 2a cos(ω) + a²) (|a| < 1) (4.10)
ᵉ The Geometric Series Σ_{n=0}^{∞} rⁿ = 1/(1 − r) when |r| < 1. When r = ae^{−jω}, |r| = |ae^{−jω}| =
|a||e^{−jω}| = |a|. ∴ |r| < 1 ⟺ |a| < 1.
ᶠ We shall see that, with any Fourier transform pair, when a signal is discrete (as if
multiplied by an impulse train) in one domain then its transformed version is periodic in the
other domain.
ᵍ F{u(−n)a⁻ⁿ} = Σ_{n=−∞}^{∞} u(−n)a⁻ⁿe^{−jωn} = Σ_{n=−∞}^{0} a⁻ⁿe^{−jωn} = Σ_{m=0}^{∞} (ae^{jω})ᵐ =
1/(1 − ae^{jω}) when |a| < 1.
4.1.2.2 Constant value
As mentioned previously, a constant waveform such as x(n) = 1 presents
us with the problem that its Fourier transform summation does not converge
at values of ω that are integer multiples of 2π rad/sec. This summation
may be solved with the help of generalized functions, however.
First, consider the spectrum F(e^{jω}) = 2πδ_{2π}(ω), which has a Dirac
impulse located at every integer multiple of 2π.ʰ What is its inverse Fourier
transform?
F⁻¹{2πδ_{2π}(ω)} = (1/2π)∫_{2π} 2πδ_{2π}(ω)e^{jωn} dω = (1/2π)∫_{−π}^{π} 2πδ(ω)e^{jωn} dω
= ∫_{−π}^{π} δ(ω)e^{jωn} dω = ∫_{−π}^{π} δ(ω) dω = 1. (4.11)
Therefore, if the Fourier transform is invertible then F{1} =
2πδ_{2π}(ω). Let us verify this with the help of generalized function x(n) =
lim_{a→1} a^{|n|} = 1:
X(e^{jω}) = F{lim_{a→1} a^{|n|}} = lim_{a→1}((1 − a²)/(1 − 2a cos(ω) + a²))
= {∞, ω = 2πk, k ∈ ℤ;ⁱ 0 otherwise}. (4.12)
What kind of function X(e^{jω}) is zero everywhere except at points ω =
2πk, k ∈ ℤ? One candidate is the Dirac delta function impulse train
δ_{2π}(ω) times a constant (having unspecified impulse areas). To find the
area of each impulse, we note that the area of periodic spectrum (1 −
a²)/(1 − 2a cos(ω) + a²) over one period is independent of a:
∫_{−π}^{π} (1 − a²)/(1 − 2a cos(ω) + a²) dω = 2π. (4.13)
We confirm, therefore, that x(n) = 1 and X(e^{jω}) = 2πδ_{2π}(ω) are a
Fourier transform pair:
ʰ The Dirac impulse train is presented in Chapter 3.
ⁱ ℤ is the symbol that represents the set of integers (Zahlen is the German word for numbers).
1 ⇔ 2πδ_{2π}(ω) (4.14)
4.1.2.3 Impulse function

By substituting δ(n) for f(n) in the discrete-time Fourier transform definition (Eq. 4.1) we obtain:

F(e^{jω}) = Σ_{n=−∞}^{∞} δ(n)e^{−jωn} = Σ_{n=−∞}^{∞} δ(n)e^{−jω·0}
= Σ_{n=−∞}^{∞} δ(n) = 1.  (4.15)

Using similar arguments as before, we may also show that the inverse Fourier transform of 1 is δ(n). We now have another Fourier transform pair:

δ(n) ↔ 1  (4.16)
4.1.2.4 Delayed impulse function

Similarly, and just as easily, we may find the Fourier transform of δ(n − n₀):

δ(n − n₀) ↔ e^{−jωn₀}  (4.17)

4.1.2.5 Signum function

The signum function is defined as sgn(n) = {1, n > 0; 0, n = 0; −1, n < 0}. Because the discrete-time Fourier transform of sgn(n) does not converge in the ordinary sense, we must consider using functions that approach sgn(n) in the limit. One such generalized function, among many that may be considered, is:

sgn(n) = lim_{a→1⁻} {u(n)a^{n} − u(−n)a^{−n}}.  (4.20)
Solving for F{sgn(n)} in this formulation:

F{sgn(n)} = lim_{a→1⁻} ( Σ_{n=−∞}^{∞} {u(n)a^{n} − u(−n)a^{−n}}e^{−jωn} )
= lim_{a→1⁻} ( Σ_{n=0}^{∞} a^{n}e^{−jωn} − Σ_{n=−∞}^{0} a^{−n}e^{−jωn} )
= lim_{a→1⁻} ( 1/(1 − ae^{−jω}) − 1/(1 − ae^{jω}) )
= lim_{a→1⁻} ( (ae^{−jω} − ae^{jω}) / ((1 − ae^{−jω})(1 − ae^{jω})) )
= lim_{a→1⁻} ( −2ja sin(ω) / (1 − 2a cos(ω) + a²) )
= −2j sin(ω)/(2 − 2 cos(ω))
= j sin(ω)/(cos(ω) − 1)
= (1 + e^{−jω})/(1 − e^{−jω}).  (4.21)
One can also show that F^{−1}{(1 + e^{−jω})/(1 − e^{−jω})} = sgn(n). Therefore, we obtain the Fourier transform pair:

sgn(n) ↔ (1 + e^{−jω})/(1 − e^{−jω})  (4.22)

F^{−1}{(1 + e^{−jω})/(1 − e^{−jω})} = (1/2π)∫_{−π}^{π} ((1 + e^{−jω})/(1 − e^{−jω}))e^{jωn}dω, which with numerical methods may easily be confirmed to give: {1, n > 0; 0, n = 0; −1, n < 0} = sgn(n).
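The numerical confirmation mentioned in the footnote can be sketched as follows (a midpoint frequency grid avoids the removable singularity at ω = 0; grid size chosen arbitrarily):

```python
import numpy as np

# Numerically invert F(w) = (1 + e^{-jw})/(1 - e^{-jw})  (Eq. (4.21))
# and confirm that the result matches sgn(n), as the footnote claims.
N = 8192
w = (np.arange(N) + 0.5) * (2*np.pi/N) - np.pi      # midpoint grid, skips w = 0
F = (1 + np.exp(-1j*w)) / (1 - np.exp(-1j*w))

n = np.arange(-5, 6)
f = np.array([np.sum(F * np.exp(1j*w*k)) * (2*np.pi/N) / (2*np.pi) for k in n])

print(np.round(f.real, 3))   # approximately -1 for n < 0, 0 at n = 0, +1 for n > 0
```

The imaginary parts cancel by symmetry of the grid, leaving a real result that approaches sgn(n) as the grid is refined.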
4.1.2.6 Unit step function

As with sgn(n), the Fourier transform of u(n) can be found by expressing it in the form of a generalized function. Or, now that F{sgn(n)}, F{δ(n)} and F{1} are known, we may express u(n) in terms of these functions and quickly find its Fourier transform using the linear superposition property:

u(n) = ½(sgn(n) + δ(n) + 1)

F{u(n)} = ½F{sgn(n) + δ(n) + 1}
= ½F{sgn(n)} + ½F{δ(n)} + ½F{1}
= ½((1 + e^{−jω})/(1 − e^{−jω})) + ½(1) + ½(2πδ_{2π}(ω))
= ½((1 + e^{−jω} + 1 − e^{−jω})/(1 − e^{−jω})) + πδ_{2π}(ω)
= ½(2/(1 − e^{−jω})) + πδ_{2π}(ω) = 1/(1 − e^{−jω}) + πδ_{2π}(ω).  (4.23)

Therefore, because this Fourier transform may be shown to be invertible, we have the Fourier transform pair:

u(n) ↔ 1/(1 − e^{−jω}) + πδ_{2π}(ω)  (4.24)
4.1.2.7 Complex exponential function

To find the Fourier transform of complex exponential signal e^{jω₀n}, which is essentially a phasor at frequency ω₀, we begin by solving for the inverse Fourier transform of frequency-shifted comb function δ_{2π}(ω − ω₀) using Eq. (4.2):

Linear superposition and other properties of the Fourier transform are discussed in Section 4.1.3.1.

F^{−1}{δ_{2π}(ω − ω₀)} = (1/2π)∫_{2π} δ_{2π}(ω − ω₀)e^{jωn}dω
= (1/2π)∫_{2π} δ(ω − ω₀)e^{jωn}dω
= (e^{jω₀n}/2π)∫_{2π} δ(ω − ω₀)dω = e^{jω₀n}/2π.  (4.25)

After multiplying both sides by 2π, we get the desired result: 2πF^{−1}{δ_{2π}(ω − ω₀)} = F^{−1}{2πδ_{2π}(ω − ω₀)} = e^{jω₀n}, or

e^{jω₀n} ↔ 2πδ_{2π}(ω − ω₀)  (4.26)
4.1.2.8 Sinusoid

The Fourier transform of A cos(ω₀n + θ) is derived by expressing it in terms of complex exponential functions using Euler's identity:

A cos(ω₀n + θ) = A(½e^{j(ω₀n+θ)} + ½e^{−j(ω₀n+θ)}).  (4.27)

Then, taking advantage of e^{jω₀n} ↔ 2πδ_{2π}(ω − ω₀):

F{A cos(ω₀n + θ)} = (A/2)F{e^{j(ω₀n+θ)} + e^{−j(ω₀n+θ)}}
= (A/2)F{e^{jω₀n}e^{jθ} + e^{−jω₀n}e^{−jθ}}
= (A/2)e^{jθ}F{e^{jω₀n}} + (A/2)e^{−jθ}F{e^{−jω₀n}}
= Aπe^{jθ}δ_{2π}(ω − ω₀) + Aπe^{−jθ}δ_{2π}(ω + ω₀),  (4.28)

or

cos(ω₀n + θ) ↔ πe^{−jθ}δ_{2π}(ω + ω₀) + πe^{jθ}δ_{2π}(ω − ω₀)  (4.29)

Setting θ = {0, −π/2} gives us these two Fourier transform pairs:

cos(ω₀n) ↔ πδ_{2π}(ω + ω₀) + πδ_{2π}(ω − ω₀)  (4.30)

sin(ω₀n) ↔ jπδ_{2π}(ω + ω₀) − jπδ_{2π}(ω − ω₀)  (4.31)
Note that, as demonstrated above, the Fourier transform of a real even
signal is real, and the Fourier transform of a real odd signal is imaginary.”
4.1.2.9 Rectangular pulse function

The Fourier transform of a rectangular function is derived by direct summation, applying the definition given by Eq. (4.1):

F(e^{jω}) = Σ_{n=−∞}^{∞} rect_N(n)e^{−jωn}
= Σ_{n=−N}^{N} e^{−jωn} = 1 + 2Σ_{k=1}^{N} cos(kω)  (4.32)

We now have the Fourier transform pair:

rect_N(n) ↔ 1 + 2Σ_{k=1}^{N} cos(kω)  (4.33)

These and some other discrete-time Fourier transform pairs are summarized in Table 4.1.
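Equation (4.33) is easy to confirm numerically; a minimal sketch (N = 2 chosen arbitrarily):

```python
import numpy as np

# Check Eq. (4.33): the DTFT of rect_N(n) (ones for |n| <= N) equals
# 1 + 2*sum_{k=1..N} cos(k*w), evaluated here on a dense frequency grid.
N = 2
w = np.linspace(-np.pi, np.pi, 1001)

direct = sum(np.exp(-1j*w*n) for n in range(-N, N + 1))    # definition, Eq. (4.1)
closed = 1 + 2*sum(np.cos(k*w) for k in range(1, N + 1))   # closed form, Eq. (4.33)

print(np.max(np.abs(direct - closed)))   # essentially zero
```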
With sequences: even × even = even, even × odd = odd, odd × odd = even. Also, Σ_{n=−∞}^{∞} x_e(n) = x_e(0) + 2Σ_{n=1}^{∞} x_e(n), and Σ_{n=−∞}^{∞} x_o(n) = 0. Therefore F{x(n)} = Σ_{n=−∞}^{∞} x(n)e^{−jωn} = Σ_{n=−∞}^{∞} (x_e(n) + x_o(n))(cos(ωn) − j sin(ωn)) = Σ_{n=−∞}^{∞} x_e(n)cos(ωn) + Σ_{n=−∞}^{∞} x_o(n)cos(ωn) − jΣ_{n=−∞}^{∞} x_e(n)sin(ωn) − jΣ_{n=−∞}^{∞} x_o(n)sin(ωn) = x_e(0) + 2Σ_{n=1}^{∞} x_e(n)cos(ωn) + 0 − j0 − 2jΣ_{n=1}^{∞} x_o(n)sin(ωn). ∴ F{x(n)} = x_e(0) + 2Σ_{n=1}^{∞} x_e(n)cos(ωn) − 2jΣ_{n=1}^{∞} x_o(n)sin(ωn), which clearly shows why real and even x(n) has real F{x(n)}, and real and odd x(n) has imaginary F{x(n)}.
Table 4.1. Table of discrete-time Fourier transform pairs.

Entries relate f(n) = (1/2π)∫_{2π} F(e^{jω})e^{jωn}dω (Eq. (4.2)) to F(e^{jω}) = Σ_{n=−∞}^{∞} f(n)e^{−jωn} (Eq. (4.1)); for example:
a^{n}u(n) ↔ 1/(1 − ae^{−jω}), |a| < 1

Numerical methods confirm that (1 + e^{−jω})/(1 − e^{−jω}) = −2jΣ_{k=1}^{∞} sin(kω).

Figure 4.2. From Example 4.1: F{x(n)} = X(e^{jω}) = 2 + 4Σ_{k=1} cos(kω) + … cos(kω).
4.1.3.2 Time shifting

In practice, quite often we encounter a signal whose Fourier transform is known but for the fact that it is delayed (or advanced) in time. Here we ask a question: is there a simple relationship between the two Fourier transforms?

Consider our experience with sinusoidal signal analysis in a basic circuit theory course. We know that a time-shifted sinusoid at a frequency ω₀ rad/sec is a sinusoid at the same frequency and amplitude, but now has a phase offset from its original value. Since the Fourier transform of an arbitrary time function is a linear combination of various frequencies, each of these frequency components will now have a corresponding phase offset associated with it:

f(n ± n₀) ↔ F(e^{jω})e^{±jωn₀}  (4.37)

The inverse discrete-time Fourier transform integral may be written as:
f(n) = (1/2π)∫_{2π} F(e^{jω})(cos(ωn) + j sin(ωn))dω =
(1/2π)∫_{−π}^{π} F(e^{jω})cos(ωn)dω + (j/2π)∫_{−π}^{π} F(e^{jω})sin(ωn)dω =
(1/2π)∫_{−π}^{π} (F_e(e^{jω}) + F_o(e^{jω}))cos(ωn)dω + (j/2π)∫_{−π}^{π} (F_e(e^{jω}) + F_o(e^{jω}))sin(ωn)dω =
(1/2π)∫_{−π}^{π} F_e(e^{jω})cos(ωn)dω + (j/2π)∫_{−π}^{π} F_o(e^{jω})sin(ωn)dω =
(1/π)∫_{0}^{π} F_e(e^{jω})cos(ωn)dω + (j/π)∫_{0}^{π} F_o(e^{jω})sin(ωn)dω. In other words, f(n) is a weighted sum of sines and
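The time-shift property (4.37) can be checked numerically; here a short arbitrary sequence (chosen only for illustration) is delayed by n₀ = 3 samples:

```python
import numpy as np

# Illustrate Eq. (4.37): delaying a sequence by n0 multiplies its DTFT
# by exp(-j*w*n0).  The test sequence is arbitrary.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)             # x(n), n = 0..7
n0 = 3
w = np.linspace(-np.pi, np.pi, 512)

X  = sum(x[n] * np.exp(-1j*w*n) for n in range(8))           # DTFT of x(n)
Xd = sum(x[n] * np.exp(-1j*w*(n + n0)) for n in range(8))    # DTFT of x(n - n0)

print(np.max(np.abs(Xd - X*np.exp(-1j*w*n0))))   # essentially zero
```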
4.1.3.3 Time/frequency duality

With continuous-time signals, both forward and inverse Fourier transform expressions involve integration. The expressions for F{f(t)} and for F^{−1}{F(ω)} have a similar form, which leads to a relationship called "time/frequency duality." With discrete-time signals, however, the expression for F{f(n)} is a summation due to the discrete nature of f(n). This also results in the spectrum F(e^{jω}) being periodic. For these reasons, although the relationship of time/frequency duality exists, it is not obvious. We will revisit this concept later in the chapter when discussing the discrete-time Fourier series and Discrete Fourier Transform (DFT).
4.1.3.4 Convolution

Many signal processing applications require either multiplying or convolving together two signals. As we shall now see, multiplication and convolution are closely related through the Fourier transform. What may be a difficult-to-simplify convolution of two signals in one domain is simply expressed as a multiplicative product in the other domain. Here we show the correspondence between multiplication and convolution in the time and frequency domains.

cosines at all positive frequencies in the range of integration (where the weights for sines are (j/π)F_o(e^{jω}) and the weights for cosines are (1/π)F_e(e^{jω})).

Proof: F{f(n + n₀)} = Σ_{n=−∞}^{∞} f(n + n₀)e^{−jωn} = Σ_{m=−∞}^{∞} f(m)e^{−jω(m−n₀)} = Σ_{m=−∞}^{∞} f(m)e^{−jωm}e^{+jωn₀} = e^{+jωn₀}F{f(n)} = e^{+jωn₀}F(e^{jω}).
If f₁(n) ↔ F₁(e^{jω}) and f₂(n) ↔ F₂(e^{jω}), then:

f₁(n) * f₂(n) ↔ F₁(e^{jω})F₂(e^{jω})  (4.38)

f₁(n)f₂(n) ↔ (1/2π)F₁(e^{jω}) ⊛ F₂(e^{jω})  (4.39)

The "⊛" operator in Eq. (4.39) above is called a circular, or periodic, convolution that needs to be done because both F₁(e^{jω}) and F₂(e^{jω}) are periodic (a regular convolution integral will not converge):

F₁(e^{jω}) ⊛ F₂(e^{jω}) = ∫_{2π} F₁(e^{jβ})F₂(e^{j(ω−β)})dβ.  (4.40)

The circular convolution integral has limits over one period only (any band of frequencies spanning 2π rad/sec), since both F₁(e^{jω}) and F₂(e^{jω}) are completely described by one of their periods.
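The time-domain side of Eq. (4.38) is easy to verify numerically for finite-length sequences (chosen arbitrarily here):

```python
import numpy as np

# Verify Eq. (4.38): the DTFT of a convolution equals the product of DTFTs.
def dtft(x, n, w):
    """DTFT of finite sequence x with time indices n, at frequencies w."""
    return np.array([np.sum(x * np.exp(-1j*wk*n)) for wk in w])

w  = np.linspace(-np.pi, np.pi, 257)
f1 = np.array([1., 2., 3.])            # support n = 0..2
f2 = np.array([1., -1., 0.5, 2.])      # support n = 0..3
y  = np.convolve(f1, f2)               # support n = 0..5

F1 = dtft(f1, np.arange(3), w)
F2 = dtft(f2, np.arange(4), w)
Y  = dtft(y,  np.arange(6), w)

print(np.max(np.abs(Y - F1*F2)))   # essentially zero
```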
Example 4.2

Find the Fourier transform of x(n) = rect₅(n) * sinc(2πn/11):

Let f₁(n) = rect₅(n) ↔ 1 + 2Σ_{k=1}^{5} cos(kω) and f₂(n) = sinc(2πn/11) ↔ (11/2)rect(11ω/4π) * δ_{2π}(ω); then F{f₁(n) * f₂(n)} = F₁(e^{jω})F₂(e^{jω}) = {1 + 2Σ_{k=1}^{5} cos(kω)} × {(11/2)rect(11ω/4π) * δ_{2π}(ω)}.

Proof: F{f₁(n) * f₂(n)} = Σ_{n=−∞}^{∞} (Σ_{k=−∞}^{∞} f₁(k)f₂(n − k))e^{−jωn} = Σ_{k=−∞}^{∞} f₁(k){Σ_{n=−∞}^{∞} f₂(n − k)e^{−jωn}} = Σ_{k=−∞}^{∞} f₁(k)F{f₂(n − k)} = Σ_{k=−∞}^{∞} f₁(k)(e^{−jωk}F₂(e^{jω})) = (Σ_{k=−∞}^{∞} f₁(k)e^{−jωk})F₂(e^{jω}) = F₁(e^{jω})F₂(e^{jω}).

Proof: F{f₁(n)f₂(n)} = Σ_{n=−∞}^{∞} f₁(n)f₂(n)e^{−jωn} = Σ_{n=−∞}^{∞} F^{−1}{F₁(e^{jβ})}f₂(n)e^{−jωn} = Σ_{n=−∞}^{∞} ((1/2π)∫_{2π} F₁(e^{jβ})e^{jβn}dβ)f₂(n)e^{−jωn} = (1/2π)∫_{2π} F₁(e^{jβ})(Σ_{n=−∞}^{∞} e^{jβn}f₂(n)e^{−jωn})dβ = (1/2π)∫_{2π} F₁(e^{jβ})(Σ_{n=−∞}^{∞} f₂(n)e^{−j(ω−β)n})dβ = (1/2π)∫_{2π} F₁(e^{jβ})F₂(e^{j(ω−β)})dβ = (1/2π)F₁(e^{jω}) ⊛ F₂(e^{jω}).
Figure 4.3. The spectrum X(e^{jω}) = F{x(n)} in Example 4.2.
4.1.3.5 Modulation

In communication applications, we find that an information signal f(n), such as speech, may be wirelessly transmitted over radio waves by first multiplying it with a sinusoid at a relatively high frequency ω₀ (called the carrier frequency). Therefore, it is important to understand the frequency-domain effects of multiplication with a sinusoid in time. We find, as shown below, that the modulation process shifts F(e^{jω}) in two directions in frequency, by ±ω₀.

Consider a signal m(n) = f(n)cos(ω₀n). From Eq. 4.1, we have:

F{m(n)} = M(e^{jω}) = Σ_{n=−∞}^{∞} f(n)cos(ω₀n)e^{−jωn}
= Σ_{n=−∞}^{∞} f(n)(½e^{jω₀n} + ½e^{−jω₀n})e^{−jωn}
= Σ_{n=−∞}^{∞} f(n)(½e^{−j(ω−ω₀)n} + ½e^{−j(ω+ω₀)n})
= ½Σ_{n=−∞}^{∞} f(n)e^{−j(ω−ω₀)n} + ½Σ_{n=−∞}^{∞} f(n)e^{−j(ω+ω₀)n}
= ½F(e^{j(ω−ω₀)}) + ½F(e^{j(ω+ω₀)}).  (4.41)

This is called the modulation property of the Fourier transform:

f(n)cos(ω₀n) ↔ ½F(e^{j(ω−ω₀)}) + ½F(e^{j(ω+ω₀)})  (4.42)
When allowing for an arbitrary cosine phase shift φ, the modulation property is:

f(n)cos(ω₀n + φ) ↔ ½e^{jφ}F(e^{j(ω−ω₀)}) + ½e^{−jφ}F(e^{j(ω+ω₀)})  (4.43)
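A numerical sketch of the modulation property (4.42), using rect₅(n) and a carrier frequency chosen arbitrarily for illustration:

```python
import numpy as np

# Verify Eq. (4.42): multiplying f(n) by cos(w0*n) places half-amplitude
# copies of F(e^jw) at w - w0 and w + w0.
def dtft(x, n, w):
    return np.array([np.sum(x * np.exp(-1j*wk*n)) for wk in w])

n  = np.arange(-10, 11)
f  = (np.abs(n) <= 5).astype(float)    # rect_5(n)
w0 = np.pi/3
m  = f * np.cos(w0*n)                  # modulated signal

w  = np.linspace(-np.pi, np.pi, 401)
M  = dtft(m, n, w)
S  = 0.5*dtft(f, n, w - w0) + 0.5*dtft(f, n, w + w0)   # right side of (4.42)

print(np.max(np.abs(M - S)))   # essentially zero
```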
Example 4.3

Plot F{rect₁₀(n)cos((π/6)n)}.

To begin, rect₁₀(n) ↔ 1 + 2Σ_{k=1}^{10} cos(kω) from Entry #14 in Table 4.1. This periodic spectrum is plotted in Fig. 4.4:

Figure 4.4. Plot of F{rect₁₀(n)} = 1 + 2Σ_{k=1}^{10} cos(kω).

The modulation property (Eq. (4.42)) tells us that F{f(n)cos((π/6)n)} = (1/2)F(e^{j(ω−π/6)}) + (1/2)F(e^{j(ω+π/6)}). Therefore, F{rect₁₀(n)cos((π/6)n)} = (1/2)(1 + 2Σ_{k=1}^{10} cos(k(ω − π/6))) + (1/2)(1 + 2Σ_{k=1}^{10} cos(k(ω + π/6))) = 1 + Σ_{k=1}^{10} (cos(k(ω − π/6)) + cos(k(ω + π/6))), as in Fig. 4.5.
Modulation, therefore, shifts the energy/power of f(n) from the original ("baseband") frequency range that is occupied by F(e^{jω}) to the frequency range in the vicinity of ω₀. If ω₀ is such that the shifted copies of F(e^{jω}) are non-overlapping in frequency, then f(n) may easily be recovered from the modulated signal f(n)cos(ω₀n) through the process of filtering, which will be shown in a later chapter.

Modulation, or the frequency shifting of F(e^{jω}), is also commonly referred to as frequency heterodyning.

Figure 4.5. Plot of F{rect₁₀(n)cos((π/6)n)} = 1 + Σ_{k=1}^{10} (cos(k(ω − π/6)) + cos(k(ω + π/6))).
4.1.3.6 Frequency shift

We saw in the previous section that modulation is the multiplication of f(n) by a sinusoid at frequency ω₀, and it results in the shifting of F(e^{jω}) by ±ω₀ in the frequency domain. To shift F(e^{jω}) in only one direction, multiply f(n) by e^{±jω₀n}:

f(n)e^{±jω₀n} ↔ F(e^{j(ω∓ω₀)})  (4.44)
4.1.3.7 Time scaling

Unlike continuous-time signals, discrete-time (or sampled) signals cannot be easily stretched or compressed in time by an arbitrary scale factor. There are two time scaling operations that are simple to implement on sampled signals, however, and we present them next.

Down-sampling by integer factor a:

Define g(n) = f(an). The sequence g(n) exists only when scaling factor a is an integer (otherwise a·n is not an integer, making f(an) undefined). Considering only positive, nonzero values of a, we see that g(n) is a down-sampled version of f(n): beginning at time index n = 0, and going from there in both positive and negative directions, only every a-th sample of f(n) is retained. Obviously, some information is lost by down-sampling f(n). What effect does down-sampling have in the frequency domain? We answer this question by taking the discrete-time Fourier transform (Eq. (4.1)) of f(an) to obtain:

f(an) ↔ (1/a)X(e^{jω/a}), a ∈ Z⁺,
where X(e^{jω}) = F(e^{jω}) * Σ_{m=0}^{a−1} δ(ω − 2πm/a).  (4.45)

To understand the spectrum of f(an), first consider X(e^{jω}) that is defined above in Eq. (4.45). Because it is formed by adding together a frequency-shifted copies of F(e^{jω}), with frequency shifts that are equally spaced over 2π, X(e^{jω}) is periodic with period equal to 2π/a. Subsequent frequency stretching of X(e^{jω}) by factor a gives X(e^{jω/a}) a period of 2π (rad/sec). Eq. (4.45) leads to the observation that when a time signal is down-sampled by some factor (compressed in time by removing samples) its spectrum is expanded by the same factor. In the frequency domain, down-sampling results in possibly destructive overlap-addition of the shifted and stretched copies of F(e^{jω}).
Negative values of integer a result in time reversal, which is not our goal here, and a = 0 creates a constant-level signal g(n) = f(0) that is not very interesting.
Except when the samples being eliminated all happen to be zero (it is possible; see the following section on up-sampling).
The loss of frequency-domain information, which is a consequence of overlap-adding spectra, is equivalent to the amount of information lost in the time domain by eliminating all but every a-th sample of f(n). The information content of a signal is a fascinating topic that is typically presented in engineering courses on probability theory.
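Equation (4.45) can be verified numerically for a = 2; the even-symmetric test sequence below is an arbitrary choice:

```python
import numpy as np

# Verify Eq. (4.45) for down-sampling by a = 2: the DTFT of f(2n) equals
# (1/2)[F(e^{jw/2}) + F(e^{j(w - 2pi)/2})], i.e. stretched, shifted copies.
def dtft(x, n, w):
    return np.array([np.sum(x * np.exp(-1j*wk*n)) for wk in w])

a  = 2
nf = np.arange(-4, 5)
f  = np.exp(-0.3*np.abs(nf))      # arbitrary test sequence, support n = -4..4

g  = f[::a]                        # g(n) = f(2n), support n = -2..2
ng = np.arange(-2, 3)

w = np.linspace(-np.pi, np.pi, 301)
G = dtft(g, ng, w)
S = 0.5*(dtft(f, nf, w/a) + dtft(f, nf, (w - 2*np.pi)/a))

print(np.max(np.abs(G - S)))   # essentially zero
```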
Up-sampling by integer factor a:

g(n) = { f(n/a), n/a = integer;
         0, otherwise.

By this definition, sequence g(n) is essentially a copy of sequence f(n) with a − 1 zeros inserted between each of its samples. We therefore call g(n) an up-sampled version of f(n). Because all original samples remain, no information is lost by up-sampling f(n). Once again, we are interested to see what effect this operation has in the frequency domain. By calculating the discrete-time Fourier transform of g(n) using Eq. (4.1), we obtain:

g(n) = { f(n/a), n/a = integer; 0, otherwise } ↔ G(e^{jω}) = F(e^{jaω})  (4.46)

The spectrum of an up-sampled sequence is compressed by factor a, resulting in a cycles of the periodic spectrum over 2π rad/sec. Equation (4.46) leads to the observation that when a time signal is up-sampled by some factor (stretched in time by inserting zero samples) its spectrum is compressed by the same factor, which shortens the period from 2π rad/sec to 2π/a rad/sec.
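A corresponding numerical check of Eq. (4.46), with an arbitrary short sequence and a = 3:

```python
import numpy as np

# Verify Eq. (4.46): inserting a-1 zeros between samples compresses the
# spectrum, G(e^{jw}) = F(e^{jaw}).
def dtft(x, n, w):
    return np.array([np.sum(x * np.exp(-1j*wk*n)) for wk in w])

a  = 3
nf = np.arange(5)
f  = np.array([1., 2., 3., 2., 1.])   # arbitrary test sequence

g  = np.zeros(a*len(f) - (a - 1))     # f with a-1 zeros between samples
g[::a] = f
ng = np.arange(len(g))

w = np.linspace(-np.pi, np.pi, 301)
err = np.max(np.abs(dtft(g, ng, w) - dtft(f, nf, a*w)))
print(err)   # essentially zero
```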
4.1.3.8 Parseval's Theorem

Parseval's Theorem relates the energy of a signal calculated in the time domain to the energy as calculated in the frequency domain. To begin, assume that sequence f(n) may be complex-valued and that f(n) ↔ F(e^{jω}). The energy of f(n) is

Σ_{n=−∞}^{∞} f(n)f*(n) = Σ_{n=−∞}^{∞} f(n){(1/2π)∫_{2π} F(e^{jω})e^{jωn}dω}*
= Σ_{n=−∞}^{∞} f(n){(1/2π)∫_{2π} F*(e^{jω})e^{−jωn}dω}
= (1/2π)∫_{2π} F*(e^{jω}){Σ_{n=−∞}^{∞} f(n)e^{−jωn}}dω
= (1/2π)∫_{2π} F*(e^{jω})F(e^{jω})dω,  (4.47)

leading to a statement of the theorem:

Σ_{n=−∞}^{∞} |f(n)|² = (1/2π)∫_{2π} |F(e^{jω})|²dω  (4.48)

This theorem is also known in the literature as Rayleigh's Theorem.
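For a finite-length sequence, |F(e^{jω})|² is a trigonometric polynomial, so the integral in Eq. (4.48) is evaluated exactly by a sufficiently dense uniform frequency grid; the zero-padded FFT provides such a grid. The test sequence below is arbitrary:

```python
import numpy as np

# Check Parseval's theorem, Eq. (4.48).
f = np.array([3., -1., 2., 0.5, -2., 1.])
E_time = np.sum(np.abs(f)**2)

F = np.fft.fft(f, 64)                   # samples of F(e^{jw}) at 64 frequencies
E_freq = np.sum(np.abs(F)**2) / 64      # (1/2pi) * integral over one period

print(E_time, E_freq)   # the two energies agree
```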
Next, we summarize the properties of the discrete-time Fourier transform in Table 4.2.

Table 4.2. Table of discrete-time Fourier transform properties.

1. Linearity: (Eq. (4.34))
af₁(n) + bf₂(n) ↔ aF₁(e^{jω}) + bF₂(e^{jω})

2. Time shift: (Eq. (4.37))
f(n ± n₀) ↔ F(e^{jω})e^{±jωn₀}

3. Convolution: (Eq. (4.38))
f₁(n) * f₂(n) ↔ F₁(e^{jω}) × F₂(e^{jω})

4. Multiplication: (Eq. (4.39))
f₁(n) × f₂(n) ↔ (1/2π){F₁(e^{jω}) ⊛ F₂(e^{jω})}

(Continued)

See Eq. (4.40) for the definition of F₁(e^{jω}) ⊛ F₂(e^{jω}).
Table 4.2. (Continued)

5. Modulation: (Eq. (4.42))
f(n)cos(ω₀n) ↔ ½F(e^{j(ω−ω₀)}) + ½F(e^{j(ω+ω₀)})

6. Modulation (arbitrary carrier phase): (Eq. (4.43))
f(n)cos(ω₀n + φ) ↔ (e^{jφ}/2)F(e^{j(ω−ω₀)}) + (e^{−jφ}/2)F(e^{j(ω+ω₀)})

7. Frequency shift: (Eq. (4.44))
f(n)e^{±jω₀n} ↔ F(e^{j(ω∓ω₀)})

8. Down-sampling by factor a ∈ Z⁺: (Eq. (4.45))
f(an) ↔ (1/a)X(e^{jω/a}),
where X(e^{jω}) = F(e^{jω}) * Σ_{m=0}^{a−1} δ(ω − 2πm/a)

9. Up-sampling by factor a ∈ Z⁺: (Eq. (4.46))
{f(n/a), n/a = integer; 0, otherwise} ↔ F(e^{jaω})

10. Time reversal: (see proof below)
f(−n) ↔ F(e^{−jω})

11. Conjugation in time: (see proof below)
f*(n) ↔ F*(e^{−jω})

(Continued)

Proof: F{f(−n)} = Σ_{n=−∞}^{∞} f(−n)e^{−jωn} = Σ_{k=−∞}^{∞} f(k)e^{jωk} = F(e^{−jω}).
Proof: F{f*(n)} = Σ_{n=−∞}^{∞} f*(n)e^{−jωn} = (Σ_{n=−∞}^{∞} f(n)e^{jωn})* = (Σ_{n=−∞}^{∞} f(n)e^{−j(−ω)n})* = F*(e^{−jω}).
Table 4.2. (Continued)

12. Real, even f_e(n) = Re{f_e(−n)}: (see footnote on p. 102)
f_e(n) ↔ F{f_e(n)} = f_e(0) + 2Σ_{n=1}^{∞} f_e(n)cos(ωn)

13. Real, odd f_o(n) = Re{−f_o(−n)}: (see footnote on p. 102)
f_o(n) ↔ F{f_o(n)} = −2jΣ_{n=1}^{∞} f_o(n)sin(ωn)

14. Backward difference:
f(n) − f(n − 1) ↔ (1 − e^{−jω})F(e^{jω})

15. Cumulative sum: (see proof below)
Σ_{k=−∞}^{n} f(k) = f(n) * u(n) ↔ F(e^{jω})/(1 − e^{−jω}) + πF(e^{j0})δ_{2π}(ω)

16. Parseval's Theorem (for energy signals): (Eq. (4.48))
E_f = Σ_{n=−∞}^{∞} |f(n)|² = (1/2π)∫_{2π} |F(e^{jω})|²dω

17. Cross-correlation (for energy signals): (see proof below)
φ_xy(n) ↔ X*(e^{jω})Y(e^{jω})

(Continued)

Proof: f(n) * u(n) = Σ_{k=−∞}^{∞} f(k)u(n − k) = Σ_{k=−∞}^{n} f(k); F{f(n) * u(n)} = F(e^{jω})U(e^{jω}) = F(e^{jω}){1/(1 − e^{−jω}) + πδ_{2π}(ω)} = F(e^{jω})/(1 − e^{−jω}) + πF(e^{jω})δ_{2π}(ω) = F(e^{jω})/(1 − e^{−jω}) + πF(e^{j0})δ_{2π}(ω), due to the periodicity of F(e^{jω}).
Proof: φ_xy(n) = x*(−n) * y(n), therefore F{φ_xy(n)} = F{x*(−n)}F{y(n)} = X*(e^{jω})Y(e^{jω}).
Table 4.2. (Continued)

18. Autocorrelation (for energy signals): (see proof below)
φ_xx(n) ↔ X*(e^{jω})X(e^{jω}) = |X(e^{jω})|²

19. Duality (for periodic signals): (Eq. (4.57))
F_p(n) ↔ N f_p(−k)  (DFT pair)
4.1.4 Graphical representation of the Fourier transform

Engineers and scientists like to have an image of mathematical formulas and functions to facilitate their understanding. Thus far we have presented graphical representations of only real signals in either time or frequency domain. The Fourier transform of a real time-domain signal is generally a complex function of ω. In this section, we discuss several different ways of graphing such complex-valued waveforms. Such plots are extremely useful in understanding the processing of signals through various devices such as filters, amplifiers, mixers, etc.

Frequency-domain plots are presented with an abscissa (x-axis) scale of ω radians/second in this textbook. Because a discrete-time signal's spectrum is periodic, it is sufficient to plot only one period (e.g., |ω| ≤ π). As you know, ω = 2πf, where the units of f are cycles/second (Hertz). This means that the spectrum's period on the f scale is 1 Hz. Keep in mind that both ω and f are normalized frequency values; the actual frequency value is recovered by multiplying with factor f_s (Hz) / 1 (Hz).

Proof: F{φ_xy(n)} = X*(e^{jω})Y(e^{jω}), therefore F{φ_xx(n)} = X*(e^{jω})X(e^{jω}).
A "real world" signal x(n) will be real, and will almost always have x_e(n) ≠ 0, x_o(n) ≠ 0. Therefore, since the Fourier transform X(e^{jω}) = x(0) + 2Σ_{n=1}^{∞} x_e(n)cos(ωn) − 2jΣ_{n=1}^{∞} x_o(n)sin(ωn) (see footnote on p. 102), X(e^{jω}) will almost always be a complex function of ω.
4.1.4.1 Rectangular coordinates

In this form of visualization, a three-dimensional plot is created using the real part of F(e^{jω}), the imaginary part of F(e^{jω}), and ω as three axes. Figure 4.6 shows F{rect₃(n − 100)} = 7{sinc(3.5ω) * δ_{2π}(ω)}e^{−j100ω} plotted this way, for example:

Figure 4.6. A graph of 7{sinc(3.5ω) * δ_{2π}(ω)}e^{−j100ω} vs. ω/π.

It may be more instructive to plot two 2-D plots: one of Re{F(e^{jω})} vs. ω, and another of Im{F(e^{jω})} vs. ω. For example, Fig. 4.7 shows the 3-D plot of a given F(e^{jω}):

Figure 4.7. A 3-D graph of complex-valued F(e^{jω}).

The sampling frequency, in Hz, is f_s = 1/T_s (T_s is the time difference between successive samples of the signal).
Figures 4.8 and 4.9 display the same information, using a pair of 2-D plots, in terms of its real and imaginary components:

Figure 4.8. Re{F(e^{jω})} vs. ω, corresponding to Fig. 4.7.

Figure 4.9. Im{F(e^{jω})} vs. ω, corresponding to Fig. 4.7.
Yet another way of displaying a complex-valued frequency spectrum is to draw the path that F(e^{jω}) takes, in the complex plane, as ω varies over some frequency range. Essentially this is a projection of the type of 3-D graph shown in Figs. 4.6 or 4.7 onto a plane perpendicular to the ω-axis. The resulting 2-D graph is called a Nyquist plot. This plot requires some extra labels to indicate the frequency associated with any point of interest along the path of the function. Nyquist plots are used for stability analysis in control theory. In Ch. 7 we show an interesting Nyquist plot that describes a digital filter.
4.1.4.2 Polar coordinates

As an alternative to the previously presented graphs of real and imaginary parts of F(e^{jω}), frequently it is more appropriate to plot magnitude |F(e^{jω})| and phase ∠F(e^{jω}) vs. ω. Figures 4.10 and 4.11 show these two graphs for a complex frequency-domain function. In general, separately plotting magnitude and phase components is the preferred method for displaying complex frequency-domain waveforms.

Figure 4.10. A graph of |(1 + 2Σ_{k=1}^{2} cos(kω))e^{−jω}| vs. ω.

Figure 4.11. A graph of ∠{(1 + 2Σ_{k=1}^{2} cos(kω))e^{−jω}} vs. ω.
4.1.4.3 Graphing the amplitude of F(e^{jω})

Consider the case where the Fourier transform gives a real, bipolar result: for example, F(e^{jω}) = 1 + 2Σ_{k=1}^{2} cos(kω). Plotting its magnitude gives us the same graph as in Fig. 4.10, and a plot of the phase is shown in Fig. 4.12. Notice that whenever (1 + 2Σ_{k=1}^{2} cos(kω)) < 0, the phase angle is ±π so that |1 + 2Σ_{k=1}^{2} cos(kω)|e^{±jπ} = −|1 + 2Σ_{k=1}^{2} cos(kω)|.

Any odd integer value of k will make e^{jkπ} = −1; the values chosen alternate so that the phase angle is an odd function of ω.
Figure 4.12. A graph of ∠(1 + 2Σ_{k=1}^{2} cos(kω)) vs. ω.

If we look at Fig. 4.12 we notice that this phase plot has minimal information. This is because the (1 + 2Σ_{k=1}^{2} cos(kω)) function is real, and phase serves only to specify the polarity of the function. In such cases, it is advantageous to plot F(e^{jω}) = 1 + 2Σ_{k=1}^{2} cos(kω) directly, as shown in Fig. 4.13:

Figure 4.13. A graph of 1 + 2Σ_{k=1}^{2} cos(kω) vs. ω.

With access to MATLAB®, all these plots can be readily generated; thus the choice of which one to use should be based on the clarity of the information to be conveyed and how it may be used in a particular application.
4.1.4.4 Logarithmic scales and Bode plots

Due to the logarithmic nature of human perception (e.g., we hear frequencies on a logarithmic scale, as reflected by the tuning of a piano keyboard), and to linearize some curves being plotted, it is common to graph one or both axes of a spectrum on a logarithmic scale. For example, one may choose to plot F(e^{jω}) vs. log₁₀(ω) instead of plotting F(e^{jω}) vs. ω. Compared to a linear frequency scale, the logarithmic scale expands the spacing of frequencies as they approach zero and compresses frequencies together as they approach infinity. Thus, a logarithmic scale never includes zero (d.c.) nor any negative frequency values.

When plotting the magnitude spectrum |F(e^{jω})| it is common to use logarithmic scales for both magnitude and frequency. The resulting graph is then proportional to log|F(e^{jω})| vs. log(ω). A version of the log-log plot, called a Bode magnitude plot, is where the vertical scale is 10 log₁₀|F(e^{jω})|² and the horizontal scale is log₁₀(ω). The Bode phase plot has vertical scale ∠F(e^{jω}) and horizontal scale log₁₀(ω). Bode magnitude and phase plots are common in system analysis, where they provide a convenient way to combine responses of cascaded systems.
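A minimal sketch of the Bode magnitude computation; the example spectrum F(e^{jω}) = 1/(1 − 0.9e^{−jω}) is an assumption chosen only for illustration, not an example from the text:

```python
import numpy as np

# Bode-style magnitude: 10*log10(|F|^2) = 20*log10(|F|) in dB, on a
# log-spaced frequency axis (which necessarily excludes w = 0).
w = np.logspace(-3, np.log10(np.pi), 200)    # log-spaced frequencies
F = 1/(1 - 0.9*np.exp(-1j*w))                # assumed example spectrum
mag_db = 10*np.log10(np.abs(F)**2)           # Bode magnitude in dB

print(mag_db[0], mag_db[-1])   # ~20 dB near d.c., negative dB near w = pi
```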
4.1.5 Fourier transform of periodic sequences

4.1.5.1 Comb function

The comb function, as discussed previously, is another term for an impulse train. This idealized periodic signal plays an important role in our theoretical understanding of analog-to-digital signal conversion, as well as being a convenient tool for modeling periodic signals.

Let us consider the Fourier transform of a comb function δ_M(n), which is given as Entry #19 in Table 4.1:

δ_M(n) = Σ_{k=−∞}^{∞} δ(n − kM) ↔ ω₀δ_{ω₀}(ω) = ω₀Σ_{k=−∞}^{∞} δ(ω − kω₀), where ω₀ = 2π/M.  (4.49)

Negative frequencies do not exist (when defined as #events/second) except as a mathematical tool for efficient notation. This is clearly shown by a cosine wave at frequency ω₀ being equal to a sum of complex exponentials at frequencies ±ω₀: 2cos(ω₀n) = e^{jω₀n} + e^{j(−ω₀)n}.
Phase is always plotted on a linear scale so that zero and negative phase angles may be included.
We take this as a given, and derive the DFT (Section 4.1.5.3) based on it.
Clearly the time-domain function and its transform are both comb functions: one has period M samples, while the transformed function has period ω₀ = 2π/M rad/sec. We readily conclude that if one function has closely spaced impulses, the other function has widely spaced impulses, and vice-versa. Figures 4.14 and 4.15 show the two impulse trains when the period in the time domain is M = 4 samples:

Figure 4.14. Impulse train δ₄(n).

Figure 4.15. Impulse train (π/2)δ_{π/2}(ω) = F{δ₄(n)}.

There is also a corresponding change in the areas of the impulses.
4.1.5.2 Periodic signals as convolution with a comb function

Periodic power signals play a very important role in linear system analysis. Sinusoidal steady-state analysis of linear electrical circuits is an example of this. Next, we show that an arbitrary periodic signal can be considered as a convolution product of one of its periods with an impulse train.

Let us consider the convolution of rect₅(n) and impulse train δ₂₀(n). These two signals and their convolution product are shown in Figs. 4.16 through 4.18:

Figure 4.16. Rectangular pulse rect₅(n).

Figure 4.17. Impulse train δ₂₀(n).

Figure 4.18. Periodic signal f_p(n) = rect₅(n) * δ₂₀(n).

We observe that the resulting signal is a periodic rectangular pulse train having period N = 20 samples. This may be written as:

f_p(n) = rect₅(n) * δ₂₀(n) = Σ_{k=−∞}^{∞} rect₅(n − 20k).  (4.50)
Because of this formulation, we can determine the spectrum of any periodic signal in terms of the spectrum of one of its periods. In Fig. 4.18, for example, F{rect₅(n)} = 1 + 2Σ_{m=1}^{5} cos(mω) and F{δ₂₀(n)} = (π/10)δ_{π/10}(ω). Therefore, the periodic signal of rectangular pulses of width 11 samples and period 20 samples has a spectrum represented by

F(e^{jω}) = F{rect₅(n) * δ₂₀(n)} = F{rect₅(n)} × F{δ₂₀(n)}
= (1 + 2Σ_{m=1}^{5} cos(mω)) × (π/10)δ_{π/10}(ω)
= (π/10)Σ_{k=−∞}^{∞} (1 + 2Σ_{m=1}^{5} cos(mω))δ(ω − kπ/10)
= (π/10)Σ_{k=−∞}^{∞} (1 + 2Σ_{m=1}^{5} cos(mkπ/10))δ(ω − kπ/10).  (4.51)

Equation 4.51 states that F(e^{jω}) consists of impulses periodically spaced in frequency with period 2π/20 = π/10 rad/sec, and having areas (π/10)(1 + 2Σ_{m=1}^{5} cos(mkπ/10)). Note that the frequency of the k-th impulse is k(π/10) rad/sec.
Based on this example, we may generalize that the spectrum of any periodic sequence consists of spectral components present only at a discrete set of frequencies. These frequencies are all integer multiples of a fundamental frequency ω₀, and members of this set of frequencies are said to be harmonically related. Further, the areas of the impulses at these frequencies are determined by the spectrum of one period of the time-domain signal.
4.1.5.3 Discrete Fourier Transform (DFT)

In the previous example the periodic signal, rectangular pulses of width 11 that repeated every 20 samples, had frequency spectrum F(e^{jω}) = (π/10)Σ_{k=−∞}^{∞} (1 + 2Σ_{m=1}^{5} cos(mkπ/10))δ(ω − kπ/10). Recall, the expression for F(e^{jω}) = F{f_p(n)} was obtained by representing periodic signal f_p(n) as a convolution product of one of its periods rect₅(n) with impulse train δ₂₀(n). In general, any periodic signal f_p(n) may be represented this way:

f_p(n) = p(n) * δ_N(n),
where p(n) = { f_p(n), 0 ≤ n ≤ N − 1;
               0, otherwise.  (4.52)

Denoting by P(e^{jω}) the discrete-time Fourier transform of one period p(n), the spectrum F(e^{jω}) = P(e^{jω})·ω₀δ_{ω₀}(ω) consists of impulses at the harmonic frequencies kω₀, ω₀ = 2π/N, whose areas define the DFT coefficients:

F_p(k) = P(e^{jkω₀}).  (4.53)

Then, using the pair e^{jω₀n} ↔ 2πδ_{2π}(ω − ω₀), we obtain the following expression for periodic signal f_p(n):

(Inverse Discrete Fourier Transform, IDFT)
f_p(n) = (1/N)Σ_{k=0}^{N−1} F_p(k)e^{j(k2π/N)n} = IDFT{F_p(k)}.  (4.54)

"Integer multiples" here includes zero and negative integers.
Do not be confused by the notation "ω₀": this represents the period of impulse train δ_{ω₀}(ω) = Σ_{k=−∞}^{∞} δ(ω − kω₀), not the frequency of the impulse corresponding to k = 0.
In Eq. (4.54) we now have a representation of a periodic signal f_p(n) in the form known as the Inverse Discrete Fourier Transform (IDFT). The harmonically-related frequencies kω₀ = k2π/N (where k is an integer) are called harmonic frequencies, or harmonics of the fundamental frequency ω₀ = 2π/N. The constants F_p(k), which are functions of harmonic index k, are called the DFT coefficients. Just as time-domain signal f_p(n) is discrete and periodic (period = N samples), F_p(k) is also discrete and periodic (period = N samples).

Recall that F_p(k) = P(e^{jkω₀}), where P(e^{jkω₀}) is the discrete-time Fourier transform of one period of f_p(n). Thus, we may obtain an expression for F_p(k) in terms of f_p(n):

F_p(k) = P(e^{jkω₀}) = Σ_{n=−∞}^{∞} p(n)e^{−jkω₀n}
= Σ_{n=0}^{N−1} p(n)e^{−jkω₀n} = Σ_{n=0}^{N−1} f_p(n)e^{−jkω₀n}.  (4.55)

Therefore:

(Discrete Fourier Transform, DFT)
F_p(k) = Σ_{n=0}^{N−1} f_p(n)e^{−j(k2π/N)n} = DFT{f_p(n)}.  (4.56)
Equation (4.56), the solution for the F_p(k) values, is called the Discrete Fourier Transform (DFT), and resembles the expression for a discrete-time Fourier transform summation. And Eq. (4.54), the expression for f_p(n) as an inverse DFT, is in fact an inverse discrete-time Fourier transform of a spectrum containing only impulses; hence its integration over a continuum of frequencies simplifies to a summation over discrete frequency values. To summarize, Eqs. (4.56) and (4.54) are the discrete-time Fourier transform and its inverse as applied to a special category of time-domain signals: those discrete-time sequences f(n) that are periodic.

The DFT and its inverse, as described by Eqs. (4.56) and (4.54), may also be derived independently of the discrete-time Fourier transform. This approach approximates periodic signal f_p(n) with a weighted sum of harmonically-related basis signals e^{j(k2π/N)n}. These basis signals are mutually orthogonal: one period of (e^{j(k₁2π/N)n})(e^{j(k₂2π/N)n})* has zero area when k₁ ≠ k₂, and nonzero area when k₁ = k₂. Mutually-orthogonal basis signals make it possible to easily solve for coefficients F_p(k) due to the disappearance of all but one cross-product term when summing.

"Harmonics" usually refers to positive integer multiples of ω₀ (see footnote, p. 123).
The DFT is a mapping of N (usually real, but possibly complex)
numbers in the time domain to N (usually complex) numbers in the
frequency domain. Both the DFT and its inverse are algorithms that
involve summing a finite number of terms, and if all terms are finite-
valued then the summations are guaranteed to converge.
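Equations (4.56) and (4.54) translate directly into code; the sketch below implements both summations literally and checks them against NumPy's FFT (the one-period test sequence is arbitrary):

```python
import numpy as np

# Direct implementations of Eq. (4.56) (DFT) and Eq. (4.54) (IDFT).
def dft(f):
    N = len(f)
    n = np.arange(N)
    return np.array([np.sum(f * np.exp(-2j*np.pi*k*n/N)) for k in range(N)])

def idft(F):
    N = len(F)
    k = np.arange(N)
    return np.array([np.sum(F * np.exp(2j*np.pi*k*n/N)) for n in range(N)]) / N

fp = np.array([1., 1., 1., 0., 0., 0., 0., 0.])   # one period, N = 8
Fp = dft(fp)

print(np.max(np.abs(Fp - np.fft.fft(fp))))        # matches the FFT
print(np.max(np.abs(idft(Fp) - fp)))              # IDFT recovers f_p(n)
```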
The DFT coefficients are plotted like the spectrum sketches described previously, except that now the spectrum will be discrete. Figure 4.19 plots the DFT coefficient spectrum of the periodic rectangular pulse train from Fig. 4.18.

Figure 4.19. Discrete Fourier Transform spectrum for periodic signal f_p(n) = rect₅(n) * δ₂₀(n) = (1/20)Σ_{k=0}^{19} F_p(k)e^{j(2πk/20)n}.
4.1.5.4 Time-frequency duality of the DFT
Similarity between summations defining the DFT and the IDFT leads to
the following: when f,(n) <= F,(k), it is also true that
Fy(n) © Nfp(k). (4.57)"
“IDFT{N fp ()} =
IZA(N fg (ef aml = (SNA F (Kee“Mn2m/NDK)" = (mn),130
Practical Signal Processing and its Applications
This is the same as saying that the IDFT operation may be done using the
DFT:

    f_p(n) = IDFT{F_p(k)} = (1/N) DFT{F_p(k)}|_{k→−n}.    (4.58)

Example 4.4
Given: periodic pulse train x_p(n) = rect₂(n − 2) ∗ δ₂₀(n). Write some
MATLAB® code to calculate and tabulate its DFT X_p(k) over 0 ≤ k ≤ N − 1.
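A possible solution sketch in NumPy (the book asks for MATLAB code; also, the pulse width and the period N = 20 below are assumptions patterned on Fig. 4.19, since the worked solution did not survive extraction):

```python
import numpy as np

# Assumed signal: one period (N = 20) containing a rectangular pulse
# centered at n = 2; the half-width of 2 samples is a guess.
N = 20
n = np.arange(N)
x = np.where(np.abs(n - 2) <= 2, 1.0, 0.0)

X = np.fft.fft(x)                      # X_p(k) for k = 0 .. N-1

# tabulate k alongside the (complex) DFT coefficients
table = list(zip(n.tolist(), np.round(X, 4).tolist()))
X0 = X[0]                              # k = 0 coefficient = sum over one period
```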
The four discrete-time transforms encountered so far are summarized below:

Discrete-Time Fourier Transform (DTFT):
    F(e^{jω}) = Σ_{n=−∞}^{∞} f(n) e^{−jωn}

Inverse Discrete-Time Fourier Transform (IDTFT):
    f(n) = (1/2π) ∫_{2π} F(e^{jω}) e^{jωn} dω

Discrete Fourier Transform (DFT, FFT) — discrete and periodic in both domains:
    F_p(k) = Σ_{n=0}^{N−1} f_p(n) e^{−j(k2π/N)n}

Inverse Discrete Fourier Transform (IDFT, IFFT):
    f_p(n) = (1/N) Σ_{k=0}^{N−1} F_p(k) e^{j(k2π/N)n}
4.2 Practical Applications
4.2.1 Spectral analysis using the FFT
Under certain conditions, the FFT can be used to compute the Fourier
transform of a discrete-time sequence x(n). Recall from Section 4.1.5.5:
the FFT yields the same results as the DFT, only using fewer calculations.
The DFT is a special case of the DTFT when x(n) = xp(n) is periodic.
However, the DFT (and hence FFT) can also efficiently estimate the
spectrum of finite-length x(n). We later consider the two cases separately
(periodic xp (n), finite-length x(n)).
4.2.1.1 Frequency resolution
When we plot X(e^{jω}), which is periodic with period 2π rad/sec, we
assume sequence x(n) has time spacing T_s = 1 sec between its samples.ʸ
In general, for an arbitrary T_s,

    DTFT{x(n)} = X(e^{jωT_s}),    (4.61)

whose period = 2π/T_s = ω_s rad/sec.
The DFT (FFT) transforms N samples from the time domain to the
frequency domain. The time samples are from one period of periodic
signal x_p(n) = x(n) ∗ δ_N(n), at times nT_s sec {n = 0, 1, 2, ..., N − 1}.
The frequency samples are taken from one period of periodic spectrum
X(e^{jωT_s}), taken at frequencies kΔω rad/sec {k = 0, 1, 2, ..., N − 1}. Thus,
the value of Δω (called the frequency resolution of spectrum X(e^{jωT_s})) is
equal to:

    Δω = ω_s/N = 2π/(N T_s) rad/sec.    (4.62)
Stated in units of cycles per second (Hz):

    Δf = Δω/2π = 1/(N T_s) Hz.    (4.63)

ʸ See Chapter 6.
Finer frequency resolution (lower Af) does not necessarily mean there
is more information presented in the graph of the spectrum. For example,
it is common to concatenate some zero-valued samples to finite-length
x(n) before calculating the FFT for the sole purpose of reducing Af of the
plot for a better appearance. But since those added zeros contain no new
information, the information being presented to the viewer remains the
same as before.
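The zero-padding statement can be verified directly: padding interpolates the plotted spectrum without adding information. A NumPy sketch (the signal, length, and padding factor are arbitrary):

```python
import numpy as np

Ts = 1.0
N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 0.1 * n)      # a short cosine record

X = np.fft.fft(x)                    # frequency spacing 1/(N*Ts) Hz
X_padded = np.fft.fft(x, 8 * N)      # zero-padded: spacing 1/(8*N*Ts) Hz

df_plain = 1.0 / (N * Ts)
df_padded = 1.0 / (8 * N * Ts)

# the padded spectrum samples the same underlying DTFT more densely;
# every 8th padded sample equals an unpadded sample exactly
max_diff = np.max(np.abs(X_padded[::8] - X))
```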
However, when the data being operated on by the FFT represent a
longer time duration, the resulting increase in frequency resolution does
indeed provide more information. This is important when trying to detect
the presence of sinusoids in a signal that are close to one another in
frequency. Since the minimum-discernible frequency difference in a
sampled spectral representation is Δf = (N T_s)⁻¹ Hz, two discernible
peaks can appear in a spectral plot only if they are at least 2Δf Hz apart
(peak, dip, peak). To achieve that we may need to decrease Δf by
increasing N T_s, which is the time duration of the sampled input signal
being analyzed. Longer analysis time gives higher frequency resolution.
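The effect is easy to demonstrate: two sinusoids 0.01 cycles/sample apart merge into one spectral peak for a short record and separate for a long one. A NumPy sketch (frequencies, record lengths, and the peak criterion are illustrative choices):

```python
import numpy as np

def spectrum_peaks(duration_samples):
    # two sinusoids 0.01 cycles/sample apart; count discernible peaks
    n = np.arange(duration_samples)
    x = np.cos(2 * np.pi * 0.10 * n) + np.cos(2 * np.pi * 0.11 * n)
    mag = np.abs(np.fft.fft(x))[: duration_samples // 2]
    # a "peak" here is a bin larger than both neighbors and above half the maximum
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]
             and mag[k] > 0.5 * mag.max()]
    return len(peaks)

short_record = spectrum_peaks(50)    # Delta_f = 1/50 = 0.02 > 0.01: unresolved
long_record = spectrum_peaks(400)    # Delta_f = 1/400 << 0.01: two peaks
```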
4.2.1.2 Periodic sequence
Per Eq. (4.53), the Fourier transform of periodic sequence x_p(n) is

    F{x_p(n)} = Σ_{k=0}^{N−1} F_p(k) (2π/N) δ_{2π}(ω − kΔω)
              = Σ_{k=−∞}^{∞} (2π/N) F_p(mod(k, N)) δ(ω − kΔω)
              = Σ_{k=−∞}^{∞} Δω F_p(mod(k, N)) δ(ω − kΔω),    (4.64)

where Δω = 2π/N. As we see, the spectrum of x_p(n) contains impulses
located at uniformly-spaced frequencies ω = kΔω (k integer) and having
areas Δω F_p(mod(k, N)). The following MATLAB® code demonstrates
using the FFT algorithm to calculate and plot the spectrum of a periodic”
sinusoidal sequence:
% generate 4 periods of cosine sequence (10 samples/period)
T0 = 10;
N = 4*T0;
n = 0:(N-1);
x = cos((2*pi/T0)*n);
figure
stem(n,x,'filled')
% calculate spectrum using FFT algorithm
dw = 2*pi/N;
X_impulse_areas = dw*fft(x);
k = 0:(N-1);
figure
stem(k,X_impulse_areas,'filled')
Figure 4.20. A plot of periodic discrete-time sequence x(n) = cos(2πn/10).
The calculated spectrum in Fig. 4.21 is one period of the periodic
spectrum given as Entry #12 in Table 4.1,

    cos(ω₀n) ↔ π δ_{2π}(ω + ω₀) + π δ_{2π}(ω − ω₀),    (4.65)

where the cosine frequency ω₀ = 2π/T₀ = 2π/10 rad/sec, and the impulse
at k = 4 represents the frequency ω = kΔω = k(2π/N) = 4(2π/40) =
2π/10 rad/sec, which is the same as ω₀. The impulse areas are correctly
calculated to be π.

Recall from Section 2.1.2.3 that not every sinusoidal sequence is periodic.
Figure 4.21. A plot of the spectrum of x(n) = cos(2πn/10), calculated using the Fast
Fourier Transform (FFT).
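The same computation can be reproduced in NumPy (a direct translation of the book's MATLAB listing) to confirm that the impulse areas come out to π:

```python
import numpy as np

# 4 periods of cosine sequence (10 samples/period), as in the MATLAB example
T0 = 10
N = 4 * T0
n = np.arange(N)
x = np.cos((2 * np.pi / T0) * n)

dw = 2 * np.pi / N
X_impulse_areas = dw * np.fft.fft(x)

area_at_k4 = X_impulse_areas[4]     # impulse at omega = 4*(2*pi/40) = 2*pi/10
area_at_k36 = X_impulse_areas[36]   # mirror impulse at omega = -2*pi/10 (mod 2*pi)
```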
4.2.1.3 Finite-length sequence

The frequency spectrum of sequence x(n) is found using the discrete-time
Fourier transform, given by Eq. (4.1):

    X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}.    (4.66)

Practically all sequences that we analyze are of finite length (equal to
zero outside of some range of sample index values n). Without loss of
generality, assume that x(n) is N samples in length and spans index range
0 ≤ n ≤ N − 1, with N ≥ length{x(n)}.
Similarly, define y_p(n) = y(n) ∗ δ_N(n) and z_p(n) = z(n) ∗ δ_N(n). Then
it may be shown that

    IFFT{FFT{x_p(n)} · FFT{y_p(n)}} = z_p(n),    (4.70)

where the FFT and IFFT operate on N samples. Essentially this method
calculates a discrete-time circular convolution between periodic x_p(n) and
periodic y_p(n) to yield periodic result z_p(n). And, per our definition of
z_p(n), z(n) is then recovered from one of its periods:

    z(n) = z_p(n), for 0 ≤ n ≤ N − 1.
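This FFT-based convolution is straightforward to check in NumPy (a stand-in for the book's MATLAB; the two sequences are arbitrary). Choosing N at least length{x} + length{y} − 1 makes the circular result match the linear convolution z(n):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0, 7.0])

# N >= length(x) + length(y) - 1, so no circular wrap-around corrupts z(n)
N = len(x) + len(y) - 1
z_fft = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(y, N)).real

z_direct = np.convolve(x, y)          # direct linear convolution, for comparison
max_err = np.max(np.abs(z_fft - z_direct))
```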
ᵇ The Fourier transform of all generalized functions has been shown to exist (Lighthill).

Frequency Analysis of Continuous-Time Signals 159
analysis is even applied to multidimensional functions (e.g., a function of
{x, y} is mapped to a function of {ω_x, ω_y}).
5.1.2 Fourier transforms of basic signals
Let us now derive, step-by-step, the Fourier transforms of a few of the
basic functions introduced previously in Ch. 3. For many other functions
it is straightforward to find the Fourier transform by evaluating Eq. (5.1),
which may be simplified with the help of a table of integrals.© In other
instances, application of Fourier transform properties is an excellent tool
available to us. Examples worked out in the following section will provide
experience in such techniques.
5.1.2.1 Exponentially decaying signal

Let x(t) = u(t)e^{−bt} (assuming real b > 0). Its Fourier transform X(ω)
is then found according to the definition in Eq. (5.1):ᵈ

    X(ω) = F{u(t)e^{−bt}} = ∫₀^∞ e^{−bt} e^{−jωt} dt
         = −e^{−(b+jω)t}/(b + jω) |₀^∞ = 1/(b + jω),    (5.6)

    u(t)e^{−bt} ↔ 1/(b + jω)    (b > 0).    (5.7)
This causal signal, x(t) = u(t)e^{−bt}, when flipped about t = 0
becomes the anticausal x(−t) = u(−t)e^{bt}, whose Fourier transform is
similarly shown to be:ᵉ

    u(−t)e^{bt} ↔ 1/(b − jω)    (b > 0)    (5.8)

ᶜ Abramowitz et al.; Dwight.
ᵈ In this derivation, note that lim_{t→∞} e^{−(b+jω)t} = lim_{t→∞} e^{−bt} e^{−jωt} = (lim_{t→∞} e^{−bt}) · (lim_{t→∞} e^{−jωt}) = 0 · lim_{t→∞} e^{−jωt} = 0, since b > 0 and |e^{−jωt}| = 1 < ∞.
ᵉ F{u(−t)e^{bt}} = ∫_{−∞}^0 e^{bt} e^{−jωt} dt = ∫_{−∞}^0 e^{(b−jω)t} dt = 1/(b − jω) (when b > 0).
In preparation for the next derivation, we will calculate the Fourier
transform of the following signal that is symmetric about t = 0:

    F{u(t)e^{−bt} + u(−t)e^{bt}} = F{u(t)e^{−bt}} + F{u(−t)e^{bt}}
        = 1/(b + jω) + 1/(b − jω) = 2b/(b² + ω²)    (b > 0).    (5.9)

Note that u(t)e^{−bt} + u(−t)e^{bt} = e^{−b|t|}. We now have another useful
Fourier transform pair:

    e^{−b|t|} ↔ 2b/(b² + ω²)    (b > 0)    (5.10)
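Pairs like Eq. (5.10) can be spot-checked by numerical integration of the defining integral, Eq. (5.1). A NumPy sketch (the values of b and ω are arbitrary test points):

```python
import numpy as np

b = 0.7
w = 1.3

# integrate e^{-b|t|} e^{-jwt} over a window wide enough that the tails
# (of size e^{-60b}) are negligible
t = np.linspace(-60.0, 60.0, 600001)
dt = t[1] - t[0]
f = np.exp(-b * np.abs(t))
F_numeric = np.sum(f * np.exp(-1j * w * t)) * dt

F_closed = 2 * b / (b**2 + w**2)       # the claimed transform, Eq. (5.10)
err = abs(F_numeric - F_closed)
```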
5.1.2.2 Constant value
As mentioned previously, a constant waveform such as x(t) = 1 presents
us with the problem that its Fourier transform integral does not converge
at ω = 0. This integral may be solved with the help of generalized
functions, however. First, consider the frequency-domain function
2πδ(ω). What is its inverse Fourier transform? The sifting property
makes Eq. (5.2) easy to solve:

    F⁻¹{2πδ(ω)} = (1/2π) ∫_{−∞}^{∞} 2πδ(ω) e^{jωt} dω
                = ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = ∫_{−∞}^{∞} δ(ω) dω = 1.    (5.11)
Therefore, if the Fourier transform is invertible then F{1} = 2πδ(ω).
Let us verify this with the help of generalized function x(t) =
lim_{b→0} e^{−b|t|} = 1:

    X(ω) = F{lim_{b→0} e^{−b|t|}} = lim_{b→0} (2b/(b² + ω²)) = { 0, ω ≠ 0; ∞, ω = 0.    (5.12)

What kind of function X(ω) is zero everywhere except at a single point
ω = 0? One candidate is the Dirac delta function δ(ω) times a constant

ᶠ ℤ is the symbol that represents the set of integers (Zahlen is a German word for numbers).
(having unspecified area). To find the area of this impulse, note that the
area of 2b/(b² + ω²) is independent of b:

    ∫_{−∞}^{∞} 2b/(b² + ω²) dω = 2 tan⁻¹(ω/b) |_{−∞}^{∞} = 2π.    (5.13)

We confirm, therefore, that x(t) = 1 and X(ω) = 2πδ(ω) are a
Fourier transform pair:

    1 ↔ 2πδ(ω)    (5.14)
5.1.2.3 Impulse function

By substituting δ(t) for f(t) in Eq. (5.1), the Fourier transform definition,
we obtain:

    F(ω) = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = ∫_{−∞}^{∞} δ(t) e^{−jω(0)} dt
         = ∫_{−∞}^{∞} δ(t)(1) dt = 1.    (5.15)
Using similar arguments as before, we may also show that the inverse
Fourier transform of 1 is δ(t). We now have another Fourier transform
pair:

    δ(t) ↔ 1    (5.16)
5.1.2.4 Delayed impulse function
Similarly, and just as easily, we may find the Fourier transform of
δ(t − t₀):

    F{δ(t − t₀)} = ∫_{−∞}^{∞} δ(t − t₀) e^{−jωt} dt = ∫_{−∞}^{∞} δ(t − t₀) e^{−jωt₀} dt
                 = e^{−jωt₀}.    (5.17)

This gives us the Fourier transform pair:

    δ(t − t₀) ↔ e^{−jωt₀}    (5.18)
5.1.2.5 Signum function
The signum function is defined in Chapter 3 as
    sgn(t) = { −1, t < 0;  0, t = 0;  1, t > 0. }    (5.19)
Because the Fourier integral of sgn(t) does not converge in the
ordinary sense, we must consider using functions that approach sgn(t) in
the limit. One such generalized function, among many that may be
considered, is:
    sgn(t) = lim_{b→0} {u(t)e^{−bt} − u(−t)e^{bt}}    (5.20)
Solving for F{sgn(t)} in this formulation:

    F{sgn(t)} = lim_{b→0} ( ∫_{−∞}^{∞} u(t)e^{−bt}e^{−jωt} dt − ∫_{−∞}^{∞} u(−t)e^{bt}e^{−jωt} dt )
              = lim_{b→0} ( ∫₀^∞ e^{−(b+jω)t} dt − ∫_{−∞}^0 e^{(b−jω)t} dt )
              = lim_{b→0} ( −e^{−(b+jω)t}/(b+jω) |₀^∞ − e^{(b−jω)t}/(b−jω) |_{−∞}^0 )
              = lim_{b→0} ( 1/(b+jω) − 1/(b−jω) ) = 2/(jω).    (5.21)
One can also showᵍ that F⁻¹{2/jω} = sgn(t). Therefore, we obtain
the Fourier transform pair:

ᵍ F⁻¹{2/jω} = (1/2π) ∫_{−∞}^{∞} (2/jω) e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} (2/jω)(cos(ωt) + j sin(ωt)) dω = (1/π) ∫_{−∞}^{∞} (sin(ωt)/ω) dω = { 1, t > 0; 0, t = 0; −1, t < 0 } = sgn(t).
    sgn(t) ↔ 2/(jω)    (5.22)
5.1.2.6 Unit step function
As with sgn(t), the Fourier transform of u(t) can be found by expressing
it in the form of a generalized function. Alternatively, now that F{sgn(t)}
and F{1} are known, we may express u(t) in terms of these functions and
quickly find its Fourier transform using the linear superposition property*:
    u(t) = ½(sgn(t) + 1),    (5.23)

    F{u(t)} = F{½(sgn(t) + 1)} = ½ F{sgn(t)} + ½ F{1}
            = ½ (2/jω) + ½ (2πδ(ω)) = 1/jω + πδ(ω).    (5.24)
Therefore, because this Fourier transform may be shown to be
invertible, we have the Fourier transform pair’:
    u(t) ↔ πδ(ω) + 1/jω    (5.25)
5.1.2.7 Complex exponential function
To find the Fourier transform of complex exponential signal e^{jω₀t}, which
is essentially a phasor at frequency ω₀, we begin by solving for the inverse
Fourier transform of frequency-shifted impulse function δ(ω − ω₀) using
Eq. (5.2):

    F⁻¹{δ(ω − ω₀)} = (1/2π) ∫_{−∞}^{∞} δ(ω − ω₀) e^{jωt} dω
                   = (1/2π) ∫_{−∞}^{∞} δ(ω − ω₀) e^{jω₀t} dω
                   = (e^{jω₀t}/2π) ∫_{−∞}^{∞} δ(ω − ω₀) dω = e^{jω₀t}/2π.    (5.26)
h Linear superposition and other properties of the Fourier transform are discussed in
Section 5.1.3.1 on page 168.
ʰ F⁻¹{1/jω + πδ(ω)} = F⁻¹{1/jω} + F⁻¹{πδ(ω)} = ½ sgn(t) + ½ = u(t).
After multiplying both sides by 2π, we get the desired result:

    2π F⁻¹{δ(ω − ω₀)} = F⁻¹{2π δ(ω − ω₀)} = e^{jω₀t}, or

    e^{jω₀t} ↔ 2π δ(ω − ω₀).    (5.27)
5.1.2.8 Sinusoid
The Fourier transform of A cos(ω₀t + θ) is derived by expressing it in
terms of complex exponential functions using Euler’s identity:

    A cos(ω₀t + θ) = A(½ e^{j(ω₀t+θ)} + ½ e^{−j(ω₀t+θ)}).    (5.28)

Then, taking advantage of e^{jω₀t} ↔ 2π δ(ω − ω₀):

    F{A cos(ω₀t + θ)} = (A/2) F{e^{j(ω₀t+θ)} + e^{−j(ω₀t+θ)}}
    = (A/2) F{e^{jω₀t}e^{jθ} + e^{−jω₀t}e^{−jθ}} = (A/2)e^{jθ} F{e^{jω₀t}} + (A/2)e^{−jθ} F{e^{−jω₀t}}
    = A e^{jθ} π δ(ω − ω₀) + A e^{−jθ} π δ(ω + ω₀),    (5.29)

or

    cos(ω₀t + θ) ↔ e^{−jθ} π δ(ω + ω₀) + e^{jθ} π δ(ω − ω₀).    (5.30)

Setting θ = {0, −π/2} gives us these two Fourier transform pairs:

    cos(ω₀t) ↔ π δ(ω + ω₀) + π δ(ω − ω₀)    (5.31)

    sin(ω₀t) ↔ jπ δ(ω + ω₀) − jπ δ(ω − ω₀)    (5.32)
Note that, as demonstrated above, the Fourier transform of a real even
signal is real, and the Fourier transform of a real odd signal is imaginary.ʲ

ʲ With signals: even × even = even, even × odd = odd, odd × odd = even. Also,
∫_{−∞}^{∞} f_e(t) dt = 2 ∫₀^∞ f_e(t) dt, and ∫_{−∞}^{∞} f_o(t) dt = 0. Therefore F{x(t)} =
∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_{−∞}^{∞} (x_e(t) + x_o(t))(cos(ωt) − j sin(ωt)) dt =
∫_{−∞}^{∞} x_e(t) cos(ωt) dt + ∫_{−∞}^{∞} x_o(t) cos(ωt) dt − j ∫_{−∞}^{∞} x_e(t) sin(ωt) dt −
j ∫_{−∞}^{∞} x_o(t) sin(ωt) dt = 2 ∫₀^∞ x_e(t) cos(ωt) dt + 0 − j0 − 2j ∫₀^∞ x_o(t) sin(ωt) dt.
∴ F{x(t)} = 2 ∫₀^∞ (x_e(t) cos(ωt) − j x_o(t) sin(ωt)) dt,
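This even/odd symmetry is simple to confirm numerically; the NumPy sketch below integrates Eq. (5.1) on a symmetric grid for an arbitrary even signal (a Gaussian) and an arbitrary odd one:

```python
import numpy as np

w = 2.0
t = np.linspace(-30.0, 30.0, 300001)   # symmetric grid including t = 0
dt = t[1] - t[0]

even = np.exp(-t**2)                   # real, even signal
odd = t * np.exp(-t**2)                # real, odd signal

F_even = np.sum(even * np.exp(-1j * w * t)) * dt
F_odd = np.sum(odd * np.exp(-1j * w * t)) * dt

even_imag = abs(F_even.imag)   # ~0: transform of a real even signal is real
odd_real = abs(F_odd.real)     # ~0: transform of a real odd signal is imaginary
```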
5.1.2.9 Rectangular pulse function
The Fourier transform of a rectangular pulse function is derived by direct
integration, applying the definition given by Eq. (5.1):
    F(ω) = ∫_{−∞}^{∞} rect(t/τ) e^{−jωt} dt = ∫_{−τ/2}^{τ/2} (1) e^{−jωt} dt
         = (e^{−jωτ/2} − e^{jωτ/2})/(−jω) = (e^{jωτ/2} − e^{−jωτ/2})/(jω)
         = 2j sin(ωτ/2)/(jω) = τ sinc(ωτ/2).    (5.33)

We now have the Fourier transform pair:

    rect(t/τ) ↔ τ sinc(ωτ/2)    (5.34)
These and some other continuous-time Fourier transform pairs are
summarized in Table 5.1.
Table 5.1. Table of continuous-time Fourier transform pairs.

Entry #  f(t)                                        F(ω)                                     Refer to Eq. #
1.       f(t) = (1/2π) ∫_{−∞}^{∞} F(ω)e^{jωt} dω    F(ω) = ∫_{−∞}^{∞} f(t)e^{−jωt} dt        (5.1), (5.2)
2.       e^{−bt} u(t)                                1/(b + jω);  b > 0                       (5.7)
3.       e^{bt} u(−t)                                1/(b − jω);  b > 0                       (5.8)

(Continued)
which clearly shows why real and even x(t) has real F{x(t)}, and real and odd x(t) has
imaginary F{x(t)}.Practical Signal Processing and its Applications
Table 5.1. (Continued)

Entry #  f(t)                                  F(ω)                                           Refer to Eq. #
4.       e^{−b|t|}                             2b/(b² + ω²);  b > 0                           (5.10)
5.       1                                     2πδ(ω)                                         (5.14)
6.       δ(t)                                  1                                              (5.16)
7.       δ(t − t₀)                             e^{−jωt₀}                                      (5.18)
8.       sgn(t)                                2/(jω)                                         (5.22)
9.       u(t)                                  πδ(ω) + 1/(jω)                                 (5.25)
10.      e^{jω₀t}                              2πδ(ω − ω₀)                                    (5.27)
11.      cos(ω₀t + θ)                          e^{−jθ}πδ(ω + ω₀) + e^{jθ}πδ(ω − ω₀)           (5.30)
12.      cos(ω₀t)                              πδ(ω + ω₀) + πδ(ω − ω₀)                        (5.31)
13.      sin(ω₀t)                              jπδ(ω + ω₀) − jπδ(ω − ω₀)                      (5.32)
14.      rect(t/T), pulse width = T sec        T sinc(ωT/2)                                   (5.34)

(Continued)
Table 5.1. (Continued)

Entry #  f(t)                                  F(ω)                                           Refer to
15.      sinc(Wt), bandwidth = W rad/sec       (π/W) rect(ω/(2W))                             (5.40)
16.      Δ(t/T), pulse width = T sec           (T/2) sinc²(ωT/4)                              Lathi, p. 702
17.      (W/4π) sinc²(Wt/4)                    Δ(ω/W), bandwidth = W rad/sec                  Lathi, p. 702
18.      e^{−t²/(2σ²)}                         σ√(2π) e^{−σ²ω²/2}                             Lathi, p. 702
19.      δ_{T₀}(t), impulse train in time,     ω₀ δ_{ω₀}(ω), impulse train in frequency,      (5.50)
         period = T₀ sec                       period ω₀ = 2π/T₀ rad/sec
5.1.3 Fourier transform properties
In most applications, we will need to derive Fourier transforms of many
different functions. Frequently it is convenient to use some of the
mathematical properties of the Fourier integral given in Eq. (5.1).
Therefore, in this section we will discuss some of the important properties
and then show in a table all other properties that may be useful. Once
again you will find that it is straightforward to verify these entries in
Table 5.2.168 Practical Signal Processing and its Applications
5.1.3.1 Linearity
Let us consider two different functions of time, f₁(t) and f₂(t), whose
Fourier transforms are known to be F₁(ω) and F₂(ω), respectively:
f₁(t) ↔ F₁(ω) and f₂(t) ↔ F₂(ω). Then from the CTFT definition in
Eq. (5.1), for two arbitrary constants a and b, the following relationshipᵏ
holds true:

    a f₁(t) + b f₂(t) ↔ a F₁(ω) + b F₂(ω)    (5.35)
With this property, we may find Fourier transforms of signals that are
linear combinations of signals having known transforms. The following
example illustrates this property and shows the plot of the function and its
transform.
Example 5.1
Find the Fourier transform of x(t) that is shown in Fig. 5.1.
To solve this problem, express x(t) as the sum of two rectangular
pulses:
x(t) = 0.5 rect(t/4) + 1.5 rect(t/2). (5.36)
Figure 5.1. Signal x(t) to be transformed to the frequency domain in Example 5.1.
We see that x(t) = a f₁(t) + b f₂(t): a = 0.5, f₁(t) = rect(t/4),
b = 1.5 and f₂(t) = rect(t/2). Therefore, applying the linearity
property, we can say that F{x(t)} = a F{f₁(t)} + b F{f₂(t)}:
ᵏ This is called the “Linear Superposition” property. It combines the scaling property
(F{a f(t)} = a F{f(t)}) and the additive property (F{f₁(t) + f₂(t)} = F{f₁(t)} +
F{f₂(t)}) into a single expression.
    F{x(t)} = 0.5 F{rect(t/4)} + 1.5 F{rect(t/2)}
            = 0.5 (4 sinc(2ω)) + 1.5 (2 sinc(ω))
            = 2 sinc(2ω) + 3 sinc(ω).    (5.37)
Figure 5.2. From Example 5.1: F{x(t)} = X(ω) = 2 sinc(2ω) + 3 sinc(ω).
5.1.3.2 Time shifting
In practice, quite often we encounter a signal whose Fourier transform is
known but for the fact that it is delayed (or advanced) in time. Here we
ask a question — is there a simple relationship between the two Fourier
transforms?
Consider our experience with sinusoidal signal analysis in a basic
circuit theory course. We know that a time shifted sinusoid at a frequency
ω₀ rad/sec is a sinusoid at the same frequency and amplitude, but now has
a phase offset from its original value. Since the Fourier transform of an
arbitrary time function is a linear combination of various frequencies,ˡ
ˡ The inverse Fourier transform integral may be written as: f(t) = (1/2π) ∫_{−∞}^{∞} F(ω)(cos(ωt) + j sin(ωt)) dω = (1/2π) ∫_{−∞}^{∞} F(ω) cos(ωt) dω + (j/2π) ∫_{−∞}^{∞} F(ω) sin(ωt) dω =
(1/2π) ∫_{−∞}^{∞} (F_e(ω) + F_o(ω)) cos(ωt) dω + (j/2π) ∫_{−∞}^{∞} (F_e(ω) + F_o(ω)) sin(ωt) dω =
(1/2π) ∫_{−∞}^{∞} F_e(ω) cos(ωt) dω + (j/2π) ∫_{−∞}^{∞} F_o(ω) sin(ωt) dω = (1/π) ∫₀^∞ F_e(ω) cos(ωt) dω + (j/π) ∫₀^∞ F_o(ω) sin(ωt) dω. In other words, f(t) is a weighted sum of sines
and cosines at all possible frequencies (where the weights for sines and cosines are
(j/π)F_o(ω) and (1/π)F_e(ω), respectively).
each of these frequency components will now have a corresponding phase
offset associated with it:

    f(t ± t₀) ↔ F(ω) e^{±jωt₀}    (5.38)ᵐ
5.1.3.3. Time/frequency duality
You have doubtless noticed the similarity of integral expressions for the
Fourier transform, F{f(t)} = ∫_{−∞}^{∞} f(t)e^{−jωt} dt, and the inverse Fourier
transform, F⁻¹{F(ω)} = (1/2π) ∫_{−∞}^{∞} F(ω)e^{jωt} dω.ⁿ This similarity leads
to an interesting duality:

    If f(t) ↔ F(ω) then F(t) ↔ 2π f(−ω)    (5.39)
Example 5.2
Find the Fourier transform of sinc(Wt) by applying the time/frequency
duality property.
Equation (5.34) states that rect(t/τ) ↔ τ sinc(ωτ/2). Therefore, by
duality, τ sinc(tτ/2) ↔ 2π rect(−ω/τ) = 2π rect(ω/τ). After
substituting τ = 2W we obtain 2W sinc(tW) ↔ 2π rect(ω/2W). Thus,
we have another useful Fourier transform pair:

    sinc(Wt) ↔ (π/W) rect(ω/(2W))    (5.40)
ᵐ Proof: F{f(t ± t₀)} = ∫_{−∞}^{∞} f(t ± t₀) e^{−jωt} dt = ∫_{−∞}^{∞} f(β) e^{−jω(β∓t₀)} dβ =
∫_{−∞}^{∞} f(β) e^{−jωβ} e^{±jωt₀} dβ = e^{±jωt₀} ∫_{−∞}^{∞} f(β) e^{−jωβ} dβ = e^{±jωt₀} F{f(t)} =
e^{±jωt₀} F(ω).
ⁿ The two look even more similar when stated in terms of non-normalized frequency
variable f = ω/2π: F{f(t)} = ∫_{−∞}^{∞} f(t) e^{−j2πft} dt, F⁻¹{F(f)} = ∫_{−∞}^{∞} F(f) e^{j2πft} df.
5.1.3.4 Convolution
Many signal processing applications require either multiplying or
convolving together two signals. As we shall now see, multiplication and
convolution are closely related through the Fourier transform. What may
be a difficult-to-simplify convolution of two signals in one domain is
simply expressed as a multiplicative product in the other domain. Here
we show the correspondence between multiplication and convolution in
time and frequency domains.
If f₁(t) ↔ F₁(ω) and f₂(t) ↔ F₂(ω), then:

    f₁(t) ∗ f₂(t) ↔ F₁(ω) F₂(ω)    (5.41)ᵒ

    f₁(t) f₂(t) ↔ (1/2π) F₁(ω) ∗ F₂(ω)    (5.42)ᵖ

Example 5.3

Find the Fourier transform of x(t) = rect(t/2) ∗ sinc(πt):
Since rect(t/2) ↔ 2 sinc(ω) and sinc(πt) ↔ rect(ω/2π), then
F{rect(t/2) ∗ sinc(πt)} = F{rect(t/2)} F{sinc(πt)} =
2 sinc(ω) × rect(ω/2π):
ᵒ Proof: F{f₁(t) ∗ f₂(t)} = ∫_{−∞}^{∞} (∫_{−∞}^{∞} f₁(τ) f₂(t − τ) dτ) e^{−jωt} dt =
∫_{−∞}^{∞} f₁(τ) ∫_{−∞}^{∞} f₂(t − τ) e^{−jωt} dt dτ = ∫_{−∞}^{∞} f₁(τ) F{f₂(t − τ)} dτ =
∫_{−∞}^{∞} f₁(τ) (e^{−jωτ} F₂(ω)) dτ = ∫_{−∞}^{∞} f₁(τ) e^{−jωτ} dτ × F₂(ω) = F₁(ω) × F₂(ω).
ᵖ Proof: F{f₁(t) f₂(t)} = ∫_{−∞}^{∞} f₁(t) f₂(t) e^{−jωt} dt = ∫_{−∞}^{∞} ((1/2π) ∫_{−∞}^{∞} F₁(β) e^{jβt} dβ) f₂(t) e^{−jωt} dt =
(1/2π) ∫_{−∞}^{∞} F₁(β) ∫_{−∞}^{∞} f₂(t) e^{−j(ω−β)t} dt dβ = (1/2π) ∫_{−∞}^{∞} F₁(β) F₂(ω − β) dβ =
(1/2π) F₁(ω) ∗ F₂(ω).
Figure 5.3. The spectrum X(ω) = 2 sinc(ω) × rect(ω/2π) in Example 5.3.
5.1.3.5 Modulation
In communication applications we find that an information signal f(t),
such as speech, may be wirelessly transmitted over radio waves by first
multiplying it with a sinusoid at a relatively high frequency wo (called the
carrier frequency). Therefore, it is important to understand the frequency-
domain effects of multiplication with a sinusoid in time. We find, as shown
below, that the modulation process shifts F(ω) by ±ω₀ in frequency.
Consider a signal m(t) = f(t) cos(ω₀t). From Eq. (5.1), we have:

    F{m(t)} = M(ω) = ∫_{−∞}^{∞} f(t) cos(ω₀t) e^{−jωt} dt
    = ∫_{−∞}^{∞} f(t) (½e^{jω₀t} + ½e^{−jω₀t}) e^{−jωt} dt
    = ½ ∫_{−∞}^{∞} f(t) (e^{−j(ω−ω₀)t} + e^{−j(ω+ω₀)t}) dt
    = ½ ∫_{−∞}^{∞} f(t) e^{−j(ω−ω₀)t} dt + ½ ∫_{−∞}^{∞} f(t) e^{−j(ω+ω₀)t} dt
    = ½ F(ω − ω₀) + ½ F(ω + ω₀).    (5.43)
This is called the modulation property of the Fourier transform:
    f(t) cos(ω₀t) ↔ ½ F(ω − ω₀) + ½ F(ω + ω₀)    (5.44)
When allowing for an arbitrary cosine phase shift φ, the modulation
property is:

    f(t) cos(ω₀t + φ) ↔ ½ (e^{jφ} F(ω − ω₀) + e^{−jφ} F(ω + ω₀))    (5.45)
Example 5.4

Plot F{rect(t/3π) cos(4πt)}.
To begin, rect(t/3π) ↔ 3π sinc(3πω/2) from Entry #14 in Table
5.1. This spectrum is plotted in Fig. 5.4:
Figure 5.4. Plot of F{rect(t/3π)} = 3π sinc(3πω/2), from Example 5.4.
The modulation property (Eq. (5.44)) tells us that
F{f(t) cos(4πt)} = ½ F(ω − 4π) + ½ F(ω + 4π).
Therefore, F{rect(t/3π) cos(4πt)} =
(3π/2) sinc((ω − 4π)3π/2) + (3π/2) sinc((ω + 4π)3π/2),
as shown in Fig. 5.5.
So, for example, when φ = −π/2: f(t) sin(ω₀t) ↔ (1/2j) F(ω − ω₀) − (1/2j) F(ω + ω₀).
Figure 5.5. Plot of F{rect(t/3π) cos(4πt)} = (3π/2) sinc((ω − 4π)3π/2) + (3π/2)
sinc((ω + 4π)3π/2), from Example 5.4.
Modulation, therefore, shiftsʳ the energy/power of f(t) from the
original (“baseband”) frequency range that is occupied by F(ω) to
the frequency range in the vicinity of ω₀. If ω₀ is high enough so that
the shifted copies of F(ω) are non-overlapping in frequency, then f(t)
may easily be recovered from the modulated signal f(t) cos(ω₀t). This
will be shown in Section 5.2.4.
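The shift of baseband energy up to the carrier frequency is visible in a sampled simulation. A NumPy sketch (the 50 Hz carrier, sampling rate, and pulse width are arbitrary illustration values):

```python
import numpy as np

fs = 1000.0
t = np.arange(-2.0, 2.0, 1 / fs)
f_base = np.where(np.abs(t) <= 0.5, 1.0, 0.0)       # rect pulse: energy near 0 Hz
f_mod = f_base * np.cos(2 * np.pi * 50.0 * t)       # modulated by a 50 Hz carrier

# locate the frequency bin holding the largest spectral magnitude
freqs = np.fft.fftfreq(len(t), 1 / fs)
peak_base = abs(freqs[np.argmax(np.abs(np.fft.fft(f_base)))])
peak_mod = abs(freqs[np.argmax(np.abs(np.fft.fft(f_mod)))])
```

`peak_base` sits at DC while `peak_mod` sits at the carrier, matching Eq. (5.44)'s two shifted copies of F(ω).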
5.1.3.6 Frequency shift
We saw in the previous section that modulation is the multiplication of
f(t) by a sinusoid at frequency ω₀, and it results in the shifting of F(ω)
by ±ω₀ in the frequency domain. To shift F(ω) only in one direction,
multiply f(t) by e^{±jω₀t}:

    f(t) e^{±jω₀t} ↔ F(ω ∓ ω₀)    (5.46)
5.1.3.7 Time scaling
Another interesting property of the Fourier transform is that of scaling in
time. We consider a signal f(t) and scale independent variable t by
ʳ Modulation, or the frequency shifting of F(ω), is also commonly referred to as frequency
heterodyning.
constant factor a > 0, resulting in signal f(at). Depending on the
numerical value of a, signal f(t) is either expanded or compressed in
time.ˢ Finding the Fourier transform of f(at) when a > 0:

    F{f(at)} = ∫_{−∞}^{∞} f(at) e^{−jωt} dt = ∫_{−∞}^{∞} f(β) e^{−jω(β/a)} d(β/a)
             = (1/a) ∫_{−∞}^{∞} f(β) e^{−jβ(ω/a)} dβ = (1/a) F(ω/a).

For a < 0, F{f(at)} = −(1/a) F(ω/a). Combining these results:

    f(at) ↔ (1/|a|) F(ω/a)    (5.47)
Equation (5.47) leads to the observation: if a time signal is expanded
by factor a then its spectrum is contracted by factor a.
Example 5.5
Find the Fourier transform of x(t) = rect(2t — 2).
Method 1 — consider x(t) to be the result of first time-shifting, then
time-scaling, a rectangular pulse function:
    rect(t) ↔ sinc(ω/2)                                  Table 5.1, #14
    rect(t − 2) ↔ sinc(ω/2) e^{−j2ω}                     Table 5.2, #2
    rect(2t − 2) ↔ (1/|2|) sinc((ω/2)/2) e^{−j2(ω/2)}    Table 5.2, #9
    rect(2t − 2) ↔ 0.5 sinc(ω/4) e^{−jω}                 simplification

Method 2 — consider x(t) to be the result of first time-scaling, then
time-shifting, a rectangular pulse function:

    rect(t) ↔ sinc(ω/2)                                  Table 5.1, #14
    rect(2t) ↔ (1/|2|) sinc((ω/2)/2)                     Table 5.2, #9
    rect(2t) ↔ 0.5 sinc(ω/4)                             simplification
    rect(2(t − 1)) ↔ 0.5 sinc(ω/4) e^{−jω}               Table 5.2, #2
    rect(2t − 2) ↔ 0.5 sinc(ω/4) e^{−jω}                 simplification
ˢ f(at) with 0 < a < 1 is an expansion of f(t), by factor a, away from t = 0; f(at) with
a > 1 is a compression of f(t) in time, by factor a, toward t = 0.
Note that the order of operations does not matter, although Method 1
seems more logical due to the stated argument of the rectangular pulse in
x(t): (2t − 2) implies first a shift, then a scaling.
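The result of Example 5.5 can be spot-checked by numerically integrating Eq. (5.1) at a test frequency (a NumPy sketch; ω = 3 and the integration window are arbitrary, and sinc(a) = sin(a)/a as in this book):

```python
import numpy as np

w = 3.0

t = np.linspace(-5.0, 5.0, 1000001)
dt = t[1] - t[0]
x = np.where(np.abs(2 * t - 2) <= 0.5, 1.0, 0.0)    # rect(2t - 2)

F_numeric = np.sum(x * np.exp(-1j * w * t)) * dt

# Example 5.5 claims F{rect(2t - 2)} = 0.5*sinc(w/4)*e^{-jw}
F_closed = 0.5 * (np.sin(w / 4) / (w / 4)) * np.exp(-1j * w)
err = abs(F_numeric - F_closed)
```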
5.1.3.8 Parseval’s Theorem
Parseval’s Theoremᵗ relates the energy of a signal calculated in the time
domain to the energy as calculated in the frequency domain. To begin,
assume that signal f(t) may be complex-valued and that f(t) ↔ F(ω).
The energy of f(t) is

    E = ∫_{−∞}^{∞} f(t) f*(t) dt = ∫_{−∞}^{∞} f(t) {(1/2π) ∫_{−∞}^{∞} F*(ω) e^{−jωt} dω} dt
      = (1/2π) ∫_{−∞}^{∞} F*(ω) {∫_{−∞}^{∞} f(t) e^{−jωt} dt} dω
      = (1/2π) ∫_{−∞}^{∞} F*(ω) F(ω) dω,    (5.48)

leading to a statement of the theorem:

    E = ∫_{−∞}^{∞} |f(t)|² dt = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω    (5.49)
Example 5.6
Find the energy of sinc(t).

    Energy{sinc(t)} = ∫_{−∞}^{∞} |sinc(t)|² dt = (1/2π) ∫_{−∞}^{∞} |F{sinc(t)}|² dω
    = (1/2π) ∫_{−∞}^{∞} |π rect(ω/2)|² dω = (1/2π) ∫_{−1}^{1} π² dω
    = (1/2π) π² (2) = π.
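The time-domain side of Example 5.6 can be checked by brute-force integration, which converges (slowly, because the sinc² tail decays like 1/t²) to the same value π. A NumPy sketch using a midpoint rule over an arbitrary finite window:

```python
import numpy as np

# midpoint-rule integration of |sinc(t)|^2 = (sin t / t)^2 over t > 0,
# doubled because the integrand is even; Parseval predicts the value pi
dt = 0.002
t = (np.arange(2_500_000) + 0.5) * dt      # midpoint samples on (0, 5000)
energy = 2 * np.sum((np.sin(t) / t) ** 2) * dt

err = abs(energy - np.pi)
```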
Next, we summarize the properties of the continuous-time Fourier trans-
form in Table 5.2.
ᵗ This theorem is also known in the literature as the Rayleigh Theorem.
Table 5.2. Table of continuous-time Fourier transform properties.
1. Linearity: (Eq. (5.35))
    a f₁(t) + b f₂(t) ↔ a F₁(ω) + b F₂(ω)

2. Time shift: (Eq. (5.38))
    f(t ± t₀) ↔ F(ω) e^{±jωt₀}

3. Convolution: (Eq. (5.41))
    f₁(t) ∗ f₂(t) ↔ F₁(ω) F₂(ω)

4. Multiplication: (Eq. (5.42))
    f₁(t) f₂(t) ↔ (1/2π) {F₁(ω) ∗ F₂(ω)}

5. Modulation: (Eq. (5.44))
    f(t) cos(ω₀t) ↔ ½ F(ω − ω₀) + ½ F(ω + ω₀)

6. Modulation (arbitrary carrier phase): (Eq. (5.45))
    f(t) cos(ω₀t + φ) ↔ (e^{jφ}/2) F(ω − ω₀) + (e^{−jφ}/2) F(ω + ω₀)

7. Frequency shift: (Eq. (5.46))
    f(t) e^{±jω₀t} ↔ F(ω ∓ ω₀)

8. Duality: (Eq. (5.39))
    F(t) ↔ 2π f(−ω)

9. Time scaling: (Eq. (5.47))
    f(at) ↔ F(ω/a)/|a|

(Continued)
Table 5.2. (Continued )
10. Time reversal: (See ᵘ below)
    f(−t) ↔ F(−ω)

11. Conjugation in time: (See ᵛ below)
    f*(t) ↔ F*(−ω)

12. Real, even f_e(t) = Re{f_e(−t)}: (See footnote on p. 164)
    f_e(t) ↔ F{f_e(t)} = 2 ∫₀^∞ f_e(t) cos(ωt) dt

13. Real, odd f_o(t) = Re{−f_o(−t)}: (See footnote on p. 164)
    f_o(t) ↔ F{f_o(t)} = −2j ∫₀^∞ f_o(t) sin(ωt) dt

14. Differentiation in time: (See ʷ below)
    (d/dt) f(t) ↔ jω F(ω)

15. Cumulative integral: (See ˣ below)
    ∫_{−∞}^t f(β) dβ = f(t) ∗ u(t) ↔ F(ω)/jω + π F(0) δ(ω)

(Continued)

ᵘ F{f(−t)} = ∫_{−∞}^{∞} f(−t) e^{−jωt} dt = ∫_{−∞}^{∞} f(β) e^{−j(−ω)β} dβ = F(−ω).
ᵛ F{f*(t)} = ∫_{−∞}^{∞} f*(t) e^{−jωt} dt = (∫_{−∞}^{∞} f(t) e^{−j(−ω)t} dt)* = F*(−ω).
ʷ (d/dt) f(t) = (d/dt)((1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω) = (1/2π) ∫_{−∞}^{∞} F(ω) (d/dt)(e^{jωt}) dω =
(1/2π) ∫_{−∞}^{∞} F(ω) jω e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} {jω F(ω)} e^{jωt} dω = F⁻¹{jω F(ω)}, or:
(d/dt) f(t) ↔ jω F(ω).
ˣ f(t) ∗ u(t) = ∫_{−∞}^{∞} f(β) u(t − β) dβ = ∫_{−∞}^t f(β) dβ; f(t) ∗ u(t) ↔ F(ω) {1/jω + π δ(ω)} =
F(ω)/jω + π F(ω) δ(ω) = F(ω)/jω + π F(0) δ(ω).
Table 5.2. (Continued )
16. Parseval’s Theorem: (for energy signals) (Eq. (5.49))
    E = ∫_{−∞}^{∞} |f(t)|² dt = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω

17. Cross-Correlation: (for energy signals) (See ʸ below)
    φ_xy(t) ↔ X*(ω) Y(ω)

18. Autocorrelation: (for energy signals) (See ᶻ below)
    φ_xx(t) ↔ X*(ω) X(ω) = |X(ω)|²
5.1.4 Graphical representation of the Fourier transform
Engineers and scientists like to have an image of mathematical formulas
and functions to facilitate their understanding. So far we have presented
graphical representations of only real signals in either time or frequency
domain. The Fourier transform of a real time-domain signal is generallyᵃᵃ
a complex function of ω. In this section, we discuss several different ways
of graphing such complex-valued waveforms. Such plots are extremely
useful in understanding the processing of signals through various devices
such as filters, amplifiers, mixers, etc.
In this textbook, plots are presented with an abscissa (x-axis) scale of
w radians/second. As you know, w = 2mf where f has units cycles per
ʸ φ_xy(t) = x*(−t) ∗ y(t), therefore F{φ_xy(t)} = F{x*(−t)} F{y(t)} = X*(ω) Y(ω).
ᶻ F{φ_xy(t)} = X*(ω) Y(ω), therefore F{φ_xx(t)} = X*(ω) X(ω).
ᵃᵃ A “real world” signal x(t) will be real, and almost always have x_e(t) ≠ 0, x_o(t) ≠ 0.
Thus, since the Fourier transform X(ω) = 2 ∫₀^∞ (x_e(t) cos(ωt) − j x_o(t) sin(ωt)) dt
(from footnote on p. 164), X(ω) will almost always be a complex function of ω.
second (Hertz). In some applications, such as communication engi-
neering, the preferred abscissa scale is frequency f in Hz. This amounts
to a change in scale by factor 27, which can be readily accomplished.
5.1.4.1 Rectangular coordinates
In this form of visualization, a three-dimensional plot is created using the
real part of F(ω), the imaginary part of F(ω), and ω as three axes. For
example, Fig. 5.6 shows F{rect(t − 20)} = sinc(ω/2) e^{−j20ω} plotted
this way. Instead of a 3-D plot, frequently it is more instructive to plot
two 2-D plots; one of Re{F(ω)} vs. ω, and another of Im{F(ω)} vs. ω.

Figure 5.6. A graph of sinc(ω/2) e^{−j20ω} vs. ω/π.

Consider Fig. 5.7, which shows the 3-D plot of a given F(ω) in
rectangular coordinates (as in Fig. 5.6):
Figure 5.7. A 3-D graph of complex-valued F(ω).
Figures 5.8 and 5.9 display the same information using a pair of 2-D
plots, one for the real part vs. frequency and the other for the imaginary
part vs. frequency:
Figure 5.8. Re{F(ω)} vs. ω, corresponding to Fig. 5.7.

Figure 5.9. Im{F(ω)} vs. ω, corresponding to Fig. 5.7.
Yet another way of displaying a complex-valued frequency spectrum
is to draw the path that F (w) takes, in the complex plane, as w varies over
some frequency range. Essentially this is a projection of the type of 3-D
graph shown in Figs. 5.6 or 5.7 onto a plane perpendicular to the w-axis.
The resulting 2-D graph is called a Nyquist Plot. This plot requires some
extra labels to indicate the frequency associated with any point of interest
along the path of the function. Nyquist plots are used for stability analysis
in control theory.
5.1.4.2 Polar coordinates
As an alternative to the previously-presented graphs of real and imaginary
parts of F(ω), frequently it is more appropriate to plot magnitude |F(ω)|
and phase ∠F(ω) vs. ω. Figures 5.10 and 5.11 show these two graphs for
a complex frequency-domain function.ᵇᵇ In general, separately plotting
magnitude and phase components is the preferred method for displaying
complex frequency-domain waveforms.
Figure 5.10. A graph of |sinc(ω/2) e^{−jω/5}| vs. ω.

Figure 5.11. A graph of ∠{sinc(ω/2) e^{−jω/5}} vs. ω.
5.1.4.3 Graphing the amplitude of F(ω)

Consider the case where the Fourier transform gives a real, bipolar result:
for example, F(ω) = sinc(ω/2). Plotting its magnitude gives us the same
graph as in Fig. 5.10, and a plot of the phase is shown next:
ᵇᵇ Note that the magnitude plot is the same for all θ in F(ω) = sinc(ω/2) e^{−jωθ}; hence
Fig. 5.10 also pertains to the signal plotted in the previous example.
Figure 5.12. A graph of ∠sinc(ω/2) vs. ω.

You will notice that whenever sinc(ω/2) < 0, the phase angle is
chosen to be ±π so that |sinc(ω/2)| e^{±jπ} = −|sinc(ω/2)|.ᶜᶜ
If we look at Fig. 5.12 we notice that this phase plot carries minimal
information. This is because the sinc(ω/2) function is real, and phase
serves only to specify the polarity of the function. In such a case, it is
advantageous to plot F(ω) = sinc(ω/2) directly, as shown in Fig. 5.13
below:
Figure 5.13. A graph of sinc(ω/2) vs. ω.
With access to MATLAB®, all these plots can be readily generated,
thus the choice of which one to use should be based on the clarity of the
information to be conveyed and how it will be applied.
5.1.4.4 Logarithmic scales and Bode plots
Due to the logarithmic nature of human perception (e.g., we hear frequen-
cies on a logarithmic scale, as reflected by the tuning of a piano keyboard),
ᶜᶜ Any odd integer value of k will make e^{jkπ} = −1; the values chosen alternate so that the
phase angle is an odd function of ω.
and also to linearize some curves being plotted, it is common to graph one
or both axes of a spectrum on a logarithmic scale. For example, one may
choose to plot F(ω) vs. log₁₀(ω) instead of plotting F(ω) vs. ω. Compared to a linear frequency scale, the logarithmic scale expands the spacing
of frequencies as they approach zero and compresses frequencies together
as they approach infinity. As a result, a logarithmic scale never includes
zero (d.c.) nor any negative frequency values.
When plotting the magnitude spectrum |F(ω)| it is common to use
logarithmic scales for both magnitude and frequency. The resulting graph
is then proportional to log|F(ω)| vs. log(ω). A version of the log-log
plot, called a Bode magnitude plot, is where the vertical scale is
10 log₁₀|F(ω)|² and the horizontal scale is log₁₀(ω). The Bode phase
plot has vertical scale ∠F(ω) and horizontal scale log₁₀(ω). Bode magnitude and phase plots are common in system analysis, where they provide
a convenient way to combine responses of cascaded systems.
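These scale conventions are easy to check numerically. Below is a brief sketch in Python (rather than the MATLAB® used later in this chapter), using a made-up example function F(ω) = 1/(1 + jω) that is not from the text:

```python
import numpy as np

# Bode-style scales for F(w) = 1/(1 + jw), a made-up example function.
w = np.logspace(-2, 2, 5)             # log-spaced frequencies, rad/sec
F = 1.0 / (1.0 + 1j * w)

mag_db = 10 * np.log10(np.abs(F)**2)  # Bode magnitude: 10*log10(|F(w)|^2)
phase = np.angle(F)                   # Bode phase (always a linear scale)

# At w = 1 rad/sec, |F|^2 = 1/2, i.e. about -3 dB, with phase -pi/4:
print(round(mag_db[2], 2), round(phase[2], 4))
```

On a Bode plot these values would be drawn against log₁₀(ω), which is why log-spaced frequency points are generated here.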
5.1.5 Fourier transform of periodic signals
5.1.5.1 Comb function
A comb function, as discussed previously, is an impulse train. It plays an
important role in our theoretical understanding of analog-to-digital signal
conversion, as well as being a convenient tool for modeling periodic
signals.
Let us consider the Fourier transform of a comb function δ_{T₀}(t), which
is Entry #19 in Table 5.1:
Negative frequencies do not exist (when defined as #events/second) except as a
mathematical tool for efficient notation. This is clearly shown by a cosine wave at
frequency ω₀ being equal to a sum of complex exponentials at frequencies ±ω₀:
cos(ω₀t) = (1/2)e^{jω₀t} + (1/2)e^{−jω₀t}.
Phase is always plotted on a linear scale so that zero and negative phase angles may be
included.
δ_{T₀}(t) = Σ_{k=−∞}^{∞} δ(t − kT₀)  ↔  ω₀ δ_{ω₀}(ω)
  = ω₀ Σ_{k=−∞}^{∞} δ(ω − kω₀), where ω₀ = 2π/T₀. (5.50)
Clearly the time-domain function and its transform are both comb
functions: one has period T₀ seconds, while the transformed function has
period ω₀ = 2π/T₀ rad/sec. We readily conclude that if one function has
closely spaced impulses, the other function has widely spaced impulses,
and vice-versa. Figures 5.14 and 5.15 show the two impulse trains when
the period in the time domain is T₀ = 4 sec:
Figure 5.14. Impulse train δ₄(t).
Figure 5.15. Impulse train (π/2)δ_{π/2}(ω) = F{δ₄(t)}.
5.1.5.2 Periodic signals as convolution with a comb function
Periodic power signals play a very important role in linear system analysis.
Sinusoidal steady-state analysis of linear electrical circuits is an example
of this. Next we show that an arbitrary periodic signal can be considered
as the convolution product of one of its periods with an impulse train.
Let us consider the convolution of rect(t) and impulse train δ₂(t).
These two signals and their convolution product are shown in Figs. 5.16
through 5.18:
There is also a corresponding change in the areas of the impulses.
Figure 5.16. Rectangular pulse rect(t).
Figure 5.17. Impulse train δ₂(t).
Figure 5.18. Periodic signal f_p(t) = rect(t) ∗ δ₂(t).
We observe that the resulting signal is a periodic rectangular pulse train
having period T₀ = 2 sec. This may be written as:

f_p(t) = rect(t) ∗ δ₂(t) = Σ_{k=−∞}^{∞} rect(t − 2k). (5.51)
Using this formulation, we can determine the spectrum of any periodic
signal in terms of the spectrum of one of its periods. For example,
F{rect(t)} = sinc(ω/2) and F{δ₂(t)} = π Σ_{n=−∞}^{∞} δ(ω − nπ) in Fig.
5.18. Therefore, the periodic signal of rectangular pulses of width 1 and
period 2 has a spectrum represented by

F(ω) = F{rect(t) ∗ δ₂(t)} = F{rect(t)} × F{δ₂(t)}
  = sinc(ω/2) × π Σ_{k=−∞}^{∞} δ(ω − kπ)
  = Σ_{k=−∞}^{∞} π sinc(ω/2) δ(ω − kπ)
  = Σ_{k=−∞}^{∞} π sinc(kπ/2) δ(ω − kπ). (5.52)
Equation 5.52 states that F(ω) consists of impulses periodically spaced
in frequency with period π rad/sec, and having areas π sinc(kπ/2). Note
that the frequency of the kth impulse is kπ rad/sec.
Based on this example, we may generalize that the spectrum of any
periodic signal consists of spectral components present only at a discrete
set of frequencies. These frequencies are all integer multiples of a
fundamental frequency ω₀, and members of this set of frequencies are
said to be harmonically related. Further, the areas of impulses at these
frequencies are determined by the spectrum of one period of the time-domain signal.
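The impulse areas in Eq. (5.52) can be cross-checked numerically: the area of the kth impulse, π sinc(kπ/2), equals 2π Dₖ, where Dₖ is the Fourier-series coefficient of one period. A sketch in Python (not the book's MATLAB®):

```python
import numpy as np

# Check that the Fourier-series coefficients of the width-1, period-2
# pulse train are D_k = (1/2)*sinc(k*pi/2), so the impulse areas in
# Eq. (5.52) are 2*pi*D_k = pi*sinc(k*pi/2).
T0 = 2.0
w0 = 2 * np.pi / T0                        # = pi rad/sec
t = np.linspace(-T0/2, T0/2, 200001)
dt = t[1] - t[0]
fp = (np.abs(t) <= 0.5).astype(float)      # one period: rect(t)

for k in range(5):
    Dk = np.sum(fp * np.exp(-1j * k * w0 * t)) * dt / T0
    # unnormalized sinc(k*pi/2) equals np.sinc(k/2) (numpy sinc is normalized)
    Dk_analytic = 0.5 * np.sinc(k / 2)
    assert abs(Dk - Dk_analytic) < 1e-4
print("ok")
```

Note the even-indexed coefficients (other than k = 0) vanish, which is why every second impulse is missing from the spectrum of this 50%-duty-cycle pulse train.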
5.1.5.3. Exponential Fourier Series
In the previous example the periodic signal, rectangular pulses of width 1
second that repeated every 2 seconds, had frequency spectrum F(ω) =
Σ_{n=−∞}^{∞} π sinc(nπ/2) δ(ω − nπ). Recall, the expression for F(ω) =
F{f_p(t)} was obtained by representing periodic signal f_p(t) as the
convolution product of one of its periods, rect(t), with impulse train δ₂(t).
In general, any periodic signal f_p(t) may be represented this way:

f_p(t) = p(t) ∗ δ_{T₀}(t),

where p(t) equals f_p(t) over one period (0 ≤ t < T₀) and is zero elsewhere.
Figure 5.19. Exponential Fourier Series spectrum for periodic signal f_p(t) = rect(t) ∗
δ₂(t) = Σ_{n=−∞}^{∞} Dₙ e^{jnπt}.
5.1.5.4 Trigonometric Fourier Series
Two other Fourier series formulations are commonly discussed in text-
books. They are the Trigonometric Fourier Series (using sine and cosine
functions), and a compact version of it. We will present these next. As
you will notice, once the Fourier coefficients in one formulation are
determined, the coefficients in all other formulations can be easily found
from them.
Let us consider Eq. (5.29), the complex exponential form of the Fourier
series:

f_p(t) = Σ_{n=−∞}^{∞} Dₙ e^{jnω₀t}
  = D₀ + Σ_{n=1}^{∞} { Dₙ(cos(nω₀t) + j sin(nω₀t))
      + D₋ₙ(cos(nω₀t) − j sin(nω₀t)) }
  = D₀ + Σ_{n=1}^{∞} { (Dₙ + D₋ₙ) cos(nω₀t)
      + j(Dₙ − D₋ₙ) sin(nω₀t) }. (5.58)
Define:

a₀ = D₀,
aₙ = Dₙ + D₋ₙ (n = 1, 2, 3, …), (5.59)
bₙ = j(Dₙ − D₋ₙ) (n = 1, 2, 3, …).
Then we obtain the expression for the Trigonometric Fourier Series:

Trigonometric Fourier Series:
f_p(t) = a₀ + Σ_{n=1}^{∞} { aₙ cos(nω₀t) + bₙ sin(nω₀t) }. (5.60)
Coefficient a₀ is the average value of f_p(t), and b₀ does not appear in
the expression (b₀ = 0). Equation (5.60) separates the periodic signal
f_p(t) into its even (cosine terms and constant) and odd (sine terms)
components. Also, by replacing complex exponentials with sines and
cosines, there is no need to have any negative frequency values.
If one wishes to find the Trigonometric Fourier Series coefficients
directly from f_p(t), then the following expressions may be used (as found
from Eqs. (5.57) and (5.59)):

a₀ = (1/T₀) ∫_{T₀} f_p(t) dt, (5.61)
aₙ = (2/T₀) ∫_{T₀} f_p(t) cos(nω₀t) dt (n = 1, 2, 3, …), (5.62)
bₙ = (2/T₀) ∫_{T₀} f_p(t) sin(nω₀t) dt (n = 1, 2, 3, …). (5.63)
In most cases we deal with time-domain periodic signals that are real,
in which case the Trigonometric Fourier Series coefficients aₙ and bₙ
will be real. The Exponential Fourier Series coefficients are usually
complex, even when f_p(t) is real. These properties are shown below.

Always true: Dₙ = ½(aₙ − jbₙ),
  D₋ₙ = ½(aₙ + jbₙ) for n > 0, (5.64)
When f_p(t) is real: aₙ = 2 Re{Dₙ} = 2 Re{D₋ₙ},
  bₙ = −2 Im{Dₙ}. (5.65)
Example 5.7
Find the trigonometric Fourier series coefficients of periodic function
f_p(t) = Σ_{k=−∞}^{∞} rect(t − 4k).
Figure 5.20. Periodic signal f_p(t) = rect(t) ∗ δ₄(t), in Example 5.7.
The period of this periodic pulse train, shown in Fig. 5.20, is T₀ = 4.
Over the period centered at t = 0, the function f_p(t) = rect(t). Also,
ω₀ = 2π/T₀ = π/2. We may calculate the trigonometric Fourier series
coefficients using Eqs. (5.61)–(5.63):
a₀ = (1/4) ∫_{−2}^{2} f_p(t) dt = (1/4) ∫_{−1/2}^{1/2} (1) dt = 1/4 = 0.25, (5.66)
aₙ = (2/4) ∫_{−1/2}^{1/2} (1) cos(nπt/2) dt
  = (1/nπ){sin(nπ/4) − sin(−nπ/4)} = (2/nπ) sin(nπ/4) (n = 1, 2, 3, …), (5.67)
bₙ = (2/4) ∫_{−1/2}^{1/2} (1) sin(nπt/2) dt = 0 (n = 1, 2, 3, …). (5.68)
Figure 5.21. Plot of Trigonometric Fourier Series coefficients a₀–a₄₀ for periodic signal
f_p(t) = rect(t) ∗ δ₄(t) in Example 5.7.
The bₙ's are all zero because f_p(t) is an even function. The spectrum
aₙ vs. n is shown in Fig. 5.21.
™™ The Trigonometric Fourier Series of an even signal contains only even (cosine) terms,
and hence all sine terms are missing because by, ba, by ... = 0. Similarly, an odd signal will
have ag, dy, 2, ds... = 0 in its Trigonometric Fourier Series.Frequency Analysis of Continuous-Time Signals 193
5.1.5.5 Compact Trigonometric Fourier Series
A compact form of the trigonometric Fourier series is readily obtained by
combining the sine and cosine terms at each frequency index n via
Euler's identity. We obtain:

aₙ cos(nω₀t) + bₙ sin(nω₀t) = Cₙ cos(nω₀t + θₙ). (5.69)
This gives us the expression for the Compact Trigonometric Fourier
Series:
‘Compact ce
Trigonometric p(t) = Co + Dira1 Cn COS(NWot + On). (5.70)
Fourier Series
Where, when f,(t) is real:
Co = ay = Do, (5.71)
Cy = VaR + BR = 21D p| (n= 123), (8.72)
0, = Tan-* (-2) = 2D,
(5.73)
As shown above, coefficients {Cₙ, θₙ} can be determined from
either {aₙ, bₙ} or Dₙ. With the Cₙ's and θₙ's determined, one would plot
the magnitude spectrum (Cₙ vs. n) and the phase spectrum (θₙ vs. n) as is
done with the Fourier transform of a signal.
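The conversion in Eqs. (5.71)–(5.73) can be sketched in a few lines of Python (not the book's MATLAB®); the coefficient values below are made up for illustration:

```python
import math

# Eqs. (5.69)-(5.73): convert one (a_n, b_n) pair to compact form
# C_n*cos(n*w0*t + theta_n).  The coefficient values are made up.
a_n, b_n, n, w0 = 3.0, 4.0, 2, math.pi / 2

C_n = math.hypot(a_n, b_n)        # sqrt(a_n^2 + b_n^2), Eq. (5.72)
theta_n = math.atan2(-b_n, a_n)   # four-quadrant version of tan^-1(-b_n/a_n)

# Both forms agree at arbitrary time instants:
for t in (0.0, 0.3, 1.7):
    lhs = a_n * math.cos(n * w0 * t) + b_n * math.sin(n * w0 * t)
    rhs = C_n * math.cos(n * w0 * t + theta_n)
    assert abs(lhs - rhs) < 1e-12
print(C_n)   # 5.0
```

Using a four-quadrant arctangent avoids the sign ambiguity of tan⁻¹(−bₙ/aₙ) when aₙ < 0.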
5.1.5.6 Parseval’s Theorem
A periodic signal f_p(t), with period T₀ = 2π/ω₀, has power P_{f_p} = energy
per unit time:

P_{f_p} = (1/T₀) ∫_{T₀} |f_p(t)|² dt. (5.74)

The power of signal Dₙe^{jnω₀t} is equal to |Dₙ|². This is easy to show
using the above definition, and is something that you would have encountered
in your basic circuit theory course. One can also show that for two
complex exponentials at different frequencies, Dₙe^{jnω₀t} and Dₘe^{jmω₀t}
(n ≠ m), the power of their sum is equal to the sum of their powers:

Power{Dₙe^{jnω₀t} + Dₘe^{jmω₀t}} = |Dₙ|² + |Dₘ|². (5.75)
The result, although derived for the case of superposition of two com-
plex exponential signals, can be readily generalized for multiple sinusoids,
as well as for the case when the sinusoidal frequencies are not harmonically related. Now we will state Parseval's Theorem in terms of Fourier
Series coefficients Dₙ:

Parseval's Theorem for the Fourier Series:
P_{f_p} = (1/T₀) ∫_{T₀} |f_p(t)|² dt = Σ_{n=−∞}^{∞} |Dₙ|². (5.76)
Or equivalently, when f_p(t) is real:

(1/T₀) ∫_{T₀} f_p²(t) dt = C₀² + ½ Σ_{n=1}^{∞} Cₙ². (5.77)

Parseval's Theorem states that the power of a periodic signal f_p(t) is equal
to the sum of the powers in its sinusoidal components.
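Parseval's Theorem can be demonstrated on the pulse train of Example 5.7, whose time-domain power is (1/T₀) times the integral of |f_p|² over one period, i.e. 1/4. A Python sketch (not the book's MATLAB®):

```python
import numpy as np

# Parseval check, Eq. (5.76), for the pulse train of Example 5.7:
# time-domain power is (1/4)*integral of |fp|^2 over one period = 1/4.
# Here D_0 = 0.25 and, for n >= 1, D_n = a_n/2 = (1/(n*pi))*sin(n*pi/4)
# (real, since b_n = 0); the coefficient sum must converge to 1/4.
n = np.arange(1, 20001)
Dn = (1 / (n * np.pi)) * np.sin(n * np.pi / 4)
power_freq = 0.25**2 + 2 * np.sum(Dn**2)   # |D_0|^2 + 2*sum_{n>=1}|D_n|^2
assert abs(power_freq - 0.25) < 1e-4
print("ok")
```

Because |Dₙ|² decays only as 1/n², many terms are needed; truncating the sum at 20,000 terms leaves a tail on the order of 10⁻⁵.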
Power = (1/T₀) ∫_{T₀} |Dₙe^{jnω₀t}|² dt = |Dₙ|² (1/T₀) ∫_{T₀} |e^{jnω₀t}|² dt
= |Dₙ|² (1/T₀) ∫_{T₀} (1) dt = |Dₙ|² (1/T₀)(T₀) = |Dₙ|².
Proof: Let f_p(t) = Dₙe^{jnω₀t} + Dₘe^{jmω₀t} (n ≠ m). Then
P_{f_p} = (1/T₀) ∫_{T₀} (Dₙe^{jnω₀t} + Dₘe^{jmω₀t})(Dₙ*e^{−jnω₀t} + Dₘ*e^{−jmω₀t}) dt.
Expanding gives four terms; the two cross terms contain e^{±j(n−m)ω₀t}, which
integrate to zero over a period when n ≠ m. What remains is
P_{f_p} = (1/T₀) ∫_{T₀} |Dₙ|² dt + (1/T₀) ∫_{T₀} |Dₘ|² dt = |Dₙ|² + |Dₘ|².
5.1.6 Summary of Fourier transformations for continuous-time
signals

Table 5.3. Summary of Fourier transformations for continuous-time signals.

Continuous-Time Fourier Transform (CTFT): continuous in time → continuous in frequency.
  F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt,  f(t) ↔ F(ω)

Inverse Continuous-Time Fourier Transform (ICTFT): continuous in frequency → continuous in time.
  f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω,  f(t) ↔ F(ω)

Fourier Series (calculation of Dₙ): continuous, periodic in time → discrete in frequency.
  Dₙ = (1/T₀) ∫_{T₀} f_p(t) e^{−jnω₀t} dt

Fourier Series (reconstruction of f_p(t)): discrete in frequency → continuous, periodic in time.
  f_p(t) = Σ_{n=−∞}^{∞} Dₙ e^{jnω₀t}
5.2 Practical Applications
5.2.1 Frequency scale of a piano keyboard
The frequency domain may seem to be, upon first consideration, a strange
place to analyze signals. After all, each of us can sense the passing of time
and the amplitude-vs.-time waveform description of a signal makes perfect
sense. The piano keyboard, however, has such direct connection to the
frequency domain that we must mention it here as a practical application
of frequency-domain signal representation.
Pressing a key on the piano keyboard activates a mechanism to produce
a sound whose energy is concentrated at a specific frequency. Comparing
the frequencies corresponding to the piano keys, one will notice that they
are logarithmically spaced (with respect to key index) in frequency. That
is, the nth key frequency is a constant value c times the (n − 1)th key
frequency:

f_{n+1} = c fₙ. (5.78)
This means that shifting to the right by m keys will increase the
frequency by the factor cᵐ:

f_{n+m} = cᵐ fₙ. (5.79)
On a standard piano keyboard the frequency doubles after moving to
the right by 12 keys (including both black and white):
f_{n+12} = c¹² fₙ = 2fₙ. (5.80)
From this relationship, we may solve for c and arrive at the expression:
f_{n+1} = 2^{1/12} fₙ ≈ 1.0595 fₙ. (5.81)
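The relationships in Eqs. (5.78)–(5.81) amount to two lines of arithmetic; a Python sketch using the A-440 reference mentioned later in Fig. 5.22:

```python
# Eqs. (5.78)-(5.81): each piano key's frequency is 2**(1/12) times the
# previous key's, so moving 12 keys up doubles the frequency (one octave).
c = 2 ** (1 / 12)                  # semitone ratio, about 1.0595
f_A4 = 440.0                       # A-440 tuning reference

f_semitone_up = c * f_A4           # one key to the right
f_octave_up = (c ** 12) * f_A4     # twelve keys to the right

print(round(c, 4))                 # 1.0595
print(round(f_octave_up, 1))       # 880.0
```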
Why are the frequencies of keys on a piano keyboard uniformly spaced
on a log scale? Why not uniform spacing on a linear scale? The answer is
that the human sense of hearing perceives frequency of sound on a log
scale. It is no wonder that shifting a melody by 12 notes (one octave)
results in the same-sounding “key.”
In addition to doubling of frequency with each octave, there exist other
mathematical relationships between frequency ratios of common musical
intervals in the 12-tone equal temperament tuning method (uniform spa-
cing on a log frequency scale):
2^{1/12} ≈ 1.0595: minor 2nd
2^{2/12} ≈ 1.1225: Major 2nd
2^{3/12} ≈ 1.1892: minor 3rd
2^{4/12} ≈ 1.2599: Major 3rd
2^{5/12} ≈ 1.3348: perfect 4th
2^{6/12} ≈ 1.4142: augmented 4th, diminished 5th
2^{7/12} ≈ 1.4983: perfect 5th
2^{8/12} ≈ 1.5874: minor 6th
2^{9/12} ≈ 1.6818: Major 6th
2^{10/12} ≈ 1.7818: minor 7th
2^{11/12} ≈ 1.8877: Major 7th
Amazingly, the human mind perceives some frequency steps as
sounding sad and others as sounding happy (minor and major intervals,
respectively). Therefore, there may be emotional information conveyed
by a pair of frequencies!
Figure 5.22. Frequencies of piano keys over the middle octave, with A-440 tuning.
Note that each key frequency increases by the factor 1.0595.
5.2.2 Frequency-domain loudspeaker measurement
Aside from power-handling capability, the performance of a loudspeaker
is usually characterized by a plot of its frequency magnitude response.
How is this response measured? First, the loudspeaker is placed in an
anechoic chamber (essentially a room without echoes, due to sound
absorptive materials on the walls, floor and ceiling) and hooked up to an
amplifier. A high-quality microphone, one having nearly flat frequency
response in the audio range, is placed in front and at a specific distance
from the speaker. Then the amplifier is driven by a pure sinusoid at a fixed
amplitude and frequency. The resulting microphone output signal’s ampli-
tude is measured and recorded. The measurement is repeated for different
frequencies in the audio range, and a curve is then plotted to include the
resulting data points. This curve is plotted on a dB scale (mentioned later
in this chapter). Because phase response is usually not measured, the
loudspeaker is not fully characterized by this measurement. Even so, the
information provided by a magnitude frequency plot is useful for rating
the quality of a loudspeaker and judging its suitability for a specific
application.
Loudspeaker systems often have three speakers: a low-frequency
woofer, a midrange, and a high-frequency tweeter. Because no single
speaker can provide an acceptable output signal over the audio frequency
range of 20 Hz to 20 kHz, the three speakers' output pressure waveforms blend
together to provide a fairly uniform output power vs. frequency. Ideally,
of course, the frequency magnitude response of the loudspeaker system
should be constant over the frequency range of interest. Here we see the
usefulness of frequency-domain concepts in a practical application.
Some argue that the human ear is more sensitive to magnitude than to phase, but it
depends on the situation.
5.2.3. Effects of various time-domain operations on frequency
magnitude and phase
In this section, we consider how some basic time-domain operations
affect the magnitude and phase components of the frequency spectrum.
We base our conclusions on the Fourier transform properties given in
Table 5.2.
(a) Multiplication by a constant:
Since af(t) ↔ aF(ω), we see that multiplying by a constant in the
time domain results in multiplication by the same constant in the
frequency domain. This constant may be called the gain factor.
When the constant a is positive real, then magnitude scales by a
and phase is unchanged: |aF(ω)| = |a||F(ω)| = a|F(ω)|;
∠aF(ω) = ∠a + ∠F(ω) = ∠F(ω). When gain factor a is
complex, then both magnitude and phase are affected: |aF(ω)| =
|a||F(ω)|, ∠aF(ω) = ∠a + ∠F(ω).
(b) Adding a constant:
Since f₁(t) + f₂(t) ↔ F₁(ω) + F₂(ω), we see that adding a constant
to f(t) results in the addition of an impulse function at frequency
ω = 0 in the frequency domain: f(t) + a ↔ F(ω) + a·2πδ(ω).
Hence magnitude and phase are unaffected at all nonzero
frequencies.
(c) Time delay:
Since f(t − t₀) ↔ F(ω)e^{−jωt₀}, delaying f(t) by t₀ seconds results
in no change in the magnitude spectrum: |F(ω)e^{−jωt₀}| =
|F(ω)||e^{−jωt₀}| = |F(ω)|. The phase, however, is modified by an
At zero frequency, magnitude becomes infinite and phase is meaningless.
additive frequency-dependent linear offset: ∠{F(ω)e^{−jωt₀}} =
∠F(ω) + ∠e^{−jωt₀} = ∠F(ω) − ωt₀. Note that this linear offset will
have negative slope for a time delay, and positive slope for a time
advance.
(d) Time reversal:
We will consider the case where time-domain signal f(t) is real. Since
f(−t) ↔ F(−ω) and f*(t) ↔ F*(−ω), the effects of time reversal
may be determined by both conjugating and time-reversing f(t):
f(−t) = f*(−t) ↔ F*(ω). Since we know that |F(ω)| is an even
function of frequency when f(t) is real, we conclude that the
magnitude spectrum of real f(t) will not change from time reversal.
On the other hand, ∠F(ω) is an odd function of frequency when f(t)
is real, so that time reversal results in a negation of the phase
spectrum.
(e) Compression or expansion in time:
From Property #9 in Table 5.2, f(at) ↔ F(ω/a)/|a|. Assume
constant a is positive real. When a > 1 we are compressing f(t)
in time (towards t = 0), causing both magnitude and phase of F(ω)
to stretch out in frequency (away from ω = 0). When 0 < a < 1 we
are stretching f(t) in time, causing both magnitude and phase of
F(ω) to compress in frequency.
(f) Adding a delayed version of the signal to itself:
Let g(t) = f(t) + f(t − t₀). How do the magnitude and phase
spectra of G(ω) compare to those of F(ω)? To answer this question,
we rewrite g(t) = f(t) ∗ {δ(t) + δ(t − t₀)}, or G(ω) = F(ω) ×
F{δ(t) + δ(t − t₀)} = F(ω){1 + e^{−jωt₀}}. The magnitude spectrum is |G(ω)| = |F(ω)||1 + e^{−jωt₀}| = 2|F(ω)||cos(ωt₀/2)|, and
we see that the magnitude response is multiplied by a comb-like function
having zeros at ω = π/t₀, 3π/t₀, 5π/t₀, …. The phase spectrum is
∠G(ω) = ∠{F(ω)(1 + e^{−jωt₀})} = ∠F(ω) + ∠(1 + e^{−jωt₀}) =
∠F(ω) + Arg{−ωt₀}/2.
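The magnitude identity used in case (f) is easy to confirm numerically; a Python sketch (not the book's MATLAB®), with a made-up delay value:

```python
import cmath, math

# Identity behind case (f): |1 + e^{-j*w*t0}| = 2*|cos(w*t0/2)|, the
# comb-like magnitude factor produced by adding a delayed copy.
t0 = 0.8
for w in (0.0, 1.0, 2.5, math.pi / t0, 7.3):
    lhs = abs(1 + cmath.exp(-1j * w * t0))
    rhs = 2 * abs(math.cos(w * t0 / 2))
    assert abs(lhs - rhs) < 1e-12
print("ok")   # a zero occurs at w = pi/t0, as predicted
```

The subtraction case (g) follows the same pattern with |1 − e^{−jωt₀}| = 2|sin(ωt₀/2)|.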
(g) Subtracting a delayed version from itself:
Let g(t) = f(t) − f(t − t₀). How do the magnitude and phase
spectra of G(ω) compare to those of F(ω)? To answer this question,
we rewrite g(t) = f(t) ∗ {δ(t) − δ(t − t₀)}, or G(ω) = F(ω) ×
F{δ(t) − δ(t − t₀)} = F(ω){1 − e^{−jωt₀}}. The magnitude spectrum is |G(ω)| = |F(ω)||1 − e^{−jωt₀}| = 2|F(ω)||sin(ωt₀/2)|, and
we see that the magnitude response is multiplied by a comb-like
function having zeros at ω = 0, 2π/t₀, 4π/t₀, …. The phase spectrum is
∠G(ω) = ∠{F(ω)(1 − e^{−jωt₀})} = ∠F(ω) + ∠(1 − e^{−jωt₀}) =
∠F(ω) + Arg{π − ωt₀}/2.
5.2.4 Communication by frequency shifting
One of the most important applications of signal processing is communi-
cations, which is the transfer of information over distance. This is espe-
cially true in wireless communications, because typically the message
signal is converted to a higher frequency range for efficient transmission
using electromagnetic waves.
|1 + e^{−jωt₀}| = √((1 + cos(ωt₀))² + (sin(ωt₀))²)
= √(1 + 2cos(ωt₀) + cos²(ωt₀) + sin²(ωt₀)) = √(2 + 2cos(ωt₀))
= √(4cos²(ωt₀/2)) = 2|cos(ωt₀/2)|.
∠(1 + e^{−jωt₀}) can be shown to equal Arg{−ωt₀}/2, where −π/2 < Arg{−ωt₀}/2 < π/2; this is half
the principal value of −ωt₀.
|1 − e^{−jωt₀}| = √((1 − cos(ωt₀))² + (sin(ωt₀))²)
= √(1 − 2cos(ωt₀) + cos²(ωt₀) + sin²(ωt₀)) = √(2 − 2cos(ωt₀))
= √(4sin²(ωt₀/2)) = 2|sin(ωt₀/2)|.
∠(1 − e^{−jωt₀}) can be shown to equal Arg{π − ωt₀}/2, where −π/2 < Arg{π − ωt₀}/2 < π/2; this is half
the principal value of π − ωt₀.
Consider the magnitude spectrum |M(ω)| = Λ(ω/2W), pictured in
Fig. 5.23, which we will use to model a message signal (e.g. audio or
video). This is called a baseband signal since its spectrum represents the
signal in its original form. The energy of M(ω) is distributed over
frequency range (bandwidth) |ω| < W rad/sec. Is it possible to shift the
information that M(ω) represents to a higher frequency range, the
transmission band, and then back to baseband, without any loss of
information? The answer is yes, when the bandwidth of M(ω) is finite as
we have assumed.
Figure 5.23. Sample baseband spectrum M(ω).
In the time domain, represent the message as m(t) = F⁻¹{M(ω)}.
Since our goal is to shift M(ω) to a higher frequency range, we note the
frequency shift property of the Fourier transform (#7 in Table 5.2):

m(t)e^{jω₀t} ↔ M(ω − ω₀).
This does exactly what we need, but the problem is that in the real
world we cannot generate e^{jω₀t}. Instead, some real signal must be used.
The solution is to instead multiply m(t) by cos(ω₀t) = ½e^{jω₀t} +
½e^{−jω₀t}. This is the modulation property (#5 in Table 5.2):

m(t) cos(ω₀t) ↔ ½M(ω − ω₀) + ½M(ω + ω₀). (5.82)
Thus, by multiplying message signal m(t) with a high-frequency
sinusoid cos(ω₀t) we shift a scaled version of baseband spectrum M(ω)
up and down in frequency, as shown in Fig. 5.24. The closeness of
transmission band center frequency ω₀ to the baseband is exaggerated
here; normally ω₀ ≫ the bandwidth of m(t). Note that we have achieved our
goal of shifting an exact copy of the information in signal m(t) to a higher
Figure 5.24. Spectrum of m(t) cos(ω₀t), which is ½M(ω − ω₀) + ½M(ω + ω₀).
frequency range! The cosine used here is called the carrier wave, because
its amplitude now reflects changes in m(t) and thus carries its information
over the transmission frequency band. In general, multiplying m(t) by a
high-frequency carrier is called Amplitude Modulation (AM).
To recover m(t) from the signal sent to the receiver, m(t) cos(ω₀t),
one performs another modulation operation (called demodulation):

{m(t) cos(ω₀t)}{2 cos(ω₀t)} = m(t) · 2cos²(ω₀t)
  = m(t)(1 + cos(2ω₀t))
  = m(t) + m(t) cos(2ω₀t), (5.83)
whose frequency domain spectrum is

M(ω) + ½M(ω − 2ω₀) + ½M(ω + 2ω₀), (5.84)
as shown in Fig. 5.25:

Figure 5.25. Spectrum of the demodulated signal in Eq. (5.84).
The spectral copies at ±2ω₀ are easily eliminated using a lowpass
filter. What remains is the original baseband spectrum M(ω), and our
amplitude modulation system has succeeded in recovering message signal
m(t) from the amplitude-modulated carrier signal m(t) cos(ω₀t). Other
Analog lowpass filtering is covered in Ch. 8.
modulation methods change either the instantaneous carrier frequency or
its phase in direct proportion to m(t). In a modern approach, digital
communication systems first convert an analog message signal into a
stream of 1’s and 0’s using analog-to-digital conversion (see Ch. 6) before
modulating a carrier with binary-coded information.
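The full modulate/demodulate/lowpass chain of Eqs. (5.82)–(5.84) can be simulated in a few lines; a Python sketch (not the book's MATLAB®), where all frequencies and lengths are made up for illustration:

```python
import numpy as np

# Sketch of Eqs. (5.82)-(5.84): amplitude-modulate a message, demodulate
# with 2*cos(w0*t), and remove the copies at +/-2*w0 with a crude
# FFT-masking lowpass filter.
fs, N = 1000.0, 4000
t = np.arange(N) / fs
m = np.cos(2 * np.pi * 2 * t)            # 2 Hz message
carrier = np.cos(2 * np.pi * 50 * t)     # 50 Hz carrier
rx = (m * carrier) * (2 * carrier)       # = m + m*cos(2*w0*t)

RX = np.fft.fft(rx)
f = np.fft.fftfreq(N, 1 / fs)
RX[np.abs(f) > 10] = 0                   # keep only the baseband copy
m_hat = np.fft.ifft(RX).real

assert np.max(np.abs(m_hat - m)) < 1e-6  # message recovered
print("ok")
```

FFT masking stands in here for the analog lowpass filter of Ch. 8; any filter passing the message band while rejecting 2ω₀ would do.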
5.2.5 Spectral analysis using time windowing
Assume signal x(t) is only known over some finite-width time interval
t₁ ≤ t ≤ t₂. In Example 5.8, where x(t) = cos(ω₀t) is observed through
rectangular window w(t) = rect(t/T), the relevant transform pairs are

cos(ω₀t) ↔ πδ(ω + ω₀) + πδ(ω − ω₀),
rect(t/T) ↔ T sinc(ωT/2),

so that, by the multiplication property,

cos(ω₀t) rect(t/T) ↔ (1/2π)[T sinc(ωT/2)] ∗ [πδ(ω + ω₀) + πδ(ω − ω₀)] = X̂(ω),
or X̂(ω) = (T/2) sinc((ω + ω₀)T/2) + (T/2) sinc((ω − ω₀)T/2). (5.85)
Noting that X(ω) = πδ(ω + ω₀) + πδ(ω − ω₀) has its energy at frequencies ±ω₀,
estimated spectrum X̂(ω) differs in that its energy is smeared to either side
of ±ω₀ (see Fig. 5.26). This is called the time windowing effect in spectral
estimation: instead of x(t) we analyze x(t)w(t), which distorts our
spectral estimate because X(ω) is convolved with W(ω):

x(t)w(t) ↔ (1/2π) X(ω) ∗ W(ω). (5.86)
Figure 5.26. Magnitude spectrum of X̂(ω) = F{cos(ω₀t) rect(t/T)} (sinusoid multiplied
by a rectangular time window) shown near ω = ω₀, from Example 5.8. The first sidelobe,
located at ω = ω₀ + 2π(1.43/T), is 21.7% of the peak.
The windowing effect may mislead us to believe that multiple sinusoidal components are present, despite x(t) having energy only at ω₀, due to
In this example, x(t) = cos(ω₀t) and w(t) = rect(t/T).
Property #4 in Table 5.2 (p. 177).
the sidelobes of energy that appear by convolving with a sinc function,
which is W(ω) in this case.
The distortion caused by the windowing effect depends on the shape
and width of time window w(t), and on the nature of x(t) whose spectrum
is being estimated. In general, the wider w(t) is in time, the better.
In most applications, another desirable characteristic of windowing
function w(t) is that it tapers gradually to zero at both ends. We demon-
strate this concept by replacing the rectangular time window with a
triangular one when estimating the spectrum of x(t) = cos(ω₀t):
Figure 5.27. Magnitude spectrum of X̂(ω) = F{cos(ω₀t) Λ(t/T)} (sinusoid multiplied
by a triangular time window) shown near ω = ω₀, from Example 5.8. The first sidelobe,
located at ω = ω₀ + 2π(2.86/T), is 4.7% of the peak.
Different windowing functions have been proposed for spectral
estimation purposes, each having specific frequency-domain characteris-
tics (such as main lobe width, first sidelobe frequency and height) that
should be considered for each application.
In some cases, the windowing effect is negligible (when x(t)w(t) ≈ x(t)).
These windows are typically defined in the discrete-time domain so that computer-based
methods may be used to calculate the Fourier transform; our example is given in the
continuous-time domain to simplify the explanations.
5.2.6 Representing an analog signal with frequency-domain
samples
Chapter 6 introduces the theory and applications of sampling a continuous-
time signal x(t) to obtain a list of numbers, and then, if certain conditions
are met, recovering x(t) from that list. This amazing concept can also be
seen from a frequency domain perspective, which we examine here.
Consider finite-length signal x(t) that equals zero outside of the time
interval [−T₀/2, T₀/2]. Thus x(t) = x(t) rect(t/T₀). Create periodic
signal x_p(t) by convolving x(t) with impulse train δ_{T₀}(t):

x_p(t) = x(t) ∗ δ_{T₀}(t). (5.87)
Next, because of its periodicity, we may decompose x_p(t) into a
weighted sum of complex exponentials (Exponential Fourier Series),

x_p(t) = Σ_{n=−∞}^{∞} Dₙ e^{jnω₀t}, (5.88)

where ω₀ = 2π/T₀. A solution for the Fourier series coefficients Dₙ was
derived in Section 5.1.5.3 (Eq. (5.57)):

Dₙ = (1/T₀) ∫_{−T₀/2}^{T₀/2} x_p(t) e^{−jnω₀t} dt. (5.89)
Although exact solutions for coefficients Dₙ via integration may not be
possible, numerical approximation methods can make this a practical
approach. Noting that
x(t) = x_p(t) rect(t/T₀) = {Σ_{n=−∞}^{∞} Dₙ e^{jnω₀t}} rect(t/T₀), (5.90)
we see that the original finite-length signal x(t) is exactly recovered from
the frequency-domain coefficients Dₙ, which themselves are derived from
x_p(t) and hence x(t). Waveform x(t) and values Dₙ are therefore interchangeable when representing the signal's information.
Assume there are no impulse functions located at the endpoints of this interval.
Consider a special case: x(t) is such that Dₙ = 0 for all n such that |n| >
M. This means that a finite number of values can be used to represent the
infinite number of signal amplitudes x(t₁): t₁ ∈ [−T₀/2, T₀/2]. How
is this possible? The answer is: in this case x(t) does not contain the
maximum information possible for a pulse having width T₀ seconds (there
is redundancy). Therefore, only 2M + 1 coefficients (D_{−M} through D_M)
represent its information in the frequency domain.
This example has demonstrated the concept of storing a continuous-
time signal’s information in a discrete format. It is a form of frequency-
domain sampling. This is seen from Eq. (5.56), rewritten here in terms of
x(t):
Dₙ = (1/T₀) X(nω₀) = (1/T₀) X(ω)|_{ω=nω₀}. (5.91)

We see that Dₙ is spectrum X(ω) sampled at frequency ω = nω₀ and
scaled by amplitude factor 1/T₀.
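The interchangeability claimed in Eqs. (5.88)–(5.90) can be demonstrated end-to-end; a Python sketch (not the book's MATLAB®), with made-up coefficient values:

```python
import numpy as np

# Sketch of Eqs. (5.88)-(5.91): a time-limited signal synthesized from
# 2M+1 coefficients is exactly recovered by re-evaluating Eq. (5.89).
T0, M = 2.0, 2
w0 = 2 * np.pi / T0
D = {0: 0.5, 1: 0.2 - 0.1j, -1: 0.2 + 0.1j, 2: 0.05j, -2: -0.05j}

t = np.linspace(-T0/2, T0/2, 200001)
dt = t[1] - t[0]
x = sum(D[n] * np.exp(1j * n * w0 * t) for n in D).real  # real: D_{-n} = D_n*

for n in range(-M, M + 1):
    Dn = np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T0  # Eq. (5.89)
    assert abs(Dn - D[n]) < 1e-4
print("ok")
```

Here 2M + 1 = 5 numbers carry all the information in the continuous waveform, the frequency-domain sampling idea described above.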
5.3 Useful MATLAB® Code
In this section, we will concentrate on the practical task of plotting the
spectrum of signal h(t). The magnitude and phase components of this
spectrum, F{h(t)} = H(ω) = |H(ω)|e^{j∠H(ω)}, are usually
plotted separately. When h(t) is real (as is typically the case), then |H(ω)|
is even and ∠H(ω) is odd, so that no additional information is gained by
plotting these components over both positive and negative frequencies.
Thus, we normally plot only the positive-frequency sides of the magnitude
and phase spectra.
Example 5.9
Plot the magnitude and phase of H(ω) = jω/(5 + jω) over frequency
range 0 ≤ ω ≤ 20 rad/sec.
When x(t) is real then D₋ₖ = Dₖ*, which means that only Dₖ for k ≥ 0 need to be
stored.
% Example_Ch5_9
Npts = 1e4;
w = linspace(0,20,Npts);
H = (j*w)./(5 + j*w);
figure; plot(w,abs(H))
xlabel('frequency in rad/sec')
ylabel('magnitude of H(\omega)')
title('H(\omega) = j\omega/(5+j\omega)')
grid on
Figure 5.28. Plot of |H(ω)| = |jω/(5 + jω)| vs. ω, in Example 5.9.
figure; plot(w,angle(H))
xlabel('frequency in rad/sec')
ylabel('angle of H(\omega) in radians')
title('H(\omega) = j\omega/(5+j\omega)')
grid on
Figure 5.29. Plot of ∠H(ω) = ∠(jω/(5 + jω)) vs. ω, in Example 5.9.
We often plot magnitude |H(ω)| on a logarithmic scale. Because
log(0) = −∞, at frequencies where |H(ω)| = 0 this type of plot is
meaningless. At frequencies where |H(ω)| > 0, however, a logarithmic
compression makes it possible to display both very large and very small
magnitudes together on the same plot. In MATLAB®, the y-axis is
displayed logarithmically using function semilogy.
Example 5.10
Plot the magnitude of H(ω) = 10/(10 + jω) on a log scale, over frequency range 0 ≤ ω ≤ 100 rad/sec:
% Example_Ch5_10
Npts = 1e4;
w = linspace(0,100,Npts);
H = 10./(10 + j*w);
figure; semilogy(w,abs(H))
xlabel('frequency in rad/sec')
ylabel('|H(\omega)|')
title('H(\omega) = 10/(10+j\omega)')
grid on
Figure 5.30. Plot of |H(ω)| = |10/(10 + jω)| on a log scale vs. ω, from Example 5.10.
Another method commonly used to logarithmically display the magnitude-squared spectrum |H(ω)|² is to first map it to the decibel scale:
|H(ω)|² (dB) = 10 log₁₀|H(ω)|² = 20 log₁₀|H(ω)|. Note that |H(ω)|
Phase angle is always plotted using a linear scale, typically over the principal value
range −π < ∠H(ω) ≤ π radians (or −180° < ∠H(ω) ≤ 180° degrees).
and |H(ω)|² are usually dimensionless, and thus the decibel scale also
represents a dimensionless value. Decibels are not units; instead, the dB
label is only used to remind us of the logarithmic mapping that was done.
Example 5.11
Plot the magnitude-squared value of H(ω) = 10/(10 + jω) in dB, over the frequency range 0 ≤ ω ≤ 100 rad/sec:
% Example_Ch5_11
Npts = 1e4;
w = linspace(0,100,Npts);
H = 10./(10 + j*w);
figure; plot(w,20*log10(abs(H)))
xlabel('frequency in rad/sec')
ylabel('|H(\omega)|^2 (dB)')
title('H(\omega) = 10/(10+j\omega)')
grid on
Figure 5.31. Plot of |H(ω)|² = |10/(10 + jω)|² in dB vs. ω, from Example 5.11.
Another common display method is the log-log plot: both the magnitude scale (y-axis) and the frequency scale (x-axis) are compressed logarithmically when plotting. This has the effect of linearizing curves such as 1/ω vs. ω and thus making the plot easier to use for extrapolating data. (For this plot to be useful we cannot include points where either |H(ω)| = 0 or ω = 0, however.)
In MATLAB®, both x and y axes are displayed logarithmically using the function loglog. The example below also uses function logspace instead of linspace when initializing frequency values, so that the plot has sample points that are uniformly spaced along the log-frequency axis.
Example 5.12
Plot the magnitude of H(ω) = 10/(10 + jω) on a log-log scale, over the frequency range 0.1 ≤ ω ≤ 100 rad/sec:
% Example_Ch5_12
Npts = 1e4;
w = logspace(log10(0.1),log10(100),Npts);
H = 10./(10 + j*w);
figure; loglog(w,abs(H))
xlabel('frequency in rad/sec')
ylabel('|H(\omega)|')
title('H(\omega) = 10/(10+j\omega)')
grid on
Figure 5.32. Plot of |H(ω)| = |10/(10 + jω)| vs. ω using a log-log scale, from Example 5.12.
The same result is achieved by plotting |H(ω)|² in dB vs. ω on a log scale. In Example 5.13, we take advantage of the MATLAB® semilogx graphing function.
Example 5.13
Plot the magnitude-squared value of H(ω) = 10/(10 + jω), in dB, vs. ω on a log scale, over frequency range 0.1 ≤ ω ≤ 100 rad/sec:
% Example_Ch5_13
Npts = 1e4;
w = logspace(log10(0.1),log10(100),Npts);
H = 10./(10 + j*w);
figure; semilogx(w,20*log10(abs(H)))
xlabel('frequency in rad/sec')
ylabel('|H(\omega)|^2 (dB)')
title('H(\omega) = 10/(10+j\omega)')
grid on
Figure 5.33. Plot of |H(ω)|² = |10/(10 + jω)|² in dB, vs. ω on a log scale, from Example 5.13.
In practical applications engineers rarely specify frequency in rad/sec; f (Hz) = ω/2π is the frequency unit of choice. We repeat the previous example on a Hz frequency scale.
Example 5.14
Plot the magnitude-squared value of H(f) = 10/(10 + j2πf), in dB, vs. f on a log scale, over frequency range 0.01 ≤ f ≤ 20 Hz:
% Example_Ch5_14
Npts = 1e4;
f = logspace(log10(0.01),log10(20),Npts);
w = 2*pi*f;
H = 10./(10 + j*w);
figure; semilogx(f,20*log10(abs(H)))
xlabel('frequency in Hz')
ylabel('|H(f)|^2 (dB)')
title('H(f) = 10/(10+j2\pif)')
grid on
Figure 5.34. Plot of |H(f)|² = |10/(10 + j2πf)|² in dB, vs. f on a log scale, from Example 5.14.
Finally, Example 5.15 demonstrates plotting phase response in degrees
instead of in radians.
Example 5.15
Plot the phase of H(f) = 10/(10 + j2πf), in degrees, vs. f on a log scale, over frequency range 0.01 Hz ≤ f ≤ 10 kHz:
% Example_Ch5_15
Npts = 1e4;
f = logspace(log10(0.01),log10(10e3),Npts);
w = 2*pi*f;
H = 10./(10 + j*w);
figure; semilogx(f,angle(H)*180/pi)
xlabel('frequency in Hz')
ylabel('phase in degrees')
title('H(f) = 10/(10+j2\pif)')
grid on
Figure 5.35. Plot of ∠H(f) = ∠(10/(10 + j2πf)) in degrees vs. f on a log scale, in Example 5.15.
5.4 Chapter Summary and Comments
• The CTFT (Continuous-Time Fourier Transform) is defined as
F{f(t)} = ∫_{−∞}^{∞} f(t) e^{−jωt} dt = F(ω).
• Strictly speaking, F(ω) does not exist when the integral expression defining it does not converge at one or more frequencies. In some cases, we can get around this problem by including Dirac delta functions in F(ω) at the frequencies of non-convergence.
• When F(ω) exists, it contains the same information as does f(t), only in a different domain.
• The Inverse CTFT is defined as f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω.
• The CTFT of a convolution product in time may be calculated as a multiplicative product of spectra.
• The CTFT of a multiplicative product in time may be calculated as a convolution product of spectra.
• Tables of Fourier transforms and their properties make it possible to transform basic signals from time domain to frequency domain (and from frequency domain to time domain) using substitutions instead of integrations.
• When f(t) = f_p(t) is periodic, then F(ω) is discrete: that is, composed of impulses that are uniformly spaced in frequency. The areas of these impulses in F(ω) convey the information of f_p(t).
• The transformation from one period of f_p(t) to the Fourier Series coefficients Dₙ (= impulse areas in CTFT{f_p(t)} divided by 2π) is called the Exponential Fourier Series.
• The Trigonometric Fourier Series is a different form of the Exponential Fourier Series.
• When f_p(t) is real, the Trigonometric Fourier Series expression may be written using only real numbers and non-negative frequency values.
• The Compact Fourier Series is a special form of the Trigonometric Fourier Series; it is used only when f_p(t) is real.
5.5 Homework Problems

P5.1 Calculate the CTFT of each f(t) via integration, using the definition F{f(t)} = ∫_{−∞}^{∞} f(t) e^{−jωt} dt:
a) f(t) = 3δ(t)
b) f(t) = δ(t − 1) + δ(t + 1)
c) f(t) = rect(t)
d) f(t) = Λ(t)

P5.2 Calculate the ICTFT of each F(ω) via integration, using the definition F⁻¹{F(ω)} = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω:
a) F(ω) = δ(ω)
b) F(ω) = δ(ω − π/2)
c) F(ω) = δ(ω − 1) + δ(ω + 1)
d) F(ω) = rect(ω)

P5.3 Given: f(t) ⇔ F(ω), g(t) ⇔ G(ω). Using the property of linear superposition, find the Fourier transforms of each of the following, expressed in terms of F(ω) and G(ω):
a) f(t) + g(t)
b) 3f(t) − 2g(t)
c) f(t) + δ(t) ∗ g(t)
d) f(t){1 + g(t)δ(t − 1)}

P5.4 By applying linear superposition to the CTFT pairs given in Table 5.1, find Fourier transforms of each of the following:
a) 1 − u(t) + δ(t)
b) 2u(t) − δ(t) − 1
c) e^{−2t}u(t) + e^{2t}u(−t)
d) 0.5δ(t − 10) + 0.5δ(t + 10)
e) j cos(10t − π/2)
f) sin(2t) − j cos(2t)
g) (rect(t/5) − Λ(t/6))/2
h) 2 sinc(πt) + sinc²(πt/2)
i) 2δ(t) + sinc²(πt/2)
j) δ₄(t) − δ₂(t)

P5.5 By applying properties of the CTFT given in Table 5.2, find Fourier transforms of each of the following:
a) e^{−3(t−1)}u(t − 1)
b) δ(t + 1) ∗ sgn(t)
c) u(3t)
d) cos(t) cos(2t)
e) −2j e^{j(2t−1)}
f) (1/2π) sinc(t/4) cos(t/2)
g) rect(t/2) e^{−j2t}
h) sinc(πt) ∗ sinc(πt/2)
i) sinc(πt/2) sinc(πt/2)
j) sinc²(πt/4)
k) u(−t)
l) cos(πt/4) e^{jπt/4}
m) sgn(t) − sgn(t − 1)
n) p= co (8/100) dB

P5.6 By applying the CTFT pairs and CTFT properties given in Tables 5.1 and 5.2, find the inverse Fourier transforms of each of the following:
a) δ(ω)
b) δ(ω − 2)
c) (1/4) rect(2ω/π)
d) sinc(2.5ω)
e) e^{−j5ω}
f) e^{j5ω} + e^{−j5ω}
g) cos(5ω)
h) 1/ω
i) (2/jω) ∗ δ_2π(ω)
j) j2πδ(ω)

P5.7 Given: ∫_{−∞}^{∞} |f(t)|² dt = 1. What must be the area of |F(ω)|²?

P5.8 What is the energy of signal sinc(πt/2)?

P5.9 What is the energy of signal sinc(πt/2) + sinc(πt/4)?

P5.10 What is the power of periodic signal rect(t/4) ∗ δ₂₀(t)?

P5.11 What is the power of periodic signal Λ(t/4) ∗ δ₂₀(t)?

P5.12 Calculate the Exponential Fourier Series coefficients Dₙ of each f_p(t) given below using Dₙ = (1/T₀) ∫_{T₀} f_p(t) e^{−jnω₀t} dt (your first task will be to determine T₀):
a) f_p(t) = δ₄(t)
b) f_p(t) = δ₄(t − 2)
c) f_p(t) = δ(t − 3) ∗ δ₄(t)
d) f_p(t) = rect(t) ∗ δ₄(t)

P5.13 Calculate the f_p(t) corresponding to each set of Fourier Series coefficients Dₙ given below, using the definition f_p(t) = Σ_{n=−∞}^{∞} Dₙ e^{jnω₀t}, where T₀ = 3 seconds:
a) D₁ = 1, all other Dₙ = 0
b) Dₙ = sinc(2πn/3)
c) Dₙ = sinc²(πn/4)
d) Dₙ = 1/2, ∀n

P5.14 Given: H(ω) = 2(e^{jω} − 0.95)/(e^{jω} − 0.90). Plot |H(ω)|² over frequency range 0 ≤ ω ≤ 2π, using a dB vertical scale.

Chapter 6
Sampling Theory and Practice
6.1 Theory
6.1.1 Sampling a continuous-time signal
A list of numbers specifying signal values measured at specific times
defines “samples” of the signal, and obtaining these samples from the
continuous-time signal is called “sampling.” Sampling theory is the
mathematical description of the effects of sampling, in both the time and
frequency domains. Thus, the relationship between continuous-time and
discrete-time signals may be described using sampling theory. Converting
an analog signal to a digital signal not only involves sampling in time, but
also rounding amplitudes to one of a finite number of levels; this last step
is called amplitude quantization, which we discuss later in this chapter.
Sampling and quantization operations, taken together, form the basis for
Analog-to-Digital Conversion, which is usually implemented using speci-
alized electronic circuits.
It may seem odd that we would want to convert a continuous-time
waveform, which specifies a signal’s values at all times, to a list of
samples taken at discrete times. But the reason we do this is to make
possible more accurate storage/recovery, transmission and processing of
the information conveyed by that signal. And, when certain conditions
hold, sampling theory states that we can exactly reconstruct the signal’s
continuous-time waveform from only the knowledge of its samples.
Therefore, under those conditions no information is lost because of
sampling.
To begin our theoretical discussion, let us assume that samples are
taken at uniformly-spaced instants of time and that one of those times is
t= 0. Under these assumptions it is convenient to model the act of
sampling as multiplication with a train of impulses having spacing T; sec
on the time scale:
x(t) × δ_Ts(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nT_s)
= Σ_{n=−∞}^{∞} x(nT_s) δ(t − nT_s).    (6.1)
We see that the area of the impulse located at time nT_s sec is x(nT_s). Clearly the list of values corresponding to x(t)|_{t=nT_s}, n ∈ Z, is the list of areas of the impulses in x(t)δ_Ts(t).* Therefore the sifting (also called
sampling) property of the Dirac impulse function makes it possible to
extract the values of x(t) at specific times and preserve them as areas of
impulses. It should be noted that sampling a signal by multiplying with
an impulse train is only a convenient mathematical model for us to use, for
impulse functions and impulse trains do not exist in real life.
Instead of writing “x(nT,)” to describe the list of samples, we may
instead write “x(n).” This does not necessarily mean that T; = 1 sec, just
that we now refer to each sample using its index value n and the time that
sample was taken is implied to be nT; sec. The entire sequence of sampled
amplitude values of x(t) is labelled x(n), which is a list of values and
each value has a specific time association.
Let us consider the frequency-domain effects of sampling. Recall from
Chapter 5 that the Fourier Transform of an impulse train is another impulse
train:
δ_Ts(t) ⇔ ω_s δ_ωs(ω)   (ω_s = 2π/T_s).    (6.2)
Also, recall that multiplication in time gives convolution in the
frequency domain:
* The areas of the impulses in δ_Ts(t) are all 1.
x(t)y(t) ⇔ (1/2π)(X(ω) ∗ Y(ω)).    (6.3)
It then follows that
F{x(t) × δ_Ts(t)} = (1/2π)(F{x(t)} ∗ F{δ_Ts(t)})
= (1/2π)(X(ω) ∗ ω_s δ_ωs(ω))
= (1/T_s)(X(ω) ∗ δ_ωs(ω)) = (1/T_s) X(ω) ∗ Σ_{k=−∞}^{∞} δ(ω − kω_s)
= (1/T_s) Σ_{k=−∞}^{∞} X(ω − kω_s), or
x(t) × δ_Ts(t) ⇔ (1/T_s) Σ_{k=−∞}^{∞} X(ω − kω_s)   (ω_s = 2π/T_s).    (6.4)
The infinitely many frequency-shifted copies of X(w) add together to
give a periodic spectrum, having period w, rad/sec. Thus, sampling in
time produces periodicity in frequency.
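This periodicity is easy to confirm numerically. The following sketch (Python/NumPy; the decaying-pulse samples and T_s = 0.5 sec are illustrative assumptions) evaluates the spectrum of the sampled signal, Σₙ x(nT_s) e^{−jωnT_s}, at a frequency ω₀ and at ω₀ + ω_s, and finds the two values equal:

```python
import numpy as np

Ts = 0.5                       # sampling interval in seconds (assumed)
ws = 2 * np.pi / Ts            # sampling frequency in rad/sec
n = np.arange(-50, 51)         # sample indices
x = np.exp(-np.abs(n * Ts))    # samples x(n*Ts) of a decaying pulse (assumed)

def Xs(w):
    # Spectrum of x(t)*delta_Ts(t): the CTFT integral collapses to a sum,
    # Xs(w) = sum_n x(n*Ts) * exp(-j*w*n*Ts)
    return np.sum(x * np.exp(-1j * w * n * Ts))

w0 = 1.3                       # an arbitrary test frequency
print(np.isclose(Xs(w0), Xs(w0 + ws)))   # True: the spectrum repeats every ws
```

The repetition is exact because shifting ω by ω_s multiplies each term by e^{−jω_s nT_s} = e^{−j2πn} = 1.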
6.1.2 Relation between CTFT and DTFT based on sampling
We now have sufficient theoretical background to mathematically relate
continuous-time and discrete-time Fourier transforms using sampling
theory. Given a continuous-time Fourier transform pair f(t) @ F(w),
sample f(t) at a rate of one sample per second (T, = 1 sec):
f(t)δ₁(t) = f(t) Σ_{n=−∞}^{∞} δ(t − n·1)
= Σ_{n=−∞}^{∞} f(t)δ(t − n) = Σ_{n=−∞}^{∞} f(n)δ(t − n).    (6.5)
Normally at this point we would begin calling the list of impulse areas the “sequence f(n)” and switch over to discrete signal analysis. However, for the purpose at hand, we choose to view Σ_{n=−∞}^{∞} f(n)δ(t − n) as a continuous-time signal and find its Fourier transform:
F{Σ_{n=−∞}^{∞} f(n)δ(t − n)} = Σ_{n=−∞}^{∞} F{f(n)δ(t − n)}
= Σ_{n=−∞}^{∞} f(n) F{δ(t − n)} = Σ_{n=−∞}^{∞} f(n) e^{−jωn}, or
f(t)δ₁(t) ⇔ Σ_{n=−∞}^{∞} f(n) e^{−jωn}.    (6.6)
You will notice that the summation expression Σ_{n=−∞}^{∞} f(n) e^{−jωn} is nothing else but the discrete-time Fourier transform, as defined in Ch. 4! In other words, the spectrum of sequence f(n) may be thought of as the spectrum of f(t)δ₁(t), where f(t) is a continuous-time signal that, when sampled once per second, gives us the samples of sequence f(n).
Because sampling in time was shown to result in a periodic spectrum, in this case the periodicity must also exist. The period in frequency is ω_s = 2π/T_s rad/sec = 2π rad/sec (T_s = 1 sec), which explains why in Ch. 4 the discrete-time Fourier transform was shown to always give a periodic spectrum F(e^{jω}) having period 2π. The discrete-time Fourier
transform, or DTFT, may be thought of as the continuous-time Fourier
transform of an analog waveform that had been sampled at a rate of one
sample per second. Conversely, the discrete-time Fourier transform sum-
mation may be used to calculate the spectra of sampled continuous-time
signals, because the integral of a weighted impulse train simplifies to a
summation of terms.
If we sample f(t) using T_s ≠ 1 sec, may we still find F{f(t)δ_Ts(t)} using the DTFT? Yes; it is easily shown:
F{f(t)δ_Ts(t)} = Σ_{n=−∞}^{∞} f(nT_s) e^{−jωnT_s}
= Σ_{n=−∞}^{∞} g(n) e^{−j(ωT_s)n} = G(e^{jωT_s}),    (6.7)
where sequence g(n) is the result of sampling f(t) every T_s seconds. The resulting spectrum is DTFT{g(n)} scaled in frequency so that its period is 2π · (1/T_s) = 2π · (ω_s/2π) = ω_s rad/sec. To accomplish this, we simply re-label the frequency axis accordingly.
Figure 6.1 shows relationships between the continuous-time and discrete-time Fourier transforms that are based on sampling. Further, note that sampling theory also plays a role when the Fourier transform of a periodic signal is calculated. This is because a periodic signal may be modeled as a pulse that is convolved with an impulse train, and the corresponding frequency-domain effect is a multiplication of the pulse spectrum by an impulse train: this is sampling in the frequency domain!
[Figure 6.1 diagram: the central CTFT pair f(t) ⇔ F(ω) is linked to the Fourier Series f_p(t) ⇔ D_k (with f_p(t) ⇔ 2π Σ_k D_k δ(ω − kω₀), ω₀ = 2π/T₀, assuming f(t) = 0 outside one period) by periodic repetition in time; to the Discrete-Time Fourier Transform f(n) ⇔ F(e^{jω}) by multiplication with δ₁(t) in time, i.e. convolution with 2πδ_2π(ω) in frequency; and, combining both samplings (assuming f(n) = 0 outside 0 ≤ n ≤ N − 1, ω₀ = 2π/N), to the DFT (and FFT) pair f_p(n) ⇔ F_p(k).]
Figure 6.1. Sampling-based Fourier transform relationships.
6.1.3 Recovering a continuous-time signal from its samples
6.1.3.1 Filtering basics
Chapter 8 will formally introduce this concept, but for now here are the
basics: multiplying the spectrum of a signal by a function that allows some
frequency components to pass while blocking other frequency components
is called filtering. An ideal filter is one that passes the desired components
without any change (multiplies them by 1), and eliminates the undesired
components (multiplies them by 0). Hence, multiplication by a rect
function of frequency is an ideal filtering operation. Unfortunately, this ideal may not be achieved in practice,ᵇ so that only approximations to ideal filtering may be implemented. An example of a non-ideal filter frequency response is a bell-shaped pulse of peak height 1 at frequency ω₀. In the frequency range near ω₀ the bell curve is approximately equal to 1; this is called the passband of the filter. In the frequency ranges on either side of, and far away from, ω₀ the bell curve is approximately equal to 0; this is called the stopband of the filter. Between those two frequency ranges we have the transition band, where (according to user definitionᶜ) the filter response is neither effectively passing nor effectively stopping the signal.
Finally, a filter’s bandwidth is the width of its passband (as measured only
over positive frequencies).
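For a concrete, hypothetical case (Python/NumPy sketch), the passband edge of the response H(ω) = 10/(10 + jω) used in the Chapter 5 examples can be located numerically; the threshold a = 1/√2, the common half-power choice, is our own assumption:

```python
import numpy as np

# Lowpass response from the Chapter 5 examples: H(w) = 10/(10 + jw)
w = np.linspace(0, 100, 100001)      # frequency grid, step 0.001 rad/sec
H = 10.0 / (10.0 + 1j * w)

a = 1 / np.sqrt(2)                   # passband threshold (half-power choice)
passband = w[np.abs(H) > a]          # frequencies where |H(w)| > a
edge = passband.max()                # passband edge; also the bandwidth here

print(f"passband edge ~ {edge:.3f} rad/sec")   # ~10, since |H(10)| = 1/sqrt(2)
```

With this choice of a, the filter's bandwidth (measured over positive frequencies) is about 10 rad/sec.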
6.1.3.2 Frequency domain perspective
Once a continuous-time signal f(t) is sampled to give sequence f(n),
these samples may be processed using numerical methods to give some
useful result,ᵈ but in many cases it is desirable to recover the original signal
ᵇ Since, as explained in Ch. 8, such a system is not causal and has infinite complexity.
ᶜ The user will specify constants {a, b} so that the passband is defined as the frequency range over which |H(ω)| > a (a is just less than 1), and the stopband is defined as the frequency range over which |H(ω)| < b (b is just greater than 0).
ᵈ As an example, to identify a speaker’s identity from an analog recording of his/her speech.
by reconstructing it from its samples.ᵉ Let us see under what conditions
the original analog signal may be reconstructed from its uniformly-spaced
samples in time.
Assume that F(ω) = F{f(t)} equals zero outside the frequency range −ω_max < ω < ω_max. Even if this assumption is not exactly true, when |F(ω)|² is negligible for |ω| > ω_max we say that signal f(t) is band-limited to frequencies below ω_max (referring to positive frequencies), and then the following is, for all practical purposes, true. When f(t) is sampled in time to give f_s(t) = f(t)δ_Ts(t), the resulting spectrum is made periodic:
f_s(t) ⇔ F_s(ω) = (1/2π) F(ω) ∗ ω_s δ_ωs(ω)
= (1/T_s) Σ_{k=−∞}^{∞} F(ω − kω_s)   (ω_s = 2π/T_s).    (6.8)
Figure 6.2. Signal spectrum (a) before and (b) after sampling.
In other words, sampling f(t) every T_s seconds creates copies of the original (baseband) spectrum F(ω) and places them at every integer multiple of ω_s in the frequency domain (and scales these copies by 1/T_s,
ᵉ This is the case in mobile telephony, where the original analog speech signal is sampled, processed to reduce the number of bits per second needed to be transmitted, and then reconstructed to once again be an analog speech signal at the receiver.
but that is not so important). Figure 6.2 shows the spectrum of f(t) before and after sampling (where ω_s > 2ω_max). Note that, in this plot of spectrum F_s(ω), there is no overlap between copies of the original spectrum F(ω) after they were shifted to multiples of ω_s and added together. This makes it easy to recover the original signal f(t) from its sampled version f_s(t): simply pass f_s(t) through a “lowpass” filter that blocks all but the copy of F(ω) that is centered at ω = 0 (that is, the original baseband spectrumᶠ). If ideal, the lowpass filter would need to have a bandwidth of anywhere between ω_max and ω_s − ω_max (and a gain factor of T_s) to exactly recover f(t) from f_s(t), as shown in Fig. 6.3.
[Figure 6.3 shows (a) the ideal reconstruction filter H(ω) = T_s rect(ω/2ω_max), whose bandwidth may range from ω_max (minimum LPF BW) to ω_s − ω_max (maximum LPF BW); (b) the sampled spectrum F_s(ω) = F{f(t)δ_Ts(t)}; and (c) the recovered spectrum F(ω) = H(ω)F_s(ω).]
Figure 6.3. Ideal lowpass filtering to recover F(ω) from F_s(ω).
ᶠ Baseband is a name given to the frequency range of the original spectrum before it was processed (in this case, by sampling).
This lowpass filter is called the reconstruction filter because it reconstructs the original f(t) from its sampled version f_s(t).ᵍ In Fig. 6.3 the reconstruction filter is assumed to be ideal. Because there is a gap between adjacent shifted copies of F(ω) within F_s(ω), the reconstruction lowpass filter frequency response need not be ideal so long as its passband gain is constant and its stopband gain is zero. An example of this is shown in Fig. 6.4:
[Figure 6.4 shows (a) a non-ideal lowpass filter response H(ω) overlaid on F_s(ω), and (b) the recovered spectrum F(ω) = H(ω)F_s(ω), with the filter’s passband covering |ω| < ω_max, its stopbands beginning at |ω| = ω_s − ω_max, and its transition bands in between.]
Figure 6.4. Non-ideal lowpass filtering to recover F(ω) from F_s(ω).
Consider the case that the sampling frequency w, value is adjustable.
When using an ideal reconstruction LPF, what is the minimum value of
ws for which we could still recover the original signal f(t) from its
ᵍ Recall that f_s(t) is an impulse train: the area of each impulse specifies the value of f(t) at that impulse location in time. This is our mathematical way of representing what happens when the information of f(t) is retained at only the sampling time instants. In practice the sample values are stored as binary numerical codes. So, to reconstruct f(t), what needs to be done is to first synthesize an approximation to the impulse train signal f_s(t) from the list of binary numerical codes, and then pass f_s(t) through the reconstruction lowpass filter.
sampled version f_s(t)? The answer depends on the bandwidth of f(t). When the highest frequency component within f(t) is ω_max, then it must be sampled at a frequency of at least ω_s = 2ω_max, or otherwise destructive overlapping of spectra will occur within F_s(ω). The spectrum of f_s(t) in the case that ω_s is at its lowest possible value to permit exact recovery of f(t) is shown in Fig. 6.5.
Figure 6.5. Ideal lowpass filtering of F_s(ω) to recover F(ω), when ω_s = 2ω_max.
The minimum sampling frequency for which exact signal recovery is possible, ω_s = 2ω_max, is called the Nyquist Sampling Rate. Should ω_s be selected to be lower than the Nyquist sampling rate, destructive overlapping of adjacent spectra would take place; this type of distortion is called aliasing.
Figure 6.6. The case where ω_s < 2ω_max, producing aliasing distortion.
So, we should take the bandwidth of f(t) (= ω_max) into account when selecting ω_s. Choosing ω_s < 2ω_max results in aliasing distortion, and this makes it impossible to recover f(t) from f_s(t).ʰ Too few samples per second (= ω_s/2π) are taken, which is called undersampling the signal f(t). On the other hand, choosing ω_s > 2ω_max will prevent aliasing but will also produce a sequence of samples that contains redundancy. This is called the oversampled case (as shown in Figs. 6.2 and 6.3). The cost of storing and transmitting these samples grows proportionally to their number, so in that sense oversampling a signal is wasteful. Sampling at the Nyquist rate ω_s = 2ω_max lies on the border between undersampling and oversampling, and is often referred to as critical sampling. Note that when critical sampling is used the spectral copies of F(ω) have no gaps between them, and an ideal lowpass filter must be used (because of its zero-width transition band) to recover the original signal f(t).
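The rate bookkeeping itself is simple arithmetic. As a sketch (Python; the 4 kHz band limit is an assumed example, not a value from the text):

```python
import math

f_max = 4000.0                  # assumed band limit of the signal, in Hz
w_max = 2 * math.pi * f_max     # the same band limit in rad/sec

f_nyquist = 2 * f_max           # Nyquist sampling rate, in samples/sec
w_nyquist = 2 * w_max           # Nyquist sampling rate, in rad/sec

fs = 10000.0                    # choosing fs > f_nyquist: oversampling
print(f_nyquist, fs > f_nyquist)   # 8000.0 True
```

Any fs below 8000 samples/sec here would undersample the signal; fs = 8000 would critically sample it.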
Finally, it is worth explaining the significance of the term “aliasing.”
When sampling sinusoids at higher than the Nyquist rate there is never any
ambiguity as to the frequency of the original waveform based on its sample
values: the sinusoid may be uniquely and exactly reconstructed from its
samples. But when a sinusoid is sampled below the Nyquist rate we can
no longer be exactly certain as to its original frequency. This is
demonstrated in Fig. 6.7: two sinusoids at different frequencies have the
same sample values. One of these sinusoids is undersampled and the other
is properly sampled:
Figure 6.7. Sine waves at different frequencies can give identical samples if at least one of
them is undersampled.
The frequency domain explains this phenomenon best. Figure 6.8 shows: (a) the spectrum of x(t) = cos(3t) after sampling at ω_s = 8 rad/sec (oversampling), and (b) the spectrum of y(t) = cos(5t) after sampling at ω_s = 8 rad/sec (undersampling). The two spectra are identical! Due to aliasing, which means that because of sampling there was overlapping of adjacent frequency-shifted copies of Y(ω) to give Y_s(ω), the illusion of a 3 rad/sec sinusoid was created even though the original frequency was 5 rad/sec. The sinusoid at frequency 5 takes on the alias of being at 3, and there is no way to distinguish between the two cases based on their sample values (or spectra).
ʰ Except in special circumstances, which we discuss later in Section 6.1.6.
[Figure 6.8 shows (a) the spectrum of x_s(t) = cos(3t)·δ_{2π/8}(t), with spectrum copies centered at ω = 0, ±8, ±16 rad/sec, and (b) the spectrum of y_s(t) = cos(5t)·δ_{2π/8}(t), whose original spectrum copy is centered at ω = 0; the two plotted spectra are identical.]
Figure 6.8. Identical spectra result when cos(3t) and cos(5t) are sampled at ω_s = 8 rad/sec (T_s = 2π/8 sec), which demonstrates “aliasing.”
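The claim behind Fig. 6.8 is easy to verify directly from sample values (a Python/NumPy sketch rather than MATLAB): since 5T_s = 2π − 3T_s when T_s = 2π/8 sec, we have cos(5nT_s) = cos(2πn − 3nT_s) = cos(3nT_s) for every integer n:

```python
import numpy as np

ws = 8.0                   # sampling frequency in rad/sec (as in Fig. 6.8)
Ts = 2 * np.pi / ws        # sampling interval
n = np.arange(-20, 21)     # sample indices
t = n * Ts                 # sampling instants

x = np.cos(3 * t)          # oversampled:  3 rad/sec < ws/2 = 4 rad/sec
y = np.cos(5 * t)          # undersampled: 5 rad/sec > ws/2 = 4 rad/sec

print(np.allclose(x, y))   # True: identical samples, hence identical spectra
```

No processing of the samples can tell the two sinusoids apart once the 5 rad/sec one has been undersampled.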
6.1.3.3 Time domain perspective
It is also instructive to analyze signal sampling and reconstruction in the
time domain. Take, for example, the case of sampling a pure sinusoid.
How many samples per period correspond to the Nyquist rate? The answer
is 2 samples/period: T₀/T_s = ω_s/ω₀ = 2ω₀/ω₀ = 2.ⁱ Another time-
domain item of interest is the impulse response of the reconstruction filter
that is required to reconstruct an analog signal from its samples. That is
the topic of the following paragraphs.
Assume we are sampling signal x(t) at exactly the Nyquist rate ω_s.
That is, the sampling frequency is twice the highest frequency component
in X(w). Our mathematical model for the result of sampling is the impulse
train signal x_s(t). To recover x(t) from x_s(t) we pass the latter through an ideal lowpass filter; this LPF must have bandwidth ω_s/2 (as is customary, we measure bandwidth over positive frequencies only), passband gain T_s, and stopband gain 0. The impulse response of this ideal
lowpass reconstruction filter is therefore
h(t) = sinc(ω_s t/2) ⇔ T_s rect(ω/ω_s) = H(ω).    (6.9)
The reconstructed signal in the frequency domain is determined by the multiplicative product X(ω) = X_s(ω)H(ω), and the same information in the time domain may be calculated using the convolution product x(t) = x_s(t) ∗ h(t) = x_s(t) ∗ sinc(ω_s t/2). Expressing x_s(t) = x(t)δ_Ts(t), an expression for the reconstructed signal x(t) in terms of its own samples is:
x(t) = x(t)δ_Ts(t) ∗ sinc(ω_s t/2)
ⁱ When sinusoid A cos(ω₀t + θ) is sampled at the Nyquist rate of 2 samples per period (ω_s = 2ω₀), there is still some ambiguity that prevents us from reconstructing the original waveform from its samples. All samples will have the same magnitude (in the range between 0 and A, depending on the value of θ) and toggle in polarity from one sample to the next. In either case, there are infinitely many combinations of {A, θ} that could have produced such results. Although this dilemma occurs anytime we sample a signal whose spectrum has an impulse at the highest frequency component, it is of theoretical interest only. In practice one would sample the signal above the Nyquist rate to simplify reconstruction lowpass filtering.
= {Σ_{n=−∞}^{∞} x(nT_s) δ(t − nT_s)} ∗ sinc(ω_s t/2)
= Σ_{n=−∞}^{∞} x(nT_s) sinc(ω_s(t − nT_s)/2).    (6.10)
This equation describes the sum of infinitely many sinc functions, delayed by integer multiples of T_s sec, scaled by the sampled values of x(t) at those delay times, and added together to reconstruct the original waveform x(t). The sinc functions may be thought of as building-block waveforms whose weighted sum is the original signal (the weights being samples of the original signal). In the case described here, where Nyquist sampling is used, each of the sinc functions used to reconstruct x(t) has zero-crossings at every sampling time except for the one where it is located. For this reason, as shown in Fig. 6.9, only one sinc function determines the value of x(t) at t = kT_s, but infinitely many sinc functions add together to determine the value of x(t) at t ≠ kT_s.ʲ
Figure 6.9. Reconstructing x(t) as the sum of weighted sinc functions.
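Equation (6.10) can be exercised numerically. In this sketch (Python/NumPy; the bandlimited test signal x(t) = sinc²(πt/2), the sampling interval T_s = 0.5 sec, and the truncation to |n| ≤ 200 are all our own choices, not values from the text), the signal's value at an off-sample time is rebuilt from its samples alone:

```python
import numpy as np

# Bandlimited test signal (assumed): x(t) = [sin(pi*t/2)/(pi*t/2)]^2,
# whose spectrum is a triangle limited to |w| < pi rad/sec.
x = lambda t: np.sinc(t / 2) ** 2     # np.sinc(u) = sin(pi*u)/(pi*u)

Ts = 0.5                      # sampling interval; ws = 4*pi > 2*w_max = 2*pi
n = np.arange(-200, 201)      # truncation of the infinite sum in Eq. (6.10)

def reconstruct(t):
    # x(t) = sum_n x(n*Ts) * sinc(ws*(t - n*Ts)/2), per Eq. (6.10); with
    # numpy's normalized sinc this factor is np.sinc((t - n*Ts)/Ts).
    return np.sum(x(n * Ts) * np.sinc((t - n * Ts) / Ts))

t0 = 0.3                      # an off-sample time instant
print(reconstruct(t0), x(t0))   # nearly equal
```

Because this x(t) decays as 1/t², truncating the sum to |n| ≤ 200 leaves only a tiny error; a non-decaying signal such as a sinusoid would need far more terms for comparable accuracy.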
If we were sampling above the Nyquist rate and reconstructing with an
ideal LPF then the zero-crossings of the sinc pulses would not line up as
ʲ Stated another way: in this example, to exactly recover x(t) at t ≠ nT_s we need the knowledge of all sample values, both forward and backward in time.
nicely with the sample times. In any case, it is important to note that
between one and infinitely many samples are needed to reconstruct the
waveform at any time.
There are some aspects of the previous discussion that are of academic interest only. For example, the ideal lowpass filter cannot be practically implemented because it is not causal (sinc(ω_s t/2) ≠ 0 for t < 0). Also, because the ideal lowpass filter has a rect pulse as its frequency response, and this rect pulse has no transition band between passband and stopband, realizing it using analog circuitry would require infinite cost.ᵏ One way to approximate the frequency-domain filtering characteristics of an ideal lowpass filter response for accurately recovering a signal from its samples is to use a filter having the following impulse response:
ĥ_LPF(t) = sinc(ω_s(t − kT_s)/2) rect((t − kT_s)/2kT_s)   (k ∈ Z).    (6.11)
The ideal h_LPF(t) and its approximation ĥ_LPF(t) in Figs. 6.10 and 6.11 clearly show us that the latter is a truncated and delayed version of the former. Also shown are their magnitude frequency responses. Truncating the sinc pulse at a zero crossing, by forcing the rectangular pulse width to be an integer multiple of T_s, helps minimize spectral distortion due to the truncation.ˡ The kT_s sec delay was introduced to make ĥ_LPF(t) causal and therefore possible to realize; delay has no effect on magnitude frequency response.ᵐ
ᵏ A rational H(s) would need to have infinite order (Ch. 8).
ˡ Setting a waveform to zero outside of some range by multiplying it with a finite-width waveform is called windowing. Here the rect pulse is called the time window function.
ᵐ Although introducing extra delay is undesirable in some applications, e.g., two-way communications.
Figure 6.10. Ideal LPF impulse response (sinc), and its magnitude spectrum (rect).
Figure 6.11. Non-ideal LPF impulse response (truncated, delayed sinc), and its magnitude spectrum (rect pulse with overshoot).
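Equation (6.11) can also be sketched numerically (Python/NumPy; the values ω_s = 2π rad/sec and k = 20 are illustrative choices). The truncated sinc is delayed by kT_s so that it is zero for t < 0, and its area, which equals the approximate filter's DC gain H(0), stays close to the ideal value T_s:

```python
import numpy as np

ws = 2 * np.pi           # sampling frequency in rad/sec (illustrative choice)
Ts = 2 * np.pi / ws      # sampling interval; here Ts = 1 sec
k = 20                   # half-width of the time window, in sampling intervals

# Eq. (6.11): h(t) = sinc(ws*(t - k*Ts)/2) * rect((t - k*Ts)/(2*k*Ts)),
# i.e. a sinc delayed by k*Ts and truncated to the interval [0, 2*k*Ts].
t = np.linspace(0, 2 * k * Ts, 200001)    # support of the rect window
arg = ws * (t - k * Ts) / 2               # argument of the delayed sinc
h = np.sinc(arg / np.pi)                  # sin(x)/x, via np.sinc(x/pi)

# DC gain H(0) = integral of h(t); the ideal filter's DC gain is Ts.
dc_gain = np.sum(h) * (t[1] - t[0])
print(dc_gain)   # close to Ts = 1; truncation leaves it about 1% low
```

Increasing k widens the window, moves the DC gain closer to T_s, and reduces the passband/stopband ripple, at the cost of a longer delay.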
6.1.4 Oversampling to simplify reconstruction filtering
We have seen that Nyquist-rate sampling requires an ideal lowpass filter
to separate the baseband image of the original analog signal spectrum from
its copies. Because an ideal lowpass filter cannot be realized in practice, what is to be done? If we insist on sampling at the Nyquist rate, which is the
lowest possible frequency for which aliasing does not occur and the
original spectral copies have no gaps between them, then reconstruction
lowpass filtering using a practical filter will usually have two
consequences: (1) the filter will attenuate some desired signal components
at the high-frequency end of the passband, and (2) the filter will not fully
eliminate some undesired adjacent spectrum components at the low-
frequency end of the stopband. This situation is depicted in Fig. 6.12.
Figure 6.12. The spectral consequences of reconstructing a Nyquist-rate-sampled signal using a non-ideal lowpass filter.
To avoid the distortion due to imperfect filtering as shown in the
previous figure, there is a simple solution: sample the signal above the
Nyquist rate. This situation was previously depicted in Fig. 6.4. By
putting a gap between adjacent spectra, we make room for the non-ideal
lowpass filter transition band.
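The room created by oversampling is easy to quantify: the gap between the baseband copy and its nearest neighbor is ω_s − 2ω_max, which is exactly the transition band available to a practical reconstruction filter. A minimal sketch (the sampling and signal rates below are illustrative, not from the text):

```python
# Transition band available to a non-ideal reconstruction lowpass filter.
# Adjacent spectral copies are separated by ws - 2*wmax rad/s; at the
# Nyquist rate this gap is zero, which is why an ideal filter is needed.

def transition_band(ws, wmax):
    """Gap (rad/s) between the baseband edge wmax and the nearest copy."""
    if ws < 2 * wmax:
        raise ValueError("undersampled: spectral copies overlap (aliasing)")
    return ws - 2 * wmax

two_pi = 6.283185307179586
print(transition_band(two_pi * 48e3, two_pi * 20e3))  # 2*pi*8000 rad/s of room
print(transition_band(two_pi * 40e3, two_pi * 20e3))  # 0.0 -- Nyquist rate
```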
6.1.5 Eliminating aliasing distortion
As defined previously, aliasing distortion is the overlapping of adjacent
spectral copies when a continuous-time signal is undersampled (ω_s <
2ω_max). Typically, this distortion irrecoverably corrupts the baseband
spectrum so that it is impossible to recover the original signal from its
samples.

When the signal energy at ω_s/2 or above is negligible, then in some
applications the aliased components may not cause too much trouble. For
example, human speech signals have nearly all of their energy below 10^4
Hz (ω = 2π·10^4 rad/sec) during voiced (vowel) sounds, so choosing
ω_s = 2(2π·10^4) rad/sec results in very little aliasing. Higher-frequency
energy does occur during unvoiced speech such as the "sh" sound;
however, even when aliasing occurs it is not very objectionable, due to the
random nature of the waveform and how it is perceived by a human
listener.^n
The best way to avoid aliasing distortion is to sample at or above the
Nyquist rate (twice the highest frequency component in the signal),
although this comes at the price of having to deal with more samples per
second.
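The effect of undersampling a single sinusoid can be demonstrated directly: its samples are indistinguishable from those of a lower-frequency "alias" folded into the band [0, ω_s/2]. A minimal sketch (the frequencies below are illustrative):

```python
import numpy as np

# Sketch: where an undersampled sinusoid reappears after sampling.
# A cosine at f0 Hz sampled at fs < 2*f0 is indistinguishable from one
# at the alias frequency folded into [0, fs/2].

def alias_frequency(f0, fs):
    f = f0 % fs                # shift by an integer number of fs
    return min(f, fs - f)      # fold into the baseband [0, fs/2]

fs, f0 = 8000.0, 6000.0        # illustrative values, not from the text
fa = alias_frequency(f0, fs)   # 2000.0 Hz

# The two sample sequences really are identical:
n = np.arange(16)
assert np.allclose(np.cos(2*np.pi*f0*n/fs), np.cos(2*np.pi*fa*n/fs))
```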
6.1.5.1 Anti-alias post-filtering
One way of getting rid of aliased frequency components is to filter them
out during the reconstruction filtering step. This is shown in Fig. 6.13 for
the case that an ideal lowpass filter is used. When the original signal has
highest frequency component ω_max and is sampled at ω_s, then the aliased
components will fall as low as ω_s − ω_max in frequency. So, if it is
necessary to undersample the original signal, which results in aliasing, the
aliased components can be filtered out when reconstructing the signal from its
samples using an ideal lowpass filter having bandwidth ω_s − ω_max
instead of the usual ω_max. This blocks the aliased frequency components,
but it also blocks the original signal energy in the frequency range
[ω_s − ω_max, ω_max]. If the reconstruction lowpass filter used is not ideal
and has a wide transition band, then this method loses its effectiveness.

Figure 6.13. Post-filtering to remove aliasing distortion.

^n We must be careful, though, since aliasing can bring high-energy sounds at frequencies above the human hearing range into a frequency range that is audible.
6.1.5.2 Anti-alias pre-filtering
A better way to deal with aliasing is to eliminate it before it occurs. That
is, before sampling a signal at frequency ω_s, the signal is first lowpass
filtered to have a highest frequency component of ω_s/2. This eliminates
some of the original signal frequencies, but not to the extent as in the
previously mentioned post-filtering technique, since ω_s/2 > ω_s − ω_max
when undersampling.^o Figure 6.14 demonstrates the effects of anti-aliasing
pre-filtering. The pre-filtering technique comes at the cost of
requiring that two lowpass filters are used in the sampling/reconstruction
process instead of just one.

Figure 6.14. Pre-filtering to prevent aliasing distortion.
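In practice the anti-alias filter ahead of the sampler is analog, but its behavior can be sketched digitally with a windowed-sinc FIR lowpass, a crude stand-in chosen here for illustration; the rates, cutoff, and tap count are assumptions, not values from the text:

```python
import numpy as np

# Sketch of anti-alias pre-filtering (digital stand-in for the analog
# filter that would precede the sampler): lowpass the signal to ws/2
# before sampling at ws. Rates and tap count below are illustrative.

def windowed_sinc_lpf(cutoff_hz, fs_hz, ntaps=101):
    """FIR lowpass taps: ideal sinc response, Hamming-tapered."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = (2 * cutoff_hz / fs_hz) * np.sinc(2 * cutoff_hz / fs_hz * n)
    h *= np.hamming(ntaps)        # taper the truncation (reduces ripple)
    return h / h.sum()            # normalize to unity gain at DC

h = windowed_sinc_lpf(cutoff_hz=500.0, fs_hz=8000.0)
H = np.abs(np.fft.rfft(h, 8192))  # magnitude response on a fine grid
# A 100 Hz component passes nearly unchanged; 3000 Hz falls deep in the
# stopband, so it cannot alias after sampling.
```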
6.1.6 Sampling bandpass signals
When a signal is bandpass in nature (only contains frequency components
over some narrow frequency range [ω_1, ω_2]), then it is possible to
undersample it and still recover the signal from its samples (using a band-
pass filter instead of a lowpass filter), as shown in Fig. 6.15. This is
relevant for software-defined radios, where the goal is to have a minimum
of analog circuitry prior to sampling a high-frequency bandpass radio
signal, so that radio functionality may easily be adapted to signal and
interference conditions by executing one of many different software
algorithms. In this scheme, because no destructive overlapping of the
frequency-shifted spectra occurs, it is possible to recover a message signal
from the sampled bandpass signal using digital signal processing algo-
rithms.^p

Figure 6.15. An example of undersampling a bandpass signal without destructive overlap-adding of adjacent spectral copies.

^o When undersampling, ω_s < 2ω_max, or ω_s/2 < ω_max. Therefore −ω_s/2 > −ω_max, which gives ω_s/2 > ω_s − ω_max after ω_s is added to both sides of the inequality.
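Whether a given rate avoids destructive overlap can be checked numerically. A minimal sketch using the standard bandpass-sampling inequality (a known result, stated here without derivation); the band edges are illustrative:

```python
# Sketch: checking whether a sampling rate avoids destructive overlap
# when undersampling a bandpass signal occupying [f_lo, f_hi] Hz.
# Standard bandpass-sampling condition (not derived in the text): for
# some integer n, 2*f_hi/n <= fs <= 2*f_lo/(n-1).

def bandpass_rate_ok(fs, f_lo, f_hi):
    bw = f_hi - f_lo
    n_max = int(f_hi // bw)            # number of usable "zones"
    for n in range(1, n_max + 1):
        lo = 2 * f_hi / n
        hi = 2 * f_lo / (n - 1) if n > 1 else float("inf")
        if lo <= fs <= hi:
            return True
    return False

# A 20-25 MHz band (BW = 5 MHz): fs = 10 MHz (= 2*BW) works here only
# because f_hi is an integer multiple of the bandwidth, as in Fig. 6.15.
assert bandpass_rate_ok(10e6, 20e6, 25e6)
assert not bandpass_rate_ok(9e6, 20e6, 25e6)
```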
6.1.7 Approximate reconstruction of a continuous-time signal
from its samples
To this point we mathematically modeled the process of sampling and
reconstructing a continuous-time signal using an impulse train function.
Impulse trains do not exist in nature, so in this section we describe more
realistic alternative methods.
6.1.7.1 Zero-order hold method
Although a Dirac impulse function cannot exist in the real world,^q its
convolution product with another signal can exist.^r Consider the wave-
form shown in Fig. 6.16. Shown are the original analog signal x(t), and
the result of passing x_s(t) = x(t)δ_Ts(t) through what is called a "zero-
order hold" filter to produce a step-like waveform. The zero-order hold
filter gets its name because it replaces each impulse at the input with a
^p In this example, even though we undersample the signal (the Nyquist criterion ω_s > 2ω_max is not satisfied) and adjacent spectra overlap one another to produce "aliasing", the overlapping spectral copies fit into gaps that are inherent to the bandpass nature of the signal. Thus, there is no destructive overlap-adding of spectra. Care must be taken to make sure that this indeed is the case when sampling bandpass signals. For example, if the bandpass signal has bandwidth BW = ω_max − ω_min, then the theoretically minimum possible sampling rate ω_s that may work in some cases (depending on the values of ω_min and ω_max) is 2·BW. This is the case in Fig. 6.15.
^q Instantaneously switching amplitude from zero to infinity, and then instantaneously switching it back to zero, requires an infinite amount of energy to accomplish.
^r For example: if x(t) exists then x(t) * δ(t) also exists.
pulse whose height is a zero-order polynomial (a constant value) at the
output. It can be said that the output of this filter is equal to the area of the
nearest impulse in the input, which is the value of x(t) at the nearest
sampling time.^s Therefore, the impulse response of a zero-order hold filter
is rect(t/T_s) and its output signal is equal to rect(t/T_s) convolved with
x_s(t). Importantly, the step-like waveform at the zero-order hold filter output
can be produced with a high degree of accuracy in the real world, and the
accuracy of this approximation to x(t) improves as T_s decreases (the step
widths get smaller). Thus, we can represent the sampled values of x(t)
using a waveform that does not include any impulses. In practice the step-
like waveform is easily created by sampling x(t) using a sample-and-hold
circuit, as shown in Fig. 6.17.

Figure 6.16. Reconstructing x(t) as the sum of weighted rect functions: (a) x_s(t) * rect(t/T_s); (b) x_s(t) * rect((t − T_s/2)/T_s).

^s This requires a non-causal filter because the nearest impulse may be found almost T_s/2 sec into the future. However, a causal filter results when the output signal is delayed by T_s/2 sec.
Figure 6.17. Sample-and-hold circuit to obtain x_s(t) * rect((t − T_s/2)/T_s) from x(t).
Assume that the switch closes at time t = nT_s sec to instantaneously
charge the capacitor to voltage x(nT_s), and then immediately opens so that
the capacitor holds that charge for T_s sec. The resulting output waveform
will appear as shown in Fig. 6.16(b). Note that this is the same waveform
as results from passing the impulse-train sampled signal x_s(t) through a
zero-order hold filter that includes T_s/2 sec delay. Using this simple
circuit, we have achieved a sampling of x(t) to give the convolution
product

x_s(t) * rect((t − T_s/2)/T_s)
= x(t)δ_Ts(t) * rect((t − T_s/2)/T_s) ≈ x(t). (6.12)
Let's compare the magnitude spectrum of x_s(t) to the magnitude
spectrum of rect(t/T_s) * x_s(t). (We are ignoring the T_s/2 sec delay,
because delay affects phase but not magnitude frequency response.) Since
F{rect(t/T_s)} = T_s sinc(ωT_s/2), and because convolution in time is
multiplication in frequency, we obtain the following result:

F{rect(t/T_s) * x_s(t)} = T_s sinc(ωT_s/2) X_s(ω). (6.13)

Thus, the only difference between spectrum X_s(ω) and that at the
output of a zero-order hold filter is a multiplication by T_s sinc(ωT_s/2)
in the latter. Figure 6.18 shows three spectra: (a) |X_s(ω)|, (b)
|T_s sinc(ωT_s/2)|, and (c) |T_s sinc(ωT_s/2) X_s(ω)|. In the plot of (c) we see
that the baseband image of X(ω) is distorted because of multiplication by
the sinc function.

Figure 6.18. (a) Original spectrum |X_s(ω)| (X(ω) after Nyquist-rate sampling), (b) spectral distortion |T_s sinc(ωT_s/2)| due to the sample/hold process, and (c) the resulting product of these: |T_s sinc(ωT_s/2) X_s(ω)|.

Fortunately, we can compensate for this distortion of
the baseband spectrum by post-multiplying the spectrum with the inverse-sinc
function 1/(T_s sinc(ωT_s/2)), which is finite at all frequencies that are not
exact integer multiples of ω_s. Then, an ideal lowpass filter will be able to
exactly recover the original baseband spectrum X(ω).
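The size of the sinc droop at the band edge, and its inverse-sinc compensation, can be checked numerically. A minimal sketch (note that numpy's `np.sinc(x)` computes the normalized sin(πx)/(πx), so the band edge ω = ω_s/2 corresponds to x = 0.5; the sample rate itself drops out of the normalized gain):

```python
import numpy as np

# Sketch: sinc "droop" of the zero-order hold and its inverse-sinc fix.
# The ZOH multiplies the spectrum by Ts*sinc(w*Ts/2); normalized, the
# gain at the band edge w = ws/2 is sin(pi/2)/(pi/2) = 2/pi (~ -3.9 dB).

droop = np.sinc(0.5)                 # ZOH gain at w = ws/2, equals 2/pi
droop_db = 20 * np.log10(droop)      # about -3.92 dB of attenuation

# Inverse-sinc compensation restores unity gain (finite wherever the
# frequency is not an exact integer multiple of ws):
compensated = droop * (1.0 / np.sinc(0.5))
```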
6.1.7.2 First-order hold method
In the previous section, we learned that although it is practically impossi-
ble to obtain the sampled signal x_s(t) = x(t)δ_Ts(t), it is simple to obtain
rect((t − T_s/2)/T_s) * x_s(t) using a sample-and-hold analog circuit. The
distortion produced by this convolution can be compensated for using
inverse-sinc filtering.^t The sample-and-hold unit output signal, which is
the same as passing x_s(t) through a zero-order hold filter, is characterized by its
amplitudes between any two sampling times: they are polynomials of the
0th order (constants). Alternatively, we may convolve x_s(t) with a
triangular pulse: y(t) = x_s(t) * Λ(t/2T_s). As shown in Fig. 6.19, this is
the same as "connecting the dots" between adjacent sampled values of
x(t). The resulting waveform y(t) is composed of first-order polynomials
(sloped line segments) between sample points, and thus this convolution
is named "first-order hold" filtering. The name is a bit of a misnomer, as
the waveform is not "held" at a constant level while waiting for the next
sample time; instead, the waveform anticipates the arrival of the next
sample amplitude by linearly heading toward that value.^u
Figure 6.19. Reconstructing x(t) as the sum of weighted triangular pulse functions x_s(t) * Λ(t/2T_s) (first-order hold filter).
Finally, we consider the frequency-domain effects of first-order hold
filtering. Since Λ(t/2T_s) ↔ T_s sinc^2(ωT_s/2), the output spectrum is:

Y(ω) = F{Λ(t/2T_s) * x_s(t)} = T_s sinc^2(ωT_s/2) X_s(ω). (6.14)

^t Of course, we cannot compensate for the T_s/2 sec delay, since advance (the opposite of delay) units do not exist.
^u To be implemented as a causal filter, the first-order hold impulse response needs to be delayed by T_s seconds.
The effects of first-order hold filtering on the reconstructed signal
spectrum are demonstrated next in Fig. 6.20.
Figure 6.20. (a) Original spectrum |X_s(ω)| (X(ω) after Nyquist-rate sampling), (b) spectral distortion |T_s sinc^2(ωT_s/2)| due to the first-order hold process, and (c) the resulting product of these: |T_s sinc^2(ωT_s/2) X_s(ω)|.
A causal version of the first-order hold filter may be simulated using a
programmable-slope ramp generator (integrator circuit) to produce a
piecewise linear waveform in response to input signal x(t).Sampling Theory and Practice 245
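In a discrete simulation, first-order hold reconstruction is simply linear interpolation between the samples, which `np.interp` performs directly. A minimal sketch (the sample period and test signal are illustrative):

```python
import numpy as np

# Sketch: the first-order hold "connects the dots", i.e. it is linear
# interpolation, so np.interp reproduces x_s(t) * triangle(t/2Ts) at any
# time between sample instants. Ts and the signal are illustrative.

Ts = 0.25
tn = np.arange(0.0, 2.0 + Ts, Ts)        # sample times n*Ts
xn = np.sin(2 * np.pi * tn / 2.0)        # samples of an example signal

t_mid = 0.375                            # halfway between tn[1] and tn[2]
y = np.interp(t_mid, tn, xn)             # first-order hold output
# Midway between two samples, the output is the average of the neighbors.
```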
6.1.8 Digital-to-analog conversion
The process of converting a binary code representing a sampled amplitude
value to a corresponding voltage is termed “digital-to-analog conversion”
(DAC). An N-bit digital-to-analog converter is a hybrid of digital and
analog circuitry that performs the calculation
x = constant × Σ_{n=0}^{N−1} 2^n b_n , (6.15)

where b_n is a bit in the binary codeword representing a sampled value
whose amplitude is x volts. Note that b_0 is the least-significant bit (carries
the lowest weight) and b_{N−1} is the most-significant bit (carries the highest
weight). The number of different voltage levels that this circuit can
produce is equal to the number of permutations of N bits: 2^N. So, an 8-bit
DAC can produce 256 different output voltages, while a 16-bit DAC can
produce 65,536. For every extra bit in the binary code the number of
possible output amplitude levels doubles.
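The weighted sum of Eq. (6.15) can be sketched directly in code. The LSB-first ordering of b_0 through b_{N−1} follows the text; the 5 V full-scale constant is an illustrative assumption:

```python
# Sketch of Eq. (6.15): the DAC output is a constant times the binary-
# weighted sum of the code bits b_n, with b_0 the LSB and b_{N-1} the
# MSB. The 5 V full-scale value is illustrative, not from the text.

def dac_output(bits, full_scale=5.0):
    """bits[n] = b_n, LSB first; returns a voltage in [0, full_scale)."""
    code = sum(b << n for n, b in enumerate(bits))   # sum of 2^n * b_n
    return full_scale * code / (1 << len(bits))      # constant = FS/2^N

# 8-bit examples: the MSB alone gives half scale; all ones give one step
# below full scale (256 distinct levels in total).
half = dac_output([0, 0, 0, 0, 0, 0, 0, 1])   # -> 2.5
top = dac_output([1] * 8)                     # -> 5.0 * 255/256
```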
Figure 6.21. Circuit diagram symbol for a digital-to-analog converter (DAC), with N digital inputs and a single-wire analog output.
6.1.9 Analog-to-digital conversion
The process of quantizing a voltage and assigning to it a binary code is
termed “analog-to-digital conversion”, or ADC for short. An N-bit
analog-to-digital converter accepts a single analog signal and outputs N
digital signals (one representing each bit in the binary code assigned to the
quantized amplitude level). As with the DAC, an ADC utilizes a hybrid
of both analog and digital circuitry and is fabricated on a single integrated
circuit. The circuits needed to accomplish analog-to-digital conversion
combine sampling, quantization (described in the following section) and
binary coding; they are more complicated than digital-to-analog
converters designed for the equivalent number of amplitude levels. In fact,
some ADCs have a DAC as one of their internal subsystems!
Figure 6.22. Circuit diagram symbol for an analog-to-digital converter (ADC), with a single-wire analog input and N digital outputs.
6.1.10 Amplitude quantization
In this section, we briefly discuss the technique and effects of amplitude
quantization, which makes it possible to represent each of the samples of
a signal using a finite-length code. This operation is normally bundled
together with sampling in an analog-to-digital converter. When describing
the probabilistic aspects of amplitude quantization in this section we
assume the reader has some knowledge of probability theory.
6.1.10.1 Definition
Amplitude quantization, normally referred to as simply “quantization”, is
the act of mapping a continuously-distributed input value to one in a
discrete set of possible output values. If this operation is performed on a
scalar, it is called scalar quantization. If performed on a group of scalar
values (called a vector), it is called vector quantization. For example:
given a scalar amplitude value that is known to be anywhere in the range
0-5 volts, we may round it to the nearest tenth of a volt to produce an
output value. This rounding to the nearest available output level is a type
of scalar quantization. Other, more complicated ways may be used to
choose which output level is selected based on the input signal vs. time.
For example, when dealing with audio signals, a perceptually-based
criterion is often used instead of rounding-to-nearest.Sampling Theory and Practice 247
A system that performs amplitude quantization on its input signal to
instantaneously produce the resulting output signal is called a quantizer.
Figure 6.23 shows a possible input-output relation for a quantizer. The
quantizer characteristic shown is called uniform quantization because of
the uniformly-spaced output levels:
Figure 6.23. Input-output description of a uniform quantizer that rounds input value x to the nearest integer value.
Even though we model a quantizer as a system having an input and
output, measuring its impulse response is meaningless because this system
is nonlinear. The many-to-one mapping of amplitude values caused by
passing a signal through a quantizer (or quantizing that signal) results in
an irreversible loss of information. However, this behavior is predictable
and it is also controllable: by designing a quantizer to have many closely-
spaced output levels, its action will not change the input waveform very
much.
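The rounding characteristic of Fig. 6.23 takes only a few lines to sketch; the step sizes below are illustrative (Δ = 1 reproduces the figure's round-to-nearest-integer mapping, and Δ = 0.1 the 0-5 volt "nearest tenth" example):

```python
import numpy as np

# Sketch of the uniform quantizer in Fig. 6.23: round each input to the
# nearest of a set of uniformly spaced output levels (step size delta).
# Note: np.round sends exact halfway cases to the nearest even multiple.

def quantize(x, delta=1.0):
    """Round x to the nearest integer multiple of delta."""
    return delta * np.round(np.asarray(x, dtype=float) / delta)

xq = quantize([0.4, 0.6, -1.2, 2.7])     # -> [0., 1., -1., 3.]
tenth = quantize(3.14159, delta=0.1)     # -> 3.1 (to fp precision)
```

The mapping is many-to-one and idempotent: quantizing an already-quantized signal changes nothing, which is exactly the irreversible information loss described above.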
6.1.10.2 Why quantize?
Quantization would serve no purpose if not for the fact that, in practice,
only a finite number of quantizer output levels is produced. Returning to the previous
example of rounding a voltage in the range 0-5 volts to the nearest tenth
of a volt: the number of input signal amplitude possibilities is infinite (a
count of the real numbers between 0 and 5), whereas the number of possible
output amplitude levels is 51: {0.0, 0.1, 0.2, ..., 4.9, 5.0}. Because only a
finite number of output signal levels can occur, it becomes possible for us
to represent each of the output levels using a finite-length code.^v Then we
may store a code for each quantized signal sample using a finite number
of bits in memory, or send that code over a communications channel in a
finite time period. Without quantization, the infinite precision of signal
amplitude values would require infinite storage space and infinite trans-
mission time.
The binary code is almost universally used to represent quantizer
outputs. If each level is represented by its own code,^w then an N-bit binary
code will uniquely specify up to 2^N different output levels. With every
extra bit in the code the possible number of quantizer output levels that
can be uniquely represented will double, and (if uniformly distributed) the
distance between quantizer output levels will be cut in half. Because of
this exponential relationship between number of code bits and number of
quantizer levels, it is possible to reduce the quantization distortion to an
acceptable level and still have reasonably short binary code words. Next,
we examine a quantitative measure of the distortion caused by rounding-
to-nearest quantizers.
" For example, we could use a two-digit decimal code (00, 01, ... ,49, 50) or a 6-bit binary
code (000000 thru 110010) to represent each of the 51 outcomes.
™ This is not always the case. For example, pairs or triplets of output levels may be
represented using a single binary code word. Such grouping of quantizer outputs prior to
binary code assignment is common when variable-length codes are used, taking advantage
of the fact that combinations do not occur with equal probability (e.g., Huffman codes),Sampling Theory and Practice 249
6.1.10.3 Signal to quantization noise power ratio (SNR)
Consider a scalar quantizer that instantaneously rounds the input
amplitude value x to the nearest one of L available output levels, producing
output amplitude value x_q. Assume that these L output levels are
uniformly distributed over the range [−x_max, +x_max], and that the input
amplitude value x is equally likely to be anywhere in this range. Using
the tools of probability theory, we will express the input to the quantizer
as a sample of random variable X, whose probability density function
(PDF) is:

f_X(x) = (1/(2x_max)) rect(x/(2x_max)). (6.16)
Figure 6.24. Probability density functions of the input and output signals of a uniform quantizer, as described above.
Chopping the quantizer range into L contiguous intervals of width Δ =
2x_max/L, and placing an output amplitude level in the middle of each
interval, will give the result that each amplitude level is selected with equal
probability (= 1/L). This is due to the input signal amplitude distribution
(uniform over the quantizer input range) in combination with the specific
placement of quantizer output levels. Let random variable X_q describe the
quantizer output signal. Figure 6.24 shows the probability density
functions of random variables X and X_q.

Define quantization error q = x_q − x. This error is probabilistically
described by random variable Q = X_q − X. The amplitude range of quan-
tization error will be [−x_max/L, +x_max/L] = [−Δ/2, +Δ/2], since the
input signal amplitude is never more than half a step size Δ away from the
nearest quantizer output level to which it is rounded. This means that the
probability density function of the random variable Q is uniform between
−Δ/2 and +Δ/2, which is the same as the PDF of X except that the
amplitude range of Q is compressed by factor L.
In an electronic circuit, when a signal amplitude of v volts (i amperes)
is applied across (passed through) an r Ω resistor, the corresponding power
dissipated as heat in the resistor will be v²/r (i²r) watts. For this reason,
we often refer to the mean-squared value of random variable X, when
measured probabilistically as E[X²], as the power of X (within a constant).
For uniform PDFs of continuous random variables having zero mean
value, as is the case for random variables X and Q above, it may be shown
that their powers are proportional to the squares of their amplitude spans:
(span)²/12. Therefore, the power of signal X is (2x_max)²/12 = x_max²/3 and the
power of quantizer error signal Q is (2x_max/L)²/12 = x_max²/(3L²).

Since the quantizer output signal is x_q = x + q, we may draw a diagram
that models the quantizer as a system that adds quantization "noise" q to
the input amplitude value x, as shown in Fig. 6.25:

Figure 6.25. Noise-additive model for a quantizer.
A measure of quantizer distortion is commonly chosen to be signal
power divided by noise power at the output. This measure, commonly
used in electrical engineering, is known as the Signal-to-Noise Ratio (SNR).
In this application, it is known as SNR_Q, the ratio of signal power to
quantization noise power. In the example discussed above, where a signal
is uniformly distributed over amplitude range [−x_max, +x_max] and is
matched by the range of a uniform L-level quantizer to which it is input,
the value of SNR_Q is:

SNR_Q = E[X²]/E[Q²] = (x_max²/3)/(x_max²/(3L²)) = L². (6.17)

If, as is usually the case, the quantizer output is represented with a
fixed-length binary code of length N bits, then the number of quantizer
output levels is L = 2^N. Substituting, the expression for SNR_Q becomes
2^(2N). When stated using a dB scale,

SNR_Q (dB) = 10 log_10(2^(2N)) = 20N log_10(2)
= 20N(0.301) = 6.02N
≈ 6N dB. (6.18)
(uniformly-distributed signal)
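The 6.02N dB result of Eq. (6.18) is easy to verify numerically: quantize an input swept uniformly across the full quantizer range and measure the ratio of signal power to error power. A minimal sketch (N = 8 and the grid density are arbitrary illustrative choices):

```python
import numpy as np

# Sketch verifying Eq. (6.18): quantize an input swept uniformly across
# the full range [-1, 1] of an L = 2^N level uniform quantizer and
# measure SNR_Q. Theory predicts 10*log10(L^2) = 6.02*N dB.

N = 8
L = 2 ** N
delta = 2.0 / L                                   # step size over [-1, 1]
x = np.linspace(-1.0, 1.0, 200001, endpoint=False)
xq = delta * (np.floor(x / delta) + 0.5)          # round to bin midpoints
q = xq - x                                        # quantization error

snr_db = 10 * np.log10(np.mean(x**2) / np.mean(q**2))
# snr_db lands within a small fraction of a dB of 6.02*8 = 48.2 dB.
```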
The 6 dB per quantized-signal code bit rule of thumb only holds for
signals whose amplitudes are uniformly distributed over the entire uniform
quantizer range. Let's look at some simple examples of signal waveforms
having non-uniformly distributed amplitudes. Consider a sinusoid whose
amplitude spans the quantizer range −x_max to +x_max. A sinusoid having
amplitude C has power C²/2 regardless of its frequency or phase
shift; therefore, the quantizer input signal power is x_max²/2. After uniform
quantization with step size Δ = 2x_max/L, the error due to rounding has the
same power as for any other input signal;^x this gives a quantization error

^x We assume that the input signal amplitudes are continuously distributed so that the quantizer error signal still has a uniform distribution over [−Δ/2, +Δ/2]. Without this assumption, it is even possible for quantization error to equal zero (on the second pass of a signal through the same rounding-to-nearest quantizer).