Digital Signal Processing Basics
Digital Signal Processing Basics
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
737
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
Time series Continuous string of sample values. dependent (e.g., amplitude) axes. This science goes back
Window Process of isolating and modifying data to the to the very early days of electronics, when now-primitive
exclusion of all others. vacuum tube amplifiers, separately packaged capacitors,
z-Transform Mathematical method of analyzing and rep- inductors, resistors, and the like were used to filter
resenting discrete signals. signals on the basis of their frequency content. These
analog systems were often modeled in terms of ordinary
linear differential equations. During the 1940s it became
DIGITAL SIGNAL PROCESSING (DSP) is a rela- popular to study such systems with a differential equation
tively new technical field that is concerned with the study analysis tool called the Laplace Transform. Using these
of systems and signals with respect to the constraints and classical analog methods, a number of important filter
attributes imposed on them by digital computing machin- forms were developed. Those that have had a major
ery. It differs from other signal processing technologies— impact on the design of digital filters have been the
namely, analog and discrete—in terms of signal definition, maximally flat (Butterworth) filter and the equal-ripple
computational procedures, and performance limitations. filters referred to as Chebyshev and elliptic filters. In the
Because of the organization of a digital computer and the late 1940s and 1950s a new dimension in analog signal
finiteness of digital calculations, the field of DSP has de- processing evolved, which was called sampled data. A
veloped a unique set of analysis and synthesis procedures. sampled data signal would have a graph whose dependent
A DSP system will accept a string of digitally encoded axis is continuously defined but whose independent axis is
samples, called a digital time series (or time series), and discrete. This is the result of sampling a continuous signal,
modify, manipulate, classify, or quantify them using DSP say x(t), at periodic intervals of time and then saving the
algorithms. The tools used to implement these DSP algo- resulting sampled values. For example, if a ±10-V signal
rithms are digital computer software, hardware, or their is sampled at the sample rate of 1000 times per second,
mix. The digitally processed data can then be used or an- the resulting discrete signal could look like x(n) =
alyzed by human or machines. Digital signal processing {. . . , +1.23334, −0.342344, +7.24324, . . .}, where the
has become an essential element in the study of commu- three displayed real sample values represent the actual
nication, control, medicine, speech, vision, radar, sonar value of x(t) at times t = (k − 1)ts , kts , (k + 1)ts where
systems, plus a host of other applications where the high ts = 1/1000 = 10−3 see. These early discrete systems
speed and high precision of digital computers can be ap- were used in low-frequency control applications, such
plied to signal and system synthesis or analysis. as autopilots, where conventional analog filters were too
bulky, unreliable, and expensive to be considered viable
design options. Mathematically, the designers of discrete
I. ORIGINS OF SIGNAL PROCESSING systems used essentially those techniques developed
for linear continuous system analysis by modifying the
The foundations of DSP are interwoven with the origins of Laplace transform to a form we now call the z-transform.
its counterparts, analog signal processing and algorithms. During the late 1940s another major milestone was
An algorithm is a computing procedure that is generally achieved. Claude Shannon, of Bell Laboratories, and his
well suited for execution on a digital computer. The theory associates noticed that if a signal s(t) contains no fre-
of algorithms can be traced back to the seventeenth- and quency components at or above f N Hz [s(t) is then said
eighteenth-century work of the celebrated mathematicians to be bandlimited to f N Hz], then s(t) can be completely
Sir Isaac Newton and Karl Freidrich Gauss. An impor- reconstructed (interpolated) from its sample values pro-
tant signal processing tool now called the discrete Fourier vided the samples are taken at a rate equal to or in excess
transform (DFT) may have been derived by Gauss as an of f s = 2 f N samples per second. This condition is known
algorithm as early as 1805. This would predate Fourier’s as the Nyquist sampling the orem, f s is referred to as the
discovery in 1807 of the infinite harmonic series, which Nyquist sample frequency, and f N is sometimes called the
was published in 1822. In fact, many early digital signal Nyquist frequency. In the time domain, the samples must
processing successes were simply classic numerical anal- be spaced no further apart than ts = 1/ f s sec, where ts is
ysis algorithms programmed for execution on a general- the sample period. If a bandlimited signal is sampled at
purpose digital computer, but modified to exhibit special a rate slower than the Nyquist sample frequency, a type
signal processing attributes. of error called aliasing can occur. As the name suggests,
The other foundation on which digital signal processing an aliasing signal impersonates another signal in such a
is built is its predecessor, analog signal processing. An manner that they cannot be discriminated from each other.
analog signal would have a graph that is continuously This problem is suggested in Fig. 1. For example, if two
defined along both the independent (e.g., time) and signals s1 (t) = 1 and s2 (t) = cos(2π t) are sampled once
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
FIGURE 1 Example of aliasing: graphical interpretation of spectral isolation and overlap as a function of sample
rate. (a) Signal spectra. (b) Spectra for sampling beyond the Nyquist sampling rate; f s > 2f m (f m is the maximum
frequency). (c) Spectra for sampling at the Nyquist rate (f s = 2f m ). (d) Spectra for sampling below the Nyquist rate
(f s < 2f m ).
per second, their resulting sample series would both be machines became available, there was immediate interest
given by {1, 1, 1, . . .}. The reason for this undesirable cir- in implementing a host of computing algorithms (some
cumstance is that s2 (t) is bandlimited to 1 Hz and therefore dating from the seventeenth and eighteenth centuries).
must be sampled at a rate in excess of two samples per sec- These early attempts were, in a large part, extensions to
ond. Once properly applied, the sampling theorem had an the then maturing field of numerical analysis. Because
immediate impact on the fields of communication, control, the digital computer was arithmetically fast and could be
as it later did on the new field of digital signal processing. programmed to iteratively solve a complex problem, com-
puting algorithms were extensively used to statistically
analyze and mathematically manipulate signals. During
II. ORIGINS OF DIGITAL the 1960s the race into space accelerated this type of signal
SIGNAL PROCESSING processing and gave rise to a host of estimation, smooth-
ing, and prediction algorithms for navigation, control, and
The study of computing algorithms predates the advent space communication applications. The fields of pattern
of the digital computer. However, once digital computing recognition and statistical hypothesis testing (e.g., radar
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
signal detection) were also greatly enriched with the avail- as the microprocessor have created the new field known
ability of general-purpose computers. as digital signal processing.
During the transition period, ranging through much of From a theoretical viewpoint, digital filters provide a
the 1960s, the general-purpose digital computer was too rich field for study. Because of their robust theoretical
expensive and unreliable to be used as an on-line signal base, one would expect more advanced digital signal pro-
processing tool. But that began to change. In the mid- cessing systems to be developed in the future that are based
1960s the first DSP milestones were achieved in the form on mathematical abstractions. The level of mathemati-
of fast transforms and filters. One of the attempts to use cal consciousness in digital processing is attributable to
the computer as a signal processing tool was in the area the nature of the hardware in which these systems reside.
of spectral analysis using Fourier transform and its vari- Whereas analog systems are defined over a real coefficient
ations. A discrete Fourier transform accepts as input a field, the data and coefficients associated with a digital fil-
set of discrete samples in the time domain (i.e., a time ter are defined with finite precision and stored in registers
series) and transforms them into information in the fre- of finite word length. The levels of mathematical skill re-
quency domain. Unfortunately, because of its high mul- quired to work algebraically in this area are more advanced
tiplication budget, the DFT formula was not amenable to than those required in the field of real numbers.
efficient digital computer data processing. It was Cooley The advantages of digital filters are numerous:
and Tuckey who popularized the so-called fast Fourier
transform (FFT), which revolutionized the field of the 1. They can be fabricated with available high-density,
DFT. The FFT achieved two milestones. First, it provided low-cost digital hardware. This should continue to
a service—namely, faster spectral analysis. Second, it was improve as digital signal processing assimilates the
designed with an awareness of the strengths and limita- continuing advance in the digital electronics industry.
tions of digital computers—namely, memory utilization 2. Certain classes of digital filters have guaranteed
and data flow. Because of this, the FFT altered the at- stability.
titude of digital signal processing engineers and scien- 3. They are free of the impedance-matching problems
tists, who charted a new direction thereafter. Refinements associated with analog filters.
and design procedures that optimize computer execution 4. Digital filters can work at extremely low frequencies
rates and resource utilization were sought. Digital com- that cannot be supported by analog filters.
puter architectures have also been developed that reflect 5. Digital filters are programmable if realized with a
the requirement of DSP algorithms. Finally, many other programmable processor.
fast digital algorithms and transforms have since been 6. They efficiently support computer-aided data
discovered. analysis [simple input–output (I/O) logging and
The other major branch of DSP theory has concerned transfer] and can often be interfaced to the power
digital filters, which can replace many of the analog elec- supplies and mechanical structure of existing digital
tronic filters that have been in daily use for over half a processors.
century. Passive analog filters are designed with resistors 7. They can work over a wide range of critical
(R), capacitors (C), and inductors (L) only. Active analog frequencies, which is a difficult task for analog filters.
filters add electronic amplifiers to the R, L, C parts list. 8. They can be used in conjunction with data
However, these classic filters must be designed with re- compression (i.e., input and/or output) schemes.
alistic values for resistance, capacitance, and inductance. 9. Certain digital filters have outstanding phase
Because of the physical limitations of passive components, linearity.
analog filters have a limited frequency response. They also 10. They can have high accuracy and precision. The
perform poorly at very low or very high frequencies. Ana- precision of an analog filter generally does not
log filters often require extensive and costly alignment and exceed 60–70 decibels. The precision of a digital
adjustment and require maintenance throughout their use- filter can be extended simply by increasing word
ful life span. Linear phase behavior is difficult to achieve, length. For example, an n-bit digital word defines a
and the fact that analog filters cannot be programmed lim- dynamic range approximated by 6.0206n dB.
its their utility and design flexibility. The alternative digital 11. Digital filters do not require periodic alignment,
filter technology overcomes these limitations. Digital fil- which is necessary for analog filters. Furthermore,
ters are a by-product of the great advances in solidstate they do not “drift” with parameter aging or
electronics during the 1970s, which produced a wealth of environmental changes.
high-performance, low-cost electronic devices capable of 12. Because of their digital nature, digital filters are less
manipulating signals in an algorithmic manner not possi- sensitive to certain classes of noise that corrupt
ble with analog units. Advances in digital technology such analog filters (e.g., line frequency noise).
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
The disadvantages of digital filters are few: the most common method of converting an analog signal
to a discrete format.
1. They are subject to quantization errors. Shannon’s theorem (ca. 1948) states that if x(t) is a
2. They may exhibit some form of limit cycling. signal whose highest-frequency component is bounded by
3. Hardware development time may be longer. f N , and if x(t) is periodically sampled sot that the sample
4. Their design and synthesis may be more difficult period ts is less than or equal to ts < 1/2 f N (referred to
algebraically unless a general-purpose digital as the Nyquist sample rate), then x(t) can be interpolated
computer is used to support the design task. from its time series by the rule
∞
x(nts ) sin[ωc (t − nts )]
Many of these disadvantages can be overcome through x(t) = .
good digital design practices and familiarity with hard- n=−∞ ωc (t − nts )
ware, analysis protocols, and procedures. However, use of this interpolating equation is not practical,
General-purpose computers became available to sci- and the following simpler approximations to the equation
entists during the 1950s. At first, primitive filters were have been developed:
designed and implemented as computer programs. Some The zeroth-order hold (interpolator):
simply emulated the calculus operation of integration
by use of iterative data processing techniques. An iter- x(t) = x(n), t ∈ [nts , (n + 1)ts ].
ative calculation is one in which the output is a func- The first-order hold (interpolator):
tion of both the input and past outputs. Because of the
recurrence relationship, these algorithms would now be x(t) = x(n) + t[x(n + 1) − x(n)]/ts ,
called recursive filters. If the output can be written as a t ∈ [nts , (n + 1)ts ].
function of input or independent variables only, a filter
would be called nonrecursive or, sometimes, a transver- The zeroth-order hold is often called a sample and hold
sal filter. Digital filters can be used to smooth, interpo- circuit, while the first-order is called a linear interpolation.
late, condition, or analyze data. For example, the simple The analysis of a discrete system can be performed
recursive filter given by yn+1 = −0.9yn + xn amplifies sig- using transform methods. One of the popular discrete
nals of alternating sign and attenuates signals of constant transforms is the z-transform. It belongs to a class of al-
sign. Starting at y0 = 0, the signal xn = (−1)n produces gebraic operations referred to as impulse-invariant trans-
y1 = 1, y2 = −1.9, y3 = 2.71, and so forth, while xn = 1 forms, which means that the inverse transform gives back
yields y1 = 1, y2 = 0.1, y3 = 0.9, . . . Scientific data anal- the original sample values. The z-transform is related to
ysis tools of this type operate “off-line” and use floating- the Laplace transform variable s by z = exp(sts ). If {x(n)}
point arithmetic. However, they do not represent true DSP is a time series consisting of sample values x(n) = x(nts ),
statements since they were not designed with an aware- then its z-transform is given by
ness of the imprecision of digital arithmetic, memory
∞
limitations, computational complexity, data and control x(z) = x(n)z −n
flow, and the like. But it was not long before engineers n=−∞
and scientists were developing computer routines and al- Some of the basic properties of the z-transform are as
gorithms that did lend themselves to fast and efficient follows:
digital processing. This began the era of digital signal
processing. 1. The z-transform of {x(i)} exists if and only if {x(i)} is
unique and bounded for all n.
2. Two z-transforms X (z) and Y (z) are equal if and only
III. SIGNAL AND SYSTEM if {x(i)} ≡ {y(i)}.
REPRESENTATION 3. The z-transform of x(t/T ) is independent of the
sampling period T .
Signal processing is concerned with the problem of ma- 4. The z-transform is a linear operator in that if x(n) →
nipulating and managing information. The information X (z) and y(n) → Y (z), then if z(n) = x(n) + y(n), it
source (signal) may appear in one of several forms. A con- follows that Z (z) = X (z) + Y (z).
tinuous signal process is one that is continuously resolved 5. The shifting property holds in that if y(n) = x(n + m)
in both its independent and dependent axes. A discrete sig- and x(n) → X (z), then Y (z) = z m X (z).
nal process is discretely resolved (quantized) only in the 6. The initial and final value theorem states that if x(0) =
dependent variable. A digital signal process is discretely limX (z) as z → ∞, x(∞) = lim[1 − z −1 )X (z)] as
resolved in both variables. Periodic (uniform) sampling is z → 1.
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
FIGURE 2 Relationship between s- and z-transforms in the complex plane. (a) Laplace or s domain, s = σ + i ω;
(b) z-transform domain, z = e st . The shaded s plane in (a) is the region that results in a unique and stable z-transform.
The shaded area of the z plane in (b) is the region of the stable z-transforms.
7. The linear convolution property holds in that if x(n) that the z-transform is unique if and only if the modulus of
→ X (z) and y(n) → Y (z), then z(n) = x(n) ∗ y(n) sts is bounded by ±π (i.e., |sts | ≤ π ). For s = jω = j2π f ,
implies that Z (z) = X (z)Y (z). A more formal it follows that |2π f ts | ≤ π or | f | ≤ 1/2ts , which is the
mathematical statement is Nyquist sampling frequency. It is sometimes useful to in-
∞ terpret the z-transform terms of parameters such as those
z(n) = x(k)y(n − k) shown in Table I.
k=−∞ Finally, the inversion of a given X (z) is formally defined
∞
Z
in terms of the contour integral
= x(n − k)y(k) ←→ X (z)Y (z).
k=−∞ −1 1 X (z)z n dz
x(n) = Z [X (z)] = ,
8. Parseval’s theorem becomes 2π j C z
∞
1 where C is a restricted closed path found in the z plane.
x(n)y(n) = X (z)Y (z −1 )z −1 dz. However, one rarely approaches the problem of inversion
k=−∞
2π j
through the use of this difficult integral equation. Instead,
9. Stability properties in the s domain transfer into the z more simplified methods have been developed that ex-
domain. The locations of stable s-domain pole values pedite the inversion process. The most popular of these
are given by s = σ + jω for σ < 0. Substituting this methods are
condition into the defining equation for a z-transform,
it follows that |z| = |exp(σ ts ) × exp( jωts )| ≤
TABLE I Parameters Used in Interpreting the
|exp(σ ts )| < 1 if σ < 0. That is, the stable complex z -Transform
values of z are bounded to be within the periphery of
the unit circle given by |z| = 1. This condition is often Parameter z Operator Range
referred to as the circle criterion and is summarized in
f , Hz z = e j2π f ts 0 ≤ f ≤ fs
Fig. 2.
ω, rad/sec z = e jωts 0 ≤ ω ≤ 2π f s
f = f / f s normalized to f s z = e j2π f 0≤ f ≤1
Properties 7 and 8 make the z-transform well suited for jf∗
f ∗ = f /0.5 normalized z=e 0≤ f∗ ≤2
representing systems as well as signals. to f Nyquist
There are some restrictions on the use of the z-trans- θ , rad z = e jθ 0 ≤ θ ≤ 2π
form. From the definition of z [i.e., z = exp(sts )] it is seen
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
1. Long division. the bilinear transform are due to its ability to distort the
2. Partial fraction or Heaviside expansion. frequency axis by a property known as warping. If rep-
3. Residue theorem of complex variables. resents the frequency axis of a continuous system whose
Laplace transform is H (z) and ω is that for the discrete
The standard z-transform is by no means the only map- system H (z), where H (s) → H (z) by using the bilinear
ping of the s domain to the z domain in practical use transform, then the warping equation is
today. Many others have been considered, and have en-
joyed a certain degree of popularity such as (1) the bilinear = (2/ts ) tan (ωts /2),
z-transform and (2) the matched z-transform. ω = (2/ts ) tan−1 ( ts /2).
The bilinear z-transform is defined by
The warping of the continuous frequency axis is shown
2(z − 1)
s= . in Fig. 3. If the design of a discrete filter is predicted on
ts (z + 1) knowledge of its continuous filter model H (s), then the
A strength of this transform is its ability to map a given final design specifications given along the discrete fre-
continuous signal having a Laplace representation H (s) quency axis are prewarped into the continuous frequency
into the z domain without the algebraically complex oper- axis. From this, the resulting continuous filter H (s) is
ations associated with the standard z-transform. However, designed.
the bilinear z-transform is not impulse invariant and, as a The matched z-transform is normally used to model
result, is not necessarily the transform of choice for time narrowband frequency-selective filters and is given by
domain analysis or synthesis. However, the frequency re-
sponse modeling ability of a bilinear transform system
N
i=1 (s − ai )
H (s) = ,
is superior to that of the standard z transformed system. N
i=1 (s − bi )
Here the metric of comparison is assumed to be the sim-
N
i=1 1 − exp(ai T )z −1
ilarity of the frequency response of an analog filter H (s) H (z) = .
and its discrete model H (z). The spectral properties of
N
i=1 1 − exp(bi T )z −1
FIGURE 3 Frequency and prewarping relationship between continuous ( ) and discrete (ω) frequency domains.
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
However, in most frequency domain design applications where W N is a complex exponential W N = exp(− j2π/N )
the bilinear z-transform is used, and when time domain and is sometimes referred to as the twiddle factor. A few
response is the issue, the standard z-transform is preferred. of the more important DFT properties are
Number of
Researchers Date Sequence lengths N DFT values Application
C. F. Gauss 1805 Any composite integer All Interpolation of orbits of celestial bodies
F. Carlini 1828 12 — Harmonic analysis of barometric pressure
A. Smith 1846 4, 8, 16, 32 5 or 9 Correcting deviations in compasses on ships
J. D. Everett 1860 12 5 Modeling underground temperature deviations
C. Runge 1903 2n k All Harmonic analysis of functions
K. Stumpff 1939 2n k, 3n k All Harmonic analysis of functions
G. C. Danielson 1942 2n All X-ray diffraction in crystals
and C. Lanczos
L. H. Thomas 1948 Any integer with relatively prime factors All Harmonic analysis of functions
I. J. Good 1958 Any integer with relatively prime factors All Harmonic analysis of functions
J. W. Cooley and 1965 Any composite integer All Harmonic analysis of functions
J. W. Tukey
S. Winograd 1976 Any integer with relatively prime factors All Use of complexity theory for harmonic analysis
a After Herdeman, Johnson, and Burris, IEEE ASSP Magazine, October 1984.
if any, would now be attributed to aliasing or a phe- theorem of Fourier transforms states that multiplication in
nomenon called leakage, as suggested in Fig. 5. How- the time domain is equivalent to convolution in the fre-
ever, leakage does not occur when the signal period and quency domain, so it follows that X w (ω) = W (ω) ∗ X (ω).
sampling interval are related by T = mTp , where m is an The object of a window is to place X w (ω) in maximum
integer. agreement with X (ω), which implies that ideally W (ω) is
Finite aperture effects can often be minimized with the given by the Dirac delta distribution δD (ω). Since the con-
use of data windows. A data window modifies a raw volution of anything with a δD is simply a copy of the orig-
time series {x(n)} by using the memoryless rule xw (n) = inal, it would follow that X w (ω) ≡ X (ω). Unfortunately, if
w(n)x(n), where w(n) is the window function. The duality W (ω) = δD (ω), then w(t) ≡ 1 for all t ∈ (−∞, ∞), which
FIGURE 4 Comparison of Fourier and discrete Fourier spectra of a common signal. Note the differences between
DFT and Fourier transforms due to finite aperture effects in the spectra at the bottom.
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
N
1 −1 N 2 −1
X (k1 , k2 ) = x(n 1 , n 2 )W Nn 11 k1 W Nn 22 k2 .
n 1 =0 n 2 =0
N 1 −1
N 2 −1
X (k1 , k2 ) = W Nn 11 k1 x(n 1 , n 2 )W Nn 22 k2
n 1 =0 n 2 −0
N 1 −1
= W Nn 12 k1 X (n 1 , k2 ),
n 1 =0
* y(n) =
x(n) x(n)y((n − i) mod N ) ρx(k)y = IDFT[X (k)Y ∗ (k)]/(|X (0)||Y (0)|)1/2
i=0
The coherence function is also defined in terms of the DFT
N −1
and can be used to answer the following questions:
= x((n − i) mod N )y(n).
i=0
1. Is the system linear?
In general, DFT(x(n) ∗ y(n)) = DFT(x(n) * y(n)). Nev- 2. It the initial state of the system zero?
ertheless, there are times when one would like to use the 3. Is the system forced by a single input rather than
computational power of the FFT to do linear convolution. multiple inputs?
When this is the case, a technique called zero padding
can be used to create two new zero-filled 2N -sample In addition, a number of other specialized system-oriented
time series {xZ (n)} and {h Z (n)}. Each consists of ex- tests have become practical due to the advent of the DFT
act images of the original N -sample time series {x(n)} (i.e., FFT) and DSP procedures. All of the methods have
and {h(n)}, which are appended with blocks of N zeros. evolved because of the development of efficient and fast
Using 2N -point DFTs and an IDFT, it is known that DSP computation tools.
IDFT(X ZF (k)HZF (k)) = x(n) ∗ h(n) (linear convolution)
over N contiguous samples. The other N sample values of
the 2N -point IDFT are artifacts and are discarded. To do VIII. DIGITAL FILTER DESIGN
convolution over a KN -sample time series with N -point
DFTs, zero padding techniques are used in conjunction One of the major branches of study in the digital signal
with data partitioning methods called overlap and save or processing field is that of digital filter design and analysis.
overlap and add. The design objective is to convert a discrete system, given
The DFT is used extensively to estimate the transfer in terms of its impulse response [h(n)] or transfer function
function H (k) of a linear system. The transfer function is H (z), into hardware or software statement. A filter’s hard-
given by the ratio H (k) = Y (k)/ X (k). The shape of the ware design is said to be its architecture. Filter analysis is
magnitude profile |H (k)| = SQRT[H (k)H ∗ (k)] defines concerned with issues of speed, precision, and complex-
the magnitude frequency response of the filter, while ity. It was noted earlier that if a digital filter is designed
φ(k) = arctan{Im[H (k)]/Re[H (k)]} establishes its phase without using signal feedback, it is called a nonrecursive
reponse. The derivative of the phase profile, approximated or transversal filter and can be mathematically represented
by φ(k) = φ(k) − φ(k − 1), is called the group delay and as
is often minimized as a design criterion in digital commu-
N −1
nication/telemetry applications. H (z) = h(n)z −n
The similarity-measuring autocorrelation function, n=0
with delay or lag parameter k, can be defined in terms if the impulse response is given by the N -element time
of the DFT as follows: series {h(0), h(1), . . . , h(N − 1)}. Some special types of
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
recursive filters also satisfy this condition, but they are ideal filter with a Chebyshev polynomial, which is used
rarely seen in practice. Because the filter exhibits a finite to define the individual values of {h(n)}. The ideal filter
impulse response, it is also called an FIR filter. The fre- is assumed to have unit gain in its passband(s) and zero
quency domain response can be computed by substituting gain in the stopband(s). This design process is now highly
z with ·z = exp( jω). In the frequency domain the FIR automated and is found in many software packages used
transform becomes H (ω) = |H (ω)| exp(φ(ω)), where commercially and in the public domain. Finally, FIR filters
|H (ω)| and φ(ω) are called the magnitude frequency and also possess a simple but possibly high-order hardware
phase response, respectively. The principal advantages architecture and software code. A direct implementation
of FIR filters are twofold: (1) they are simple to analyze of the transfer function is suggested in Fig. 8.
since they are defined by a finite linear equation, and If the impulse response of a filter is of infinite duration,
(2) they can be designed to have a phase response that the filter is called an infinite impulse response (IIR), and
satisfies the linear equation φ(ω) = aω + b, where b = 0, the response is given by
π/2, or −π/2. Linear phase filters are important in ap- ∞
plications where nonlinear phase destortion can degrade H (z) = h(n)z −n ,
a database or system performance. Some examples are n=0
digital communication line equalization, speech and which can often be factored into the form
image processing, and antialiasing filters to precondition
the data sent to a DFT. For a FIR filter to achieve linear
M
N
H (z) = bi z −i 1+ ai z −i .
phase performance, the filter must have a symmetric or i=0 i=1
antisymmetric finite impulse response. It is also common
to find the amplitude profile of a FIR filter overlaid by a = N (z)/D(z).
symmetric data window such as those introduced in the The feedforward information paths are characterized by
discussion of DFT. In such cases the original impulse N (z), while the feedback paths are specified by D(z). The
response {h(n)} is combined with a window function solutions to N (z) = 0 are called the zeros of filters; the
{w(n)} to form a new FIR given by {h (n)} = {h(n)w(n)}. are given by D(z) = 0. An IIR is said to be bounded
poles
An FIR filter can be designed to be frequency selective if |h(n)|, over all n, is finite. If the poles of an IIR are
as well. The four basic frequency selection options are low bounded within with the unit circle, then the filter’s forced
pass, high pass, bandpass, and bandstop (or bandreject). response is also known to remain bounded or stable. Many
A popular design procedure approximates the shape of an IIR filters are specified in terms of their optimal analog
FIGURE 8 Direct implementation of a linear phase FIR lowpass digital filter using shift registers, multipliers, and
adders. Phase is plotted between the principal angles ±π . (a) Impulse response, (b) architecture, and (c and d)
frequency domain performance.
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
counterparts, which generally take one of the following Digital filters differ from their discrete counterparts
forms: H (z) in that they are designed with digital hardware, arith-
metic, and an n-bit data word. If floating-point arithmetic
Type Design optimality criteria is used, the dynamic range and precision are high but the
speed suffers. If speed is an important design issue, fixed-
Butterworth Maximally flat point arithmetic is usually adopted.
Chebyshev Equal-ripple passband A high-speed fixed-point design with a signed (n + 1)-
Elliptic Equal-ripple passband and stopband bit format is subject to quantization errors. Quantization
is a process by which a real variable is converted into a
It is assumed that the ideal filter model has unity gain finite-precision digital word. Errors introduced by approx-
in the passband and zero elsewhere. The maximally flat imating a real variable with its nearest binary-coded deci-
criterion refers to the slope (derivative) being zero at a mal (BCD) value manifest themselves as input, coefficient
known critical frequency. The equal ripple (or equiripple) quantization, or arithmetic errors. If a real input is defined
is sometimes referred to as a minimax criterion. In a mini- over ±V volts and is presented to an analog-to-digital
max design, the maximum deviation from the ideal model (ADC) converter for quantization, the quantization step
is minimal. The design of a Butterworth, Chebyshev, or size is given by Q ≤ 1/2n V/bit. The value of the quanti-
elliptic filter H (s) is now a classic study and has been zation error ε is normally defined as the difference between
computerized. Examples of classically defined IIRs are a real variable X and its nearest binary-valued representa-
presented in Fig. 9. tion, say X B . That is, ε = (X − X B ) and is usually modeled
Infinite impulse response digital filters are designed by to be of zero mean for rounding and ±Q/2 for truncation.
reflecting the desired filter parameters [i.e., H (z)] into the The error variance in both cases is Q 2 /12 = 2−2n /12. The
continuous-frequency domain [i.e., H (s)], where the filter error can be reduced by increasing n, but this results in a
equations are well known. This can be accomplished by slower design.
using one of two procedures, both of which begin with Arithmetic errors are the result of truncation or round-
definition of a desired Butterworth, Chebyshev, or ellip- ing. For example, the multiplication of two n-bit numbers
tic analog prototype filter, denoted Hp (s) in Fig. 10. For is a full-precision product of length 2n bits. This 2n-bit
prespecified passband and stopband gains Ap and Aa , the product is then rounded or truncated to its nearest n-bit
prototype filter satisfies |H (s)| = Ap at s = j1 (normal- value. The errors associated with this operation can be
ized frequency in radians per second) and has sufficient modeled in the manner used for input or coefficient error
order to ensure that the gain at the normalized stopband analysis. However, in an IIR arithmetic errors can recir-
frequency a = a / p satisfies |H ( a )| = Aa . The fre- culate within the filter through its feedback paths. As a
quency terms, denoted in this formula, are in fact the result, once inserted, these errors may never completely
prewarped version of the critical discrete frequencies ωp leave the filter. The effect of such errors on the frequency
and ωa as shown in Fig. 3. From that point on, the method response of a Butterworth or Chebyshev low-pass filter, as
proceeds as shown in Table IV (see Fig. 10). a function of word length n, is interpreted in Fig. 11. Many
Compared to an FIR, some general conclusions may be of the IIR forms developed over the past several decades
drawn about on IIR. First, if magnitude frequency response represent attempts to reduce the effects of these circulating
is to be sharp or abrupt in terms of a transition between errors and are called low-sensitivity filters. The error per-
passbands and stopbands (called the filter skirt), the IIR formance of some of the more commonly designed filters
is the design of choice. If phase performance is the de- is shown in Table V, where complexity can be translated
sign objective, an FIR should be chosen. The IIRs are into added cost or slower throughput.
generally of low order (N ≤ 16), while the FIRs are usu- Besides the computational inaccuracies associated with
ally of high order (N ≥ 16). For the IIR case, the derived finite-precision effects, other phenomena have been ob-
transfer function can be factored into a number of rec- served. The most important of these is limit cycling, which
ognizable forms. If H (z) = Hi (z), a serial or cascade appears in zero input limit cycling or overflow limit cy-
form is realized. If H (z) = Hi (z), a parallel design is cling. Zero input limit cycling is a granularity problem
achieved. The individual subsystems, denoted Hi (z), are in which the natural exponential decay of an impulse re-
normally designed as first- and second-order filters having sponse wants to move the output close to zero, but this
only real coefficients. These unique structures are called can never be achieved because of quantizing. For ex-
architectures. Other commonly found architectures are the ample, if h(n) = a n , then for a = −1/2, h(n) = 1, −1/2,
Direct II, canonic, ladder, Gray–Markel, wave, and con- 1/4, −1/8, 1/16, −1/32, . . . . However, if a signed 4-bit
tinued fraction, to name but a few. data word with format ±x x x is used, the response
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
FIGURE 10 Design procedures for Butterworth, Chebyshev, and elliptic IIR filters.
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
is y4 (n) = 7/8, −1/2, 1/4, −1/8, 1/8, . . . , (−1)n /8, . . . . tipliers must be used to implement the filter’s coefficient.
This low-amplitude oscillation of ±1/8, for n = 4, is Fixed (nonadaptive) filters, in contrast, often store their
called zero input limit cycling, and it can result in an- coefficients in read-only memory (ROM) and use special
noying clicks in speech or telephone applications. Some wiring techniques to replace expensive general-purpose
filter architectures have been developed that trade off addi- multipliers with special-purpose ones.
tional hardware for suppressed limit cycle behavior. Over-
flow limit cycling can occur whenever a signal goes into a
register overflow condition, which occurs when the signal IX. IMPLEMENTATION
exceeds the admissible dynamic range of the numbering
system. For example, adding 1 to an n-bit maximal bi- Digital processing filters and algorithms are computa-
nary value [11 · · · 1] = 2n − 1 (decimal) would send the tionally intensive. Arithmetic is performed in a method
sum back to zero. Such a rapid and unexpected change that is consistent with chosen number systems. In gen-
in a variable’s value may cause bizarre behavior. This ef- eral, three types of numbering systems are used in DSP
fect can be reduced by using saturating arithmetic, which applications:
clamps a variable to a maximum (or minimum) admissible
value. 1. Weighted (e.g., decimal).
Two-dimensional digital FIR and IIR filters can also 2. Unweighted (e.g., residue).
be designed. If their impulse response h(n 1 , n 2 ) can 3. Homomorphic (e.g., logarithmic).
be factored as h(n 1 , n 2 ) = h [1] (n 1 )h [2] (n 2 ), the filter is
said to be separable. Separable filters lend themselves Unsigned weighted numbers take the general form
to one-dimensional analysis. For example, if h(n 1 , n 2 )
is separable, the 2D convolution sum can be expressed
N −1
X= ai r i , 0 ≤ X ≤ r N − 1,
as
i=0
y(m 1 , m 2 ) = h [1] (n 1 )h [2] (n 2 )x(m 1 −n 1 , m 2 −n 2 ) 0 ≤ ai ≤ r − 1,
n1 n2
where r is the radix and ai the ith significant digit,
= h [1] (n 1 ) h [2] (n 2 )x(m 1 −n 1 , m 2 −n 2 ) with a0 and a N −1 being the least and most significant,
n1 n2 respectively. Typical radices found in DSP applications
are r = 2 (binary), r = 8 (octal), r = 10 (decimal), and
= h (n 1 )y [2] (m 1 − n 1 , m 2 ).
[1]
FIGURE 11 Effect of finite arithmetic on the frequency response of digital filters. F.P.: Floating-point response, which
is equivalent to an infinite precision filter.
exponent, should be used. Implementation of floating- For hardware implementations, the primitive computa-
point arithmetic, though considerably more complex than tional unit is the two-input half adder or modulo-2 adder.
that of fixed-point arithmetic, is often required in scien- If this concept is extended to accept a third carry-in input,
tific calculations, where large dynamic ranges of data may then a full adder results. A group of full adder cells can
be encountered. Finally, residue arithmetic (unweighted) be integrated onto a single semiconductor chip to form
is a method of doing carry-free highly parallel arithmetic longer word length adders typically having widths of 4,
that dates back to 500 BC. In logarithmic systems (homo- 8, and 16 bits. If additional hardware is traded off for
morphic) multiplication is performed by adding exponents higher add speeds, then carry lookahead adders or CLAs
and is therefore fast. However, addition in this system is are used. Otherwise, slower ripple-type adders are chosen
slower and more complex than multiplication. because of their simplicity and low cost. Multipliers can
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
TABLE V Survey of IIR Filter Architectures have been developed. It is assumed that the speed at
which the operations of addition, subtraction, multipli-
Round-off
Coefficient hardware cation, or division can be performed is inversely related
Architecture type sensitivity error sensitivity Complexity to wordlength. Many of these methods emulate long-
standing communication modulation techniques advanced
Direct II High High Low in the 1950s. Most digital filters employ some form of
Continued fraction High High Low pulse code modulation (PCM), proposed in 1938 by A. H.
Cascade Low Higher than parallel Low Reeves. Differential PCM began its commercial life in
Parallel Low Lower than parallel Low 1955 with Bell’s T1 inverse DPCM operation, denoted
Wave Low Good Significant (DPCM)−1 . If the DPCM is modeled as a differential
Gray–Markel Low Moderate Moderate process, reconstruction would require discrete integra-
(ladder)
tion of the form x(n) = y(n) + y(n − 1). Explicitly, the
FIR–DPCM convolution sum can be expressed as
also be classified in terms of their speed–complexity met- [PCM] y(n) = ai x(n − i):
rics. The simplest, but slowest, is the shift-add multiplier,
which simply emulates the manual multiplication meth- [DPCM] y(n) − y(n − 1) = ai [x(n) − x(n − 1)],
ods learned in elementary school. Fast but more complex
units can be designed by using either a multibit technique which can be simplified to read y(n) = ai x(n). Un-
called Booth’s algorithm or a specially modified full adder der the proper set of conditions, the incremental quanti-
called a carry save unit. Examples of some commercially ties x( j) and y( j) can be amplitude compressed to
available multipliers are summarized in Table VI. n c bits for n c ≤ n. For example, if n = 16 and n c = 8, a
These relatively short word length add and multiply filter of conventional architecture would require 16 × 16
units are conveniently packaged, low in cost, and read- multipliers, compared to 8 × 8 for the DPCM. However, to
ily available. Longer word length fixed-point units may ensure that x does not exceed its allocated budget of n c
be designed by interconnecting the smaller units into ar- bits, the sampled values of x( j) and x( j − 1) must be sim-
rays. Floating-point units can be similarly designed with ilar. If they are not, an error-inducing phenomenon called
separate mantissa and exponent subsystems plus logic to slope overload will occur. This means that the sample rate
interconnect them. ts must often be increased to a value much greater than
Dedicated hardware arithmetic units are typically ex- the original ts to ensure that a sample x( j) has insufficient
pensive and complex. Slower but more flexible alterna- time to change to a distinctly different value x( j + 1). This
tives are micro-, mini-, or maxicomputers. Besides pro- is called oversampling. For example, an audio filter hav-
viding arithmetic support, programmable computers are ing a Nyquist frequency of 25 kHz, with a 16-bit PCM
capable of performing operations such as control, storage, architecture, may prove to be an inferior design based on
and logic. Where speed is a secondary issue, they have an 8-bit microprocessor being clocked at 100 kHz.
become very popular cost-effective DSP design media. A very popular form of DPCM is delta modulation
To overcome the latency problem of wide-wordlength (DM), which uses a one-bit {+1, −1} DPCM code. Delta
fixed-point multipliers, various data compression schemes modulation can be traced back to a 1946 French patent.
Speed Power
Vendora Size Pins (nsec) (W) Codingb Algorithm
The DM systems are generally inexpensive but may suffer The function φ is defined in terms of the known {a}
from severe slope overflow errors. As a result, high over- and {b} plus the L + 1 and M binary-valued elements
sample rates must be used. These systems also suffer from x(k, s) and y(k, s), respectively. These L + M + 1 binary
a phenomenon known as idle noise, which is caused by values can be concatenated into a single L + M + 1-bit
asymmetry in the two current source encoders (i.e., +1 word and used as an address to a ROM that contains the
and −1) usually found in DM systems and often results precomputed values of φ for all possible 2 L+M+1 ad-
in an audible disturbance. dresses. The sequence of table lookups of φ are then sim-
An adaptive DPCM (ADPCM) policy is sometimes ply scaled by 2k (binary shift) and sent to an adder for
used to counter the slope overload problem found in accumulation. Since L + M + 1 multiply and L + M − 1
DPCM systems. In an ADPCM system the quantization add cycles have been replaced by n fast table lookups
step size Q is adjusted in real time to reduce the occur- and (n − 1) shift-adds, the distributed filter can often run
rence of slope overload. As a result, the ADPCM system faster. However, it is nonprogrammable, and to change
oversample rates are typically lower than those found in the filter a new memory table lookup unit (i.e., φ) would
DPCM configurations. However, the price paid for those have to be programmed and installed. For example, if
added features is greater system complexity and cost. y(n) = 4x(n) + 3x(n − 1) + y(n − 1), x(n) = x(n − 1) =
Beginning in the mid-1970s, high-speed semiconductor 1 → 0001 y(n − 1) = 2 → 0010, n = 3, then y(n) = 9, as
memory-intensive filters became an alternative to the more shown in the insert at the bottom of this page.
traditional multiplier-intensive designs. This development Whether dedicated or programmable arithmetic logic
was motivated by the fact that fixed-point multiplication units (ALUs) are used, a hardware DSP system would also
can be a rather time-consuming operation compared to consist of memory and peripheral devices. Memory can
other digital computer operations. Further, many digital be broadly classified as read/write random access memory
filters are specified in terms of a set of fixed coefficients. (RAM) or read-only memory (ROM); ROM is often used
As a result, scaling rather than general multiplication is re- to store control programs and fixed coefficients, and RAM
quired. The distinction is that multiplication involves two to store data and variables. Most digital filters also require
variables, while scaling involves one. By combining these some sort of peripheral data acquisition support. Analog-
operations, a DSP unit known as the distributed arithmetic to-digital (ADC) and digital-to-analog (DAC) conversion
filter was developed. This filter is also referred to by the can be quantified in terms of
more confusing title of a bit-slice digital filter. The dis-
tributed filter concept is used to realize a fixed-coefficient 1. Accuracy: absolute accuracy of conversion relative to
linear shift-invariant filter of the form a standard voltage or current.
L
M 2. Aperture uncertainty: uncertainty in sampling rate.
y(n) = a(i)x(n − i) + b( j)y(n − j), 3. Bandwidth: maximum small-signal 3-dB frequency.
i=0 j=1 4. Codes: 2’s complement, 1’s complement,
where the {a} and {b} are known and L + M + 1 gen- sign-magnitude, etc.
eral multiplications are required of traditional designs. 5. Common mode rejection: measure of ability to
If the variables suppress unwanted noise at input.
are given an N -bit binary representa-
tion q(t) = kN=−10 q(k, t)2k , k = 0, 1, . . . , n − 1, where 6. Conversion time: maximum conversion rate. If a
q(k, t) denotes the kth bit of q at time t, then separate sample-and-hold amplifier is used, the
conversion rate is the maximum conversion rate of
L
N −1
the composite system.
y(n) = a(i)x(k, n − i)2k 7. Full scale: dynamic range in volts.
i=0 k=0
8. Glitch: undesirable noise spikes found in A/D output.
M
N −1
9. Linearity: linearity across dynamic range.
+ b( j)y(k, n − j)2k . 10. Offset drift: worst-case variation due to parametric
j=1 k=0
changes.
Distributed arithmetic owes its name to the fact that the 11. Precision: repeatability of measurement.
order of the above addition is redistributed as 12. Quantization error: ±(1/2) least significant bit
N −1 (LSB).
y(n) = 2k φ(n; k), 13. Resolution: 2−n .
k=0
L
M Analog-to-digital and digital-to-analog converters are
φ(n; k) = a(i)x(k, n − i) + b( j)y(k, n − j) . also classified in terms of word length and speed as sum-
i=0 j=1 marized in Table VII.
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
A useful variation of the standard ADC is the data com- and decreasing costs, can provide trend toward developed
pression or robust converter. With such a device, data in single-chip dedicated DSP units. Modern semiconductor
a large dynamic range can be compressed into a smaller technology is being used to implement high-performance
range. A commonly used data compression routine is the real-time dedicated systems for special purposes. Many
mulaw compander, given by of these applications are military in nature. Nonmilitary
applications, such as computer vision, speech processing,
y(x) = V log(1 + µx/V )/ log(1 + µ). and pattern recognition, have similar performance require-
Note that for µ = 0 the converter is linear; the value ments. Such systems are being developed that make use of
µ = 255 has been found useful in telephone applications. pipelined or distributed architectures to increase the speed
Digital-to-analog converters (DACs) are of major im- and capacity of a DSP design. Data flow and systolic struc-
portance as well. Some DAC units have a three-wire tures are also being developed for very high speed appli-
configuration that corrects for a difference between the cation; they typically trade additional complexity and data
ground potential of the DAC and the system it is driving. paths for speed and the simplicity of asynchronous control.
DACs can be extremely useful in supporting test, cali- For those designing high-volume telecommunications,
bration, and signal synthesis tasks. The monolithic DAC primitive speech synthesis, and low-order digital filters,
units are basically low in cost and simple to interface. a plethora of DSP chips are available. Many of these are
Some standard applications of DACs are in data display, hybrid (both analog and digital on a common silicon foun-
signal generation, and servo-motor control. dation), charge-coupled device (CCD), or switched capac-
Software and computer programs are also important itance devices. Other semicustom strategies, such as gate
tools in the DSP design process. A number of commer- arrays, standard cell, or master cell methods, can be used
cially available DSP system design and analysis packages to fill in the gaps between the custom and generic devices.
are marketed to do DFT and digital filter tasks. In devel- One of the technology-driven developments in DSP is
oping custom code for a DSP application, software engi- the DSP chip. Perhaps the most salient characteristic of
neering techniques are used that maximize cache memory a DSP chip is its multiplier. Since multipliers normally
hits (if cache is present) and reduce the execution latency consume a large amount of chip real estate, their de-
(delay) of a code. sign has been constantly refined and redefined. The early
AMI2811 had a slow 12 × 12 = 16 multiplier, while a later
TMS320 had a 16 × 16 = 32 200-nsec multiplier that oc-
X. HARDWARE cupied about 40% more area. These chips include some
amount of on-broad RAM for data storage and ROM for
The technical revolution of the mid 1970s produced the fixed coefficient storage. Since the cost of these chips is
tools needed to affect many high-volume real-time DSP very low (tens of dollars), they have opened many new
markets. These include medium, large, and very large in- areas for DSP penetration. Many factors, such as speed,
tegrated circuits (MSI, LSI, VLSI) plus DSP metal oxide cost, performance, software support and programming
semi-conductor (MOS) and bipolar devices. The ubiqui- language, debugging process. Today, a vast array of fixed
tous microprocessors, with their increasing capabilities and floating-point DSP chips are commercially available.
Ultra high speed >3 MHz Ultra high speed <100 nsec
Very high speed 300 kHz ← 3 MHz Very high speed 100 → 1 µsec
High speed 30 kHz ← 300 kHz High speed 1 → 10 µsec
Intermediate 3 kHz ← 30 kHz Intermediate 10 → 100 µsec
Low speed <3 kHz Low speed >100 µsec
P1: GQT/GLT P2: GPJ Final Pages
Encyclopedia of Physical Science and Technology EN014D-687 July 28, 2001 20:49
Sosme maintain the low-cost uniprocessor architecture Since digital shift registers can be used effectively to
of the 1980s, others adopt more modern multiprocessor design delay lines, the physical design of correlators and
architecture. delay estimators is a manageable task. Because the de-
Although the DSP chip concept is exciting, it can- layed and manipulated data remain in a digital format,
not address all the important DSP problems. The DSP they can be directly passed to a digital processor for high-
chips are still too slow and imprecise to do some real- level analysis. The DSP hardware can also be used to build
time processing. Special processors are often required beam-forming and beam-steering FIR-like filters for sonar
to do fast “number crunching” DSP operations over a and radar applications.
large data base. Sometimes fast arithmetic chips and ASIC
cores can be integrated into the design to achieve the
B. Telecommunications and Audio
needed throughput. In more demanding applications such
as speech, geophysics, medicine, and pattern recognition, The telecommunications industry relied almost exclu-
array processors may be needed. Since most DSP opera- sively on analog devices prior to the 1960s. Systems were
tions are arithmetic intensive, the 10- to 1000-fold increase designed with amplifiers, capacitors, and inductors. Be-
in speed offered by an array processor can easily justify the ginning in the early 1960s, the industry began to convert to
substantial cost of these units. Finally, newer generations a digital format. Digital telecommunication systems have
of DSP hardware designs are concentrating on parallel several advantages in terms of speed and bandwidth over
or pipelined structures. These architectures are tuned to their analog ancestors. Early voice grade telecommunica-
specific applications to achieve maximum throughput. tion systems used pulse code modulation. In a PCM code
each bit represents a fixed amount of signal voltage. If the
least significant bit or zeroth bit has a weight of V volts,
XI. APPLICATIONS
the nth bit has a weight of 2n V volts. To achieve recog-
nizable voice quality speech, sampling at rates of 8000 or
A. Radar and Sonar Signal Processing
more samples per second over a 13-bit or greater dynamic
Radar is an important technology that affects all phases of range must often be used. To reduce the dynamic range re-
civilian and military aviation as well as space exploration. quirement, a logarithmic µ − 255 law data compander can
Whether the radars are pulsed or continuous wave, DSP be used to compress speech intelligence into an 8-bit word.
has become an important design and analysis tool. Be- Because of the high-volume need for voice-grade telecom-
There is some commonality between the principles of sonar and radar. Sonar is based on acoustic waves and appears in both active and passive forms. As with radar, DSP tools can be used to design high-performance correlators and matched filters. Time domain digital filters or FFT methods can be used to implement sonar signal processes. Often the data are collected from an array of sensors, so a plurality of processing units would be integrated into the design. Since digital shift registers can be used effectively to design delay lines, the physical design of correlators and delay estimators is a manageable task. Because the delayed and manipulated data remain in a digital format, they can be directly passed to a digital processor for high-level analysis. The DSP hardware can also be used to build beam-forming and beam-steering FIR-like filters for sonar and radar applications.
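As one illustration of the delay-line idea, the sketch below forms a delay-and-sum beam from a uniform line array: each sensor channel is shifted by a whole number of samples corresponding to a chosen look direction, and the channels are then summed. The element spacing, sample rate, sound speed, and steering angle are assumed values for the example only.

```python
import numpy as np

def delay_and_sum(sensors, spacing_m, fs_hz, c_mps, steer_deg):
    """Delay-and-sum beamformer for a uniform line array.
    sensors has shape (num_sensors, num_samples)."""
    num_sensors, _ = sensors.shape
    # Extra plane-wave travel time to each element for the chosen look direction.
    delays_s = (np.arange(num_sensors) * spacing_m
                * np.sin(np.radians(steer_deg)) / c_mps)
    delays_n = np.round(delays_s * fs_hz).astype(int)  # whole-sample "shift registers"
    beam = np.zeros(sensors.shape[1])
    for ch, d in enumerate(delays_n):
        # np.roll wraps at the ends; acceptable for a short illustrative sketch.
        beam += np.roll(sensors[ch], -d)
    return beam / num_sensors

# Assumed 8-element array, 0.5-m spacing, 10-kHz sampling, 1500-m/s sound speed.
rng = np.random.default_rng(1)
data = rng.normal(0.0, 0.1, (8, 4096))
beam = delay_and_sum(data, spacing_m=0.5, fs_hz=10_000, c_mps=1500.0, steer_deg=20.0)
print("beam output samples:", beam.shape[0])
```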
B. Telecommunications and Audio

The telecommunications industry relied almost exclusively on analog devices prior to the 1960s. Systems were designed with amplifiers, capacitors, and inductors. Beginning in the early 1960s, the industry began to convert to a digital format. Digital telecommunication systems have several advantages in terms of speed and bandwidth over their analog ancestors. Early voice-grade telecommunication systems used pulse code modulation. In a PCM code each bit represents a fixed amount of signal voltage. If the least significant bit or zeroth bit has a weight of V volts, the nth bit has a weight of 2^n V volts. To achieve recognizable voice-quality speech, sampling at rates of 8000 or more samples per second over a 13-bit or greater dynamic range must often be used. To reduce the dynamic range requirement, a logarithmic µ-255 law data compander can be used to compress speech intelligence into an 8-bit word. Because of the high-volume need for voice-grade telecommunications equipment, the commonly used companding ADCs are often found integrated onto low-cost, highly reliable semiconductor chips. Other DSP devices have found their way into the telecommunications industry in the form of signal generators, decoders, and voice synthesizers.
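The µ-255 law mentioned above follows the standard companding curve y = sign(x) ln(1 + µ|x|)/ln(1 + µ) with µ = 255 for inputs normalized to |x| ≤ 1. The sketch below applies that curve and quantizes the result to an 8-bit word; it is a simplified continuous form, not the segmented encoder used in actual telephone codecs.

```python
import numpy as np

MU = 255.0

def mu_compress(x, mu=MU):
    """Continuous mu-law compression for samples normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=MU):
    """Inverse of mu_compress."""
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

# Compress wide-dynamic-range samples into 8-bit code words and back.
x = np.linspace(-1.0, 1.0, 9)                               # toy sample values
codes = np.round((mu_compress(x) + 1.0) * 127.5).astype(np.uint8)
x_hat = mu_expand(codes / 127.5 - 1.0)
print("worst-case reconstruction error:", float(np.max(np.abs(x - x_hat))))
```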
By the mid-1970s more than 65 million miles of digital telecommunications circuits were in operation. Because of the high volume of components needed in the telecommunications industry, a number of custom integrated-circuit chips have been developed for use as multiplexers, digital filters, and processors known as echo cancelers. The echo canceler removes the imbalance along a telephone circuit, such as a satellite link, that can cause the phenomenon known as double-talk (an overlapped incoming and outgoing signal or echo). It uses an adjustable or adaptive FIR filter to reconstruct a copy of the unwanted reflected signal, which it subtracts from the outgoing signal before transmission.
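The adaptive FIR filter in such an echo canceler is commonly adjusted with a least-mean-squares (LMS) style update, although the passage above does not commit to a particular algorithm. The sketch below assumes an LMS update, a 32-tap filter, and a made-up echo path purely for illustration.

```python
import numpy as np

def lms_echo_canceler(far_end, microphone, num_taps=32, mu=0.01):
    """Adapt an FIR filter so its output mimics the echo of far_end seen in
    the microphone signal; return the echo-reduced signal and the taps."""
    w = np.zeros(num_taps)                  # adaptive FIR coefficients
    buf = np.zeros(num_taps)                # delay line of recent far-end samples
    out = np.zeros_like(microphone)
    for n in range(len(far_end)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        e = microphone[n] - w @ buf         # residual after cancellation
        w += mu * e * buf                   # LMS coefficient update
        out[n] = e
    return out, w

# Hypothetical echo path: the far-end talker delayed by 10 samples and halved.
rng = np.random.default_rng(2)
far = rng.normal(0.0, 1.0, 5000)
echo = 0.5 * np.concatenate([np.zeros(10), far[:-10]])
near = echo + 0.01 * rng.normal(0.0, 1.0, 5000)
clean, _ = lms_echo_canceler(far, near)
print("residual power after adaptation:", float(np.mean(clean[-1000:] ** 2)))
```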
Historically, the principal concern of the pre-1970 audio engineer was sound reproduction. This field has now expanded to include digital recording and reproduction, plus digital information storage and transmission. Digital audio is somewhat distinct from digital speech processing in that speech is a telecommunications problem in which the applied criterion of success is intelligibility. In digital audio,
the target is fidelity. In many cases, DSP technology has exceeded the fidelity capability of analog recording and reproduction equipment and is virtually noise-free. Also, digital audio designs have eliminated some of the electromechanical components that deteriorate with age: the phonograph stylus and records, magnetic tape and heads, and so forth. Digital equipment has also been introduced into the recording studio, since it needs much less manual attention and adjustment than analog units and is more versatile.
Audio signal processing begins with an acoustic wave, which is manipulated and possibly archived. It is returned to an acoustic form for listening. As a result, ADCs and DACs are used at the input–output level. A 16-bit ADC conversion of an analog signal, when played back through a DAC, introduces no detectable noise (distortion) to a human listener. However, the longer the required wordlength, the more expensive the DSP system and storage medium. As a result, data compression techniques are often applied to the digitized database. These include PCM encoding and delta modulation techniques. Between the 1-bit DM protocol and the n-bit PCM is the hybrid differential pulse code modulation, or DPCM. Here the difference between two successive samples is coded as a low-order PCM word and reconstructed by a DM-like incremental addition/subtraction rule. For simple speech-type signals, these methods can achieve a good degree of data compression. For complex music, their use is more difficult.
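A minimal sketch of that DPCM idea follows: the encoder quantizes the difference between each new sample and the previously reconstructed sample into a small number of levels, and the decoder accumulates the quantized increments in DM-like fashion. The 3-bit word size and step size are arbitrary choices for the example.

```python
import numpy as np

def dpcm_encode(x, step=0.05, bits=3):
    """Quantize sample-to-sample differences to a small PCM word."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    codes = np.zeros(len(x), dtype=int)
    prev = 0.0
    for n, sample in enumerate(x):
        q = int(np.clip(np.round((sample - prev) / step), lo, hi))
        codes[n] = q
        prev += q * step                   # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.05):
    """Accumulate the coded increments, DM-style."""
    return np.cumsum(codes) * step

x = np.sin(2 * np.pi * 0.01 * np.arange(200))    # slowly varying test signal
x_hat = dpcm_decode(dpcm_encode(x))
print("max reconstruction error:", float(np.max(np.abs(x - x_hat))))
```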
As analog tapes are played, the quality of the recording degrades. Therefore, for studio as well as listener use, storing audio information on digital tape has advantages. Digital storage of audio information may also include some redundant information for use in error correction and detection.
C. Speech Processing

A milestone in speech processing was the development of the vocoder, or voice coder, by Homer Dudley in 1939. His basic speech synthesis model is still in use today. In 1946 the advent of the sound spectrograph for short-term speech spectrum display marked the beginning of extensive use of spectral analysis for speech processing. This field has been growing rapidly ever since. Although they are not yet able to perform as well as humans, speech processing algorithms and machines are becoming commercially available.
Human speech is a rather short-term phenomenon that changes dynamically. As a result, its spectral signature is often studied with short-time Fourier transforms. Various windowing and data segmentation methods have been developed to serve this need. Modified transforms such as the Wigner, cepstrum, and smoothed cepstrum can also be used. If X(k) is the DFT of x(n), then the complex cepstrum is given by x̂(n) = IDFT[X̂(k)], where X̂(k) = log X(k) = log|X(k)| + j arg X(k), j = √−1, and arg X(k) = arctan[Im X(k)/Re X(k)].
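Following that definition, the sketch below computes a cepstrum with the FFT. For simplicity it uses the real cepstrum, the inverse DFT of log|X(k)| only, which sidesteps the phase-unwrapping step a full complex cepstrum requires; the synthetic voiced frame is an arbitrary test signal.

```python
import numpy as np

def real_cepstrum(frame):
    """Inverse DFT of the log magnitude spectrum of one windowed frame."""
    spectrum = np.fft.fft(frame * np.hamming(len(frame)))
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

# Toy voiced frame: an 80-sample pulse train colored by a one-pole filter.
n = np.arange(512)
excitation = (n % 80 == 0).astype(float)
frame = np.zeros_like(excitation)
frame[0] = excitation[0]
for i in range(1, len(frame)):
    frame[i] = excitation[i] + 0.95 * frame[i - 1]   # crude vocal-tract coloring
ceps = real_cepstrum(frame)
pitch_lag = 40 + int(np.argmax(ceps[40:200]))
print("cepstral peak (pitch period estimate):", pitch_lag)   # should lie near 80
```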
The design of integrated voice/data networks is predicated on minimizing the channel bandwidth (number of bits per second) necessary to transmit voice, while maintaining high speech quality and intelligibility. Typically, the transmission of speech with pulse code modulation requires about 64,000 bits/sec. Specialized adaptive predictive coding (APC) and adaptive transform coding (ATC) techniques can reduce this to about 16,000 bits/sec with no loss in speech quality. Coding techniques below 4000 bits/sec are based on parametric speech synthesis models such as linear predictive coding (LPC) vocoders and channel vocoders. A further reduction in bit rate is possible with vector quantization rather than scalar quantization techniques. In the latter, each parameter to be transmitted is quantized independently of all other parameters. In the former, a number of parameters are quantized together as a vector in a multidimensional space. Pattern recognition methods are then used to code and decode the intelligence.
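To illustrate the distinction, the short sketch below quantizes a block of parameters both ways: scalar quantization rounds each parameter to its own grid, while vector quantization replaces the whole parameter vector with the nearest entry of a small codebook, so only that entry's index must be transmitted. The random codebook is purely illustrative; practical coders train the codebook on speech data.

```python
import numpy as np

def scalar_quantize(vec, step=0.25):
    """Each parameter is rounded independently to its own uniform grid."""
    return np.round(vec / step) * step

def vector_quantize(vec, codebook):
    """Replace the whole vector by its nearest codebook entry; only the
    index of that entry needs to be transmitted."""
    index = int(np.argmin(np.sum((codebook - vec) ** 2, axis=1)))
    return index, codebook[index]

rng = np.random.default_rng(3)
params = rng.normal(0.0, 1.0, 8)            # e.g., one frame of coder parameters
codebook = rng.normal(0.0, 1.0, (64, 8))    # 64 entries -> a 6-bit code word
idx, approx = vector_quantize(params, codebook)
print("scalar version:", scalar_quantize(params))
print("vector version: index", idx, "error", float(np.linalg.norm(params - approx)))
```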
Speech processing by machine spans two distinct tasks. The first, speech synthesis, is the generation of speech by computer and may take the form of stored-parameter algorithms (e.g., the Speak & Spell educational toy using LPC parameters) or waveform coding; text-to-speech synthesis further requires that a computer phonetically “read” a scanned or stored text. The second is speech recognition, which refers to the ability of a machine to recognize or understand human speech. Most commercial and research speech recognition systems today use a pattern-matching approach. In terms of current technology, speaker-dependent continuous-speech recognition rates in the high 90th percentile can be achieved only over relatively small vocabularies (tens of words). Large-vocabulary, continuous-speech, and speaker-independent operation lies somewhere in the future. Finally, speaker recognition systems attempt to recognize people from their voice prints. Present-day techniques can verify a speaker with high accuracy only under controlled conditions.

D. Image Processing

Image processing attempts to extract or modify information found in an image-dependent signal space. The military is a long-time user and promoter of image processing theory, practice, and hardware. In medicine, many diagnostic tools rely heavily on image processors or provide fast analysis of a graphical database. Other image processing applications are in the consumer electronics industry, law enforcement, geophysics, weather prediction,
and the compression, transmission, and reconstruction of television-like signals. In the growing field of robotics, computer vision will play an important role.
Image processing can be partitioned into four subordinate areas known as image restoration, enhancement, coding, and analysis. If one assumes that an image has been degraded due to transmission noise, poor optics, geometric distortion, movement, and so forth, restoration methods are used to repair as much of the damage as possible. Detectors can convert the optical data found in the image plane to an electronic format, which is then digitized. The point spread function models the optical distortion in the image plane and can be represented as a transfer function. The point spread function shows how much of the information at an image point “spreads” into neighboring areas in the image plane. Distortion can also be introduced by the recording device, and noise can be added in communication channels.
A basic image restoration task is deblurring. An inverting filter is sought to reduce the effect of noise and produce a facsimile of the source. A difficult nonblurring restoration problem is that of deconvolution, which attempts to recover an image from a set of projections. Often, as in the case of medical X-ray, electromagnetic interference (EMI), and similar scanners, the signal represents the amount of energy detected in a lower-dimensional image plane. Reconstructing the three- or two-dimensional image is analogous to sketching a human face from its shadow. The energy in a two- or one-dimensional plane obtained by illuminating an absorbing three- or two-dimensional object is a projection that can be expressed in terms of its spectral signature. To reconstruct the image with some degree of definition, multiple projections are required. The projection slice theorem states that by collecting a group of one-dimensional projections at various angles a two-dimensional image can be constructed, and from the slices of two-dimensional images a three-dimensional image can be formed, and so forth. Fourier and other types of fast transforms are generally used to do the data analysis because of the enormous amount of data that must be assimilated to reconstruct an image from its projections.
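The inverting filter mentioned above is often applied in the frequency domain by dividing the blurred image's spectrum by the transfer function of the point spread function, with a small damping constant so that noise is not amplified without bound (a simplified, Wiener-like form). The Gaussian blur, toy image, and constant below are arbitrary choices for the sketch.

```python
import numpy as np

def deblur(blurred, psf, eps=1e-2):
    """Regularized inverse filtering: divide out the blur's transfer function."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = B * np.conj(H) / (np.abs(H) ** 2 + eps)   # damped pseudo-inverse
    return np.real(np.fft.ifft2(X))

# Toy image blurred by a small Gaussian point spread function, plus noise.
rng = np.random.default_rng(4)
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
restored = deblur(blurred + 0.001 * rng.normal(size=image.shape), psf)
print("mean squared restoration error:", float(np.mean((restored - image) ** 2)))
```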
Enhancement is used to improve the ability to interpret an image database. This is often a subjective measure and relies on human perception. It differs from restoration in that restoration seeks to rebuild the original image from its distorted image, while enhancement seeks to improve on the original. Often, due to transmission bandwidth limitations, an image may lose some of its definition. When assumptions must be made about the shape of an object, enhancement methods return some of the original picture quality.
Image coding is used to compress the representation of an image into as few bits as possible while maintaining a specified degree of image intelligibility. For example, a simple 1024 × 1024 image, resolved to 16 bits, requires a 2^24-bit data field. Such figures can overtax a practical communication channel or cause a memory buffer to overflow when large or multiple images are being processed. Therefore, much attention has been given to image data compression. Fortunately, a typical image contains a large amount of redundant or correlated information. Redundant data can be eliminated by using one of a number of techniques that attempt to make trade-offs between processing speed, memory requirements, and compression capabilities. For example, if most of an image remains unchanged from frame to frame, statistical methods suggest that the data rate can be reduced toward an average of 1 bit/pixel. Furthermore, there is usually a considerable correlation between adjacent pixels (or picture elements), which can also be compressed out of the database. With a DPCM system, a signal is replaced by an estimate of its incremental change. Slowly varying signals are therefore compressed into a small dynamic range. By using this technique, an 8-bit encoded image can often be replaced with a 3-bit DPCM equivalent without a subjective loss of quality. However, if the image consists of sharp high-contrast edges, the DPCM will perform poorly.
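As a quick check of that figure, 1024 × 1024 pixels at 16 bits each is 2^24 bits, roughly 2 megabytes. The sketch below also applies the DPCM idea along image rows, coding each pixel as a 3-bit quantized difference from its left neighbor; the step size and predictor seed are arbitrary choices.

```python
import numpy as np

total_bits = 1024 * 1024 * 16
print(total_bits, "bits =", total_bits // 8, "bytes")   # 2**24 bits, about 2 MB

def dpcm_rows(image, step=8, bits=3):
    """Code each pixel as a 3-bit quantized difference from its left neighbor."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    recon = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        prev = 128.0                                    # assumed predictor seed
        for c in range(image.shape[1]):
            q = np.clip(np.round((image[r, c] - prev) / step), lo, hi)
            prev += q * step                            # decoder's reconstruction
            recon[r, c] = prev
    return recon

rng = np.random.default_rng(5)
smooth = np.cumsum(rng.normal(0.0, 2.0, (64, 64)), axis=1) + 128.0  # gently varying rows
recon = dpcm_rows(smooth)
print("mean absolute DPCM error:", float(np.mean(np.abs(recon - smooth))))
```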
More mathematically intense transform-domain image compression methods have been developed that work more directly on the correlation question. However, they can be computationally intense and may degrade picture quality if not correctly applied.
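Transform-domain coders of this kind are commonly built on a blockwise discrete cosine transform (DCT): the transform concentrates a correlated block's energy into a few coefficients, and the remainder can be discarded or coarsely quantized. The 8 × 8 block, the orthonormal DCT-II construction, and the keep-the-largest-coefficients rule below are illustrative choices rather than a description of any particular standard.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def compress_block(block, keep=10):
    """2-D DCT, keep only the largest coefficients, then invert."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    threshold = np.sort(np.abs(coeffs), axis=None)[-keep]
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return C.T @ coeffs @ C

rng = np.random.default_rng(6)
ramp = np.outer(np.arange(8.0), np.ones(8)) * 8.0      # smooth, highly correlated block
block = ramp + rng.normal(0.0, 1.0, (8, 8))
approx = compress_block(block)
print("kept 10 of 64 coefficients; mean absolute error:",
      float(np.mean(np.abs(approx - block))))
```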
SEE ALSO THE FOLLOWING ARTICLES

COMPUTER ALGORITHMS • DIGITAL FILTERS • DIGITAL SPEECH PROCESSING • IMAGE PROCESSING • NUMERICAL ANALYSIS • RADAR • SIGNAL PROCESSING, ANALOG • SIGNAL PROCESSING, GENERAL • VISION SENSORS FOR ROBOTS • Z-TRANSFORM