Equalization
Delay spread causes intersymbol interference (ISI), which can
cause an irreducible error floor when the symbol time is on the
same order as the channel delay spread.
Equalization refers to any signal processing technique used at
the receiver to alleviate the ISI problem caused by delay
spread.
Mitigation of ISI is required when the symbol time Ts is on the
order of the channel’s rms delay spread.
Equalizer design must balance ISI mitigation with noise
enhancement, since both the signal and the noise pass through
the equalizer, which can increase the noise power.
Equalization…
Nonlinear equalizers suffer less from noise enhancement than linear
equalizers but typically entail higher complexity.
Equalizers require an estimate of the channel impulse or frequency
response to mitigate the resulting ISI.
Since the wireless channel varies over time, the equalizer must:
first learn the frequency or impulse response of the channel (training), and
then update its estimate of the channel response as the channel changes
(tracking).
The process of equalizer training and tracking is often referred to as
adaptive equalization, since the equalizer adapts to the changing
channel.
Equalizer training and tracking can be quite difficult if the channel is
changing rapidly.
Equalizer Noise Enhancement
The equalizer must balance ISI mitigation against noise enhancement, so that
the noise power in the received signal is not unduly amplified.
Consider a simple analog equalizer at the receiver.
A signal s(t) is passed through a channel with frequency response H(f).
At the receiver front end, white Gaussian noise n(t) is added to the signal,
so the signal at the input to the equalizer is
Y(f) = S(f)H(f) + N(f),
Equalizer Noise Enhancement…
where N(f) is white noise with power spectral density (PSD) N0/2.
If the bandwidth of s(t) is B, then the noise power within the signal
bandwidth of interest is N0B.
In order to completely remove the ISI, an analog equalizer defined by
Heq(f) = 1/H(f)
is introduced in the receiver.
The received signal Y(f), after passing through this equalizer, becomes
Y(f)Heq(f) = S(f) + N(f)/H(f) = S(f) + N'(f),
where N'(f) is colored Gaussian noise with power spectral density N0/(2|H(f)|^2).
Equalizer Noise Enhancement…
Now all ISI is removed, but noise enhancement arises under
certain conditions.
If H(f) has a spectral null (H(f0) = 0 for some f0) at any
frequency within the bandwidth of s(t), then the power of the
noise N’(f) is infinite.
Even without a spectral null, if some frequencies in H(f) are
greatly attenuated then the equalizer Heq(f ) = 1/H(f) will
greatly enhance the noise power at those frequencies.
In this case, even though the ISI effects are removed, the
equalized system will perform poorly because of its greatly
reduced SNR.
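To make the noise-enhancement effect concrete, the short sketch below (a minimal illustration, not from the slides; the two-tap channel h = [1, −0.95] and the frequency grid are assumed purely for demonstration) evaluates the noise-power gain 1/|H(f)|^2 of the inverse equalizer Heq(f) = 1/H(f) and shows it blowing up near the channel's near-null at f = 0.

```python
import numpy as np

# Assumed toy channel: h[n] = [1, -0.95] has a near-null in H(f) at f = 0, so the
# inverse (ISI-removing) equalizer Heq(f) = 1/H(f) strongly amplifies the noise
# at the frequencies where the channel is attenuated.
h = np.array([1.0, -0.95])
f = np.linspace(0.0, 0.5, 6)                  # normalized frequencies in the signal band
H = h[0] + h[1] * np.exp(-2j * np.pi * f)     # channel frequency response H(f)
noise_gain = 1.0 / np.abs(H) ** 2             # noise-power gain of Heq(f) = 1/H(f)

for fi, Hi, gi in zip(f, H, noise_gain):
    print(f"f = {fi:.2f}   |H(f)|^2 = {abs(Hi)**2:.4f}   noise gain = {gi:8.1f}")
```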
Equalizer Noise Enhancement…
Linear digital equalizers work by approximately inverting the
channel frequency response and thus have the most noise
enhancement.
Nonlinear equalizers do not invert the channel frequency
response, so they tend to suffer much less from noise
enhancement.
Equalizer Types
Equalization techniques fall into two broad categories: linear
and nonlinear.
Linear Techniques:
The simplest to implement and to understand conceptually.
Suffer from more noise enhancement than nonlinear equalizers and
hence are not used in most wireless applications.
Non-Linear Techniques:
Decision-feedback equalization (DFE) is the most common because it
is fairly simple to implement and usually performs well.
On channels with low SNR, the DFE suffers from poor performance. The
optimal equalization technique is maximum likelihood sequence
estimation (MLSE).
Equalizer Types…
Linear and nonlinear equalizers are typically implemented using
a transversal or lattice structure.
The transversal structure is a filter with N −1 delay elements and
N taps featuring tunable complex weights.
The lattice filter uses a more complex recursive structure.
Adaptive equalizers require algorithms for updating the filter
tap coefficients during training and tracking.
These algorithms generally incorporate trade-offs between
complexity, convergence rate, and numerical stability.
Equalizer Types…
Equalizers can also be categorized as symbol-by-symbol (SBS)
or sequence estimators (SEs).
SBS equalizers remove ISI from each symbol and then detect
each symbol individually.
All linear equalizers as well as the DFE are SBS equalizers.
Sequence estimators detect sequences of symbols, so the effect
of ISI is part of the estimation process.
Maximum likelihood sequence estimation is the optimal form of
sequence detection.
Linear Equalizer
We assume a linear equalizer implemented via an N-tap transversal
filter, with N = 2L + 1:
Heq(z) = Σ_{i=−L}^{L} wi z^{−i}.
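As a concrete illustration of this structure, here is a minimal sketch (an assumed example, not from the slides) of a (2L + 1)-tap transversal equalizer whose output at time k is the weighted sum d̂k = Σ_{i=−L}^{L} wi y[k − i] of the current, L future, and L past input samples.

```python
import numpy as np

def transversal_equalizer(y, w, L):
    """Apply a (2L+1)-tap transversal equalizer with complex weights w[-L..L]
    to the received samples y; output[k] = sum_i w[i] * y[k - i]."""
    d_hat = np.zeros(len(y), dtype=complex)
    for k in range(len(y)):
        for i in range(-L, L + 1):
            if 0 <= k - i < len(y):            # ignore samples outside the observation window
                d_hat[k] += w[i + L] * y[k - i]
    return d_hat

# Assumed example values: L = 1 (three taps) and a short received sequence.
w = np.array([0.1 - 0.05j, 1.0, -0.2 + 0.1j])  # taps w_{-1}, w_0, w_1
y = np.array([1.0, -1.0, 1.0, 1.0, -1.0], dtype=complex)
print(transversal_equalizer(y, w, L=1))
```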
Linear Equalizer…
For a given equalizer size N, the equalizer design must specify
(i) the tap weights {wi}, i = −L, . . . , L, for a given channel frequency
response and (ii) the algorithm for updating these tap weights as
the channel varies.
Even though the performance metric in wireless systems is outage
probability, it is difficult to optimize the equalizer coefficients
directly with respect to this criterion.
Hence an indirect optimization that balances ISI mitigation with
the prevention of noise enhancement is employed.
Zero-Forcing (ZF) Equalizers
The samples {yn} input to the equalizer can be represented in terms of
the combined impulse response f(t) = h(t) ∗ g∗m(−t) as
Y(z) = D(z)F(z) + Ng(z),
where Ng(z) is the z-transform of the noise samples at the output of
the matched filter G∗m(1/z∗) and F(z) is the z-transform of the sampled
composite response f[n] = f(nTs).
The zero-forcing equalizer removes all of the ISI introduced in the
composite response f(t).
Zero-Forcing (ZF) Equalizers…
To accomplish this, the equalizer impulse response should be
HZF(z) = 1/F(z).
The power spectrum N(z) of the noise samples at the equalizer
output is then given by
N(z) = N0|Gm(1/z)|^2 / |F(z)|^2.
Zero-Forcing (ZF) Equalizers…
If the channel H(z) is sharply attenuated at any frequency within
the signal bandwidth of interest – as is common on frequency-
selective fading channels – the noise power will be significantly
increased.
This motivates an equalizer design that better balances ISI
mitigation against noise enhancement; one such equalizer is
the MMSE equalizer.
Since linear equalizers are implemented as finite-length transversal
filters, in practice we can only find a set of coefficients {wi} that
best approximates the ideal zero-forcing equalizer.
Problem
Q) Consider a channel with impulse response h(t) = e^{−t/τ} u(t), where u(t) is the unit step.
Find a two-tap ZF equalizer for this channel.
SOLUTION:
Sampled version of h(t):
h[n] = δ[n] + e^{−Ts/τ} δ[n − 1] + e^{−2Ts/τ} δ[n − 2] + · · ·
so that
H(z) = 1 + e^{−Ts/τ} z^{−1} + e^{−2Ts/τ} z^{−2} + e^{−3Ts/τ} z^{−3} + · · · = 1/(1 − e^{−Ts/τ} z^{−1}).
Problem…
Heq(z) = 1/H(z) = 1 − e^{−Ts/τ} z^{−1}.
The two-tap ZF equalizer therefore has tap weight coefficients w0 = 1
and w1 = −e^{−Ts/τ}.
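As a quick numerical check (an illustrative sketch; the ratio Ts/τ = 0.5 is assumed and is not part of the problem statement), convolving the sampled channel with the two-tap equalizer 1 − e^{−Ts/τ} z^{−1} should leave a single impulse, confirming that the ISI is removed.

```python
import numpy as np

a = np.exp(-0.5)               # e^{-Ts/tau}, assuming Ts/tau = 0.5 for illustration
n = np.arange(20)
h = a ** n                     # sampled channel h[n] = e^{-n*Ts/tau}, n >= 0 (truncated)
w = np.array([1.0, -a])        # two-tap ZF equalizer: Heq(z) = 1 - e^{-Ts/tau} z^{-1}

combined = np.convolve(h, w)   # channel followed by equalizer
print(np.round(combined[:6], 6))   # ~ [1, 0, 0, 0, 0, 0]: the ISI is cancelled
# (only a negligible residual remains at the very end, caused by truncating h to 20 samples)
```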
Minimum Mean-Square Error
(MMSE) Equalizers
The goal of MMSE equalizer design is to minimize the average
mean-square error (MSE) between the transmitted symbol dk and its
estimate d̂k at the output of the equalizer;
i.e., the {wi} are chosen to minimize E[|dk − d̂k|^2].
Since the MMSE equalizer is linear, its output is a linear
combination of the input samples y[k]:
d̂k = Σ_{i=−L}^{L} wi y[k − i].
The noise input to the equalizer has a power spectrum N0|Gm(1/z)|^2.
Minimum Mean-Square Error
(MMSE) Equalizers…
Therefore the equalizer response can be expanded into two
components:
a noise-whitening component 1/G∗m(1/z∗), and
an ISI-removal component W(z).
Minimum Mean-Square Error
(MMSE) Equalizers…
The purpose of the noise-whitening filter is to whiten the noise so
that the noise component output from this filter has a constant
power spectrum.
Since the noise input to this filter has power spectrum
N0|Gm(1/z)|^2, the appropriate noise-whitening filter is
1/G∗m(1/z∗).
The noise power spectrum at the output of the noise-whitening
filter is then N0|Gm(1/z)|^2 / |Gm(1/z)|^2 = N0.
We assume that the ISI-removal filter W(z), with input vn, is a linear filter
with N = 2L + 1 taps:
W(z) = Σ_{i=−L}^{L} wi z^{−i}.
Minimum Mean-Square Error
(MMSE) Equalizers…
Our goal is to design the filter coefficients {wi} so as to minimize
E[|dk − d̂k|^2].
Let vT = (v[k+L], v[k+L−1], . . . , v[k−L]) = (vk+L, vk+L−1, . . . , vk−L)
be the row vector of inputs to the filter used to obtain
the filter output d̂k, and define wT = (w−L, . . . , wL) as the row
vector of filter coefficients.
Let J be the mean square error to be minimized.
Minimum Mean-Square Error
(MMSE) Equalizers…
Define Mv = E[vvH] and vd = E[vHdk].
The matrix Mv is an N × N Hermitian matrix, and vd is a length-N
row vector.
Assume E[|dk|^2] = 1. Then the MSE J is
J = wT Mv w∗ − 2 Re{vd w∗} + 1.
We obtain the optimal tap vector w by setting the gradient ∇w J
= 0 and solving for w.
Minimum Mean-Square Error
(MMSE) Equalizers…
Setting this gradient to zero yields wT Mv = vd or, equivalently, that the
optimal tap weights are given by
wT = vd Mv^−1.
Substituting in these optimal tap weights, we obtain the minimum
mean-square error as
Jmin = 1 − vd Mv^−1 vdH.
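In practice the optimal taps are found by solving the linear system wT Mv = vd numerically rather than by forming an explicit inverse. The sketch below (a minimal example; the 3-tap statistics Mv and vd are made up for illustration) solves for w and evaluates Jmin = 1 − vd Mv^−1 vdH.

```python
import numpy as np

# Assumed example statistics for a 3-tap (L = 1) MMSE equalizer.
Mv = np.array([[1.5, 0.4 + 0.1j, 0.1],
               [0.4 - 0.1j, 1.5, 0.4 + 0.1j],
               [0.1, 0.4 - 0.1j, 1.5]])      # Mv = E[v v^H], Hermitian
vd = np.array([0.2, 0.9, 0.2 - 0.1j])        # vd = E[v^H dk], as a row vector

# w^T Mv = vd  <=>  Mv^T w = vd^T, so solve the transposed system for w.
w = np.linalg.solve(Mv.T, vd)
Jmin = 1.0 - np.real(vd @ np.linalg.inv(Mv) @ vd.conj())

print("optimal taps:", np.round(w, 4))
print("minimum MSE :", round(float(Jmin), 4))
```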
For an equalizer of infinite length, vT = (vn+∞, . . . , vn, . . . , vn−∞)
and wT = (w−∞, . . . , w0, . . . ,w∞).
Minimum Mean-Square Error
(MMSE) Equalizers…
Then wT Mv = vd becomes an infinite set of linear equations relating the
taps {wi} to the composite response f[n] and the noise PSD N0.
Taking z-transforms, and noting that W(z) is the z-transform of
the filter coefficients w, yields
W(z)[F(z) + N0] = G∗m(1/z∗).
Solving for W(z), we obtain
W(z) = G∗m(1/z∗) / (F(z) + N0).
Minimum Mean-Square Error
(MMSE) Equalizers…
Since the MMSE equalizer consists of the noise-whitening filter
1/G∗m(1/z∗) followed by the ISI-removal component W(z), it follows
that the full MMSE equalizer is
Heq(z) = W(z)/G∗m(1/z∗) = 1/(F(z) + N0).
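To see how the MMSE solution tempers noise enhancement relative to zero forcing, the sketch below (an assumed example; the composite pulse and the value of N0 are made up for illustration) compares the magnitude of the ZF response 1/F(z) with the MMSE response 1/(F(z) + N0) on the unit circle, where F(z) has a near-null.

```python
import numpy as np

# Assumed composite pulse g = [1, -0.95]; on the unit circle F(z) = |G(f)|^2 (real, >= 0)
# with a near-null at f = 0. N0 is a made-up noise PSD used only for illustration.
N0 = 0.1
f = np.linspace(0.0, 0.5, 6)                     # normalized frequency
G = 1.0 - 0.95 * np.exp(-2j * np.pi * f)         # G(f) of the assumed pulse
F = np.abs(G) ** 2                               # F(z) evaluated on the unit circle

gain_zf = 1.0 / F                                # zero-forcing equalizer 1/F(z)
gain_mmse = 1.0 / (F + N0)                       # MMSE equalizer 1/(F(z) + N0)

for fi, gz, gm in zip(f, gain_zf, gain_mmse):
    print(f"f = {fi:.2f}   ZF gain = {gz:8.1f}   MMSE gain = {gm:6.2f}")
# Near the null the ZF gain blows up (~400) while the MMSE gain saturates near 1/N0 = 10.
```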
Adaptive Equalizers: Training
and Tracking
The ZF and MMSE equalizers are designed based on a known value of the
combined impulse response h(t) = g(t) ∗ c(t).
In general, however, the channel c(t) is not known when the
receiver is designed, so the equalizer must be tunable so it can
adjust to different values of c(t).
Moreover, in wireless channels c(t) changes over time, so the system
must periodically estimate the channel c(t) and update the
equalizer coefficients accordingly.
This process is called equalizer training or adaptive
equalization.
The equalizer can also use the detected data to adjust the
equalizer coefficients, a process known as equalizer tracking.
Adaptive Equalizers: Training
and Tracking…
Equalizers that do not use training and learn the channel
response via the detected data only are called blind
equalizers.
During training, the coefficients of the equalizer are updated at
time k based on a known training sequence [dk−M, . . . , dk] that
has been sent over the channel.
The length M +1 of the training sequence depends on the
number of equalizer coefficients that must be determined and
the convergence speed of the training algorithm.
The equalizer must be retrained when the channel
decorrelates – that is, at least every Tc seconds, where Tc is the
channel coherence time.
Adaptive Equalizers: Training
and Tracking…
If the training algorithm is slow relative to the channel coherence
time then the channel may change before the equalizer can
learn the channel.
In this case equalization is not an effective counter measure for
ISI, and some other technique (e.g., multicarrier modulation or
CDMA) is needed.
Let {d̂k} denote the bit decisions output from the equalizer given
a transmitted training sequence {dk}.
Our goal is to update the N equalizer coefficients at time k +1
based on the training sequence we have received up to time k.
Adaptive Equalizers: Training
and Tracking…
A wide range of algorithms exist for adapting the filter coefficients:
MMSE (Minimum Mean Square Error): more complex, but convergence
is very fast.
LMS (Least Mean Square): reduced complexity, but convergence is
slow for small values of the step size μ; for large values of μ, the
algorithm becomes unstable.
RLS (Recursive Least Squares): complexity and performance lie between
those of the MMSE and LMS algorithms.
The symbol decisions output from the equalizer are typically
passed through a threshold detector to round the decision to the
nearest constellation point.
Adaptive Equalizers: Training
and Tracking…
The resulting roundoff error can be used to adjust the equalizer coefficients
during data transmission, a process called equalizer tracking.
Tracking is based on two premises:
That if the roundoff error is nonzero then the equalizer is not perfectly
trained.
That the roundoff error can be used to adjust the channel estimate inherent in
the equalizer.
The equalizer output bits and threshold detector output bits are used
to adjust an estimate of the equivalent lowpass composite channel F(z).
The updated version of F(z) is then taken to equal the composite channel
and used to update the equalizer coefficients accordingly.
LMS Algorithm
The criterion used is the minimization of the mean square
error (MSE) between the desired and the actual equalizer output.
LMS Algorithm…
The prediction error is given by
ek = xk − x̂k = xk − wkT yk = xk − ykT wk.
The mean squared error at time instant k is
ξ = E[ek∗ ek] = E[|ek|^2].
The LMS algorithm aims at minimizing this mean squared error.
Expanding the squared error (for real-valued signals),
|ek|^2 = xk^2 + wkT yk ykT wk − 2 xk ykT wk
E[|ek|^2] = E[xk^2] + wkT E[yk ykT] wk − 2 E[xk ykT] wk.
Let p denote the cross correlation vector between the desired
response and the input signal.
LMS Algorithm…
p = E[xk yk] = E[xk yk, xk yk−1, xk yk−2, . . . , xk yk−N]T
Let R denote the input correlation matrix R = E[yk ykT], also called
the input covariance matrix.
The mean square error is then
ξ = E[xk^2] + wT R w − 2 pT w.
It can be seen that the MSE is a function of wN.
Let the cost function J(wN) denote the MSE as a function of wN.
In order to minimize the MSE, set its gradient with respect to wN to zero:
∇J(wN) = −2 pN + 2 RNN wN = 0, which gives RNN wN = pN.
The above equation is called the normal equation, and the optimum
weight vector is obtained as ŵN = RNN^−1 pN.
LMS Algorithm…
Inverting the N × N matrix RNN requires a large number of
arithmetic operations (on the order of N^3).
In practice, the minimization of the MSE is instead carried out using
the stochastic gradient-descent algorithm introduced by Widrow,
more commonly known as the Least Mean Square (LMS)
algorithm.
The LMS algorithm requires only 2N + 1 arithmetic operations
per iteration.
LMS Algorithm…
The filter weights are updated by the update equation given
below:
wN(n + 1) = wN(n) + μ e(n) yN(n),
where n denotes the iteration index, N denotes the number
of delay stages in the equalizer, and μ is the step size, which
controls the convergence rate and stability of the algorithm.
To prevent the adaptation from becoming unstable, the value of μ
is chosen as 0 < μ < 2/λmax, where λmax is the largest of the
eigenvalues λi of RNN.
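Below is a minimal LMS training sketch (an assumed example: the channel, BPSK training data, decision delay, and step size μ = 0.01 are all made up for illustration) that adapts the N equalizer taps with the update wN(n+1) = wN(n) + μ e(n) yN(n) and reports how the squared error falls as the taps converge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: BPSK training symbols sent through a two-tap channel plus noise.
num_symbols = 5000
x = rng.choice([-1.0, 1.0], size=num_symbols)      # desired (training) symbols x_k
channel = np.array([1.0, 0.5])                     # assumed channel taps
r = np.convolve(x, channel)[:num_symbols]          # received samples
r += 0.05 * rng.standard_normal(num_symbols)       # additive noise

N = 7                                              # number of equalizer taps
mu = 0.01                                          # step size (controls convergence/stability)
delay = 3                                          # assumed decision delay
w = np.zeros(N)

errors = []
for k in range(N, num_symbols):
    y_k = r[k - N + 1 : k + 1][::-1]               # input vector y_k (most recent sample first)
    e_k = x[k - delay] - w @ y_k                   # prediction error e_k = x_k - w^T y_k
    w = w + mu * e_k * y_k                         # LMS update: w(n+1) = w(n) + mu * e(n) * y(n)
    errors.append(e_k ** 2)

print("MSE over first 200 updates :", round(float(np.mean(errors[:200])), 4))
print("MSE over last  200 updates :", round(float(np.mean(errors[-200:])), 4))
```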