CAPACITY OF WIRELESS CHANNELS
The growing demand for wireless communication makes it important to
determine the capacity limits of the underlying channels for these systems.
These capacity limits dictate the maximum data rates that can be transmitted
over wireless channels with asymptotically small error probability, assuming
no constraints on delay or complexity of the encoder and decoder.
The mathematical theory of communication underlying channel capacity
was pioneered by Claude Shannon in the late 1940s. This theory is based on
the notion of mutual information between the input and output of a channel.
In particular, Shannon defined channel capacity as the channel’s mutual
information maximized over all possible input distributions.
In this chapter we examine the capacity of a single-user wireless channel
where transmitter and/or receiver have a single antenna.
We will discuss capacity for both time-invariant and time-varying channels.
We first look at the well-known formula for capacity of a time-invariant
additive white Gaussian noise (AWGN) channel and then consider capacity
of time-varying flat fading channels.
REVIEW OF CAPACITY IN AWGN
Consider a discrete-time AWGN channel with channel input/output
relationship
y[i] = x[i] + n[i],
where
x[i] is the channel input at time i,
y[i] is the corresponding channel output,
n[i] is a white Gaussian noise random process.
Assume a channel bandwidth B and received signal power P.
The received signal-to-noise ratio (SNR) – the power in x[i] divided by the
power in n[i] – is constant and given by
γ = P/(N0 B),
where N0/2 is the power spectral density (PSD) of the noise.
The capacity of this channel is given by Shannon’s well-known formula
C = B log2(1 + γ) = B log2(1 + P/(N0 B)),
where the capacity units are bits per second (bps).
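The formula is straightforward to evaluate. Here is a minimal sketch; the bandwidth and SNR values are assumed for illustration and are not from the text:

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B log2(1 + SNR) of an AWGN channel, in bps."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Assumed example values: B = 1 MHz and a received SNR of 10 dB (linear 10).
C = awgn_capacity(1e6, 10.0)   # roughly 3.46 Mbps
```

Note that capacity grows only logarithmically with SNR, so doubling the SNR adds at most one bit per second per hertz.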
Shannon’s coding theorem proves that a code exists that achieves data rates
arbitrarily close to capacity with arbitrarily small probability of bit error.
The converse theorem shows that any code with rate R > C has a probability
of error bounded away from zero. The theorems are proved using the
concept of mutual information between the channel input and output.
For a discrete memoryless time-invariant channel with random input x and
random output y, the channel’s mutual information is defined as
I(X; Y) = Σ_{x,y} p(x, y) log [ p(x, y) / (p(x) p(y)) ],
where the sum is over all possible input/output pairs.
Shannon proved that channel capacity equals the mutual information of the
channel maximized over all possible input distributions:
C = max_{p(x)} I(X; Y).
Shannon capacity is generally used as an upper bound on the data rates that
can be achieved under real system constraints.
At the time that Shannon developed his theory of information, data rates
over standard telephone lines were on the order of 100 bps.
Thus, it was believed that Shannon capacity, which predicted speeds of
roughly 30 kbps over the same telephone lines, was not a useful bound for
real systems.
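The ~30 kbps figure follows from the same formula. A quick check, using typical telephone-line parameters of roughly 3 kHz bandwidth and 30 dB SNR (assumed values, not stated in the text):

```python
import math

# Assumed telephone-channel figures: bandwidth ~3 kHz, SNR ~30 dB.
B = 3e3
snr = 10 ** (30 / 10)          # 30 dB -> linear SNR of 1000
C = B * math.log2(1 + snr)     # approx 29.9 kbps, close to the ~30 kbps figure
```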
Wireless channels typically exhibit flat or frequency-selective fading.
CAPACITY OF FLAT FADING CHANNEL
Channel and System Model –
We assume a discrete-time channel with stationary and ergodic time-varying gain
√g[i], 0 ≤ g[i], and AWGN n[i], as shown in Figure 4.1.
The channel power gain g[i] follows a given distribution p(g).
In a block fading channel, g[i] is constant over some block length T, after
which time g[i] changes to a new independent value based on the
distribution p(g).
Let P̄ denote the average transmit signal power,
N0/2 the noise PSD of n[i], and
B the received signal bandwidth.
The instantaneous received SNR is then
γ[i] = P̄ g[i]/(N0 B), 0 ≤ γ[i] < ∞,
and its expected value over all time is γ̄ = P̄ E[g]/(N0 B).
Since P̄/(N0 B) is a constant, the distribution of g[i] determines the
distribution of γ[i] and vice versa.
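The one-to-one relationship between the gain distribution and the SNR distribution can be illustrated with a short simulation. This sketch assumes an exponential p(g) with unit mean (Rayleigh amplitude fading) and illustrative values for the transmit power, noise PSD, and bandwidth; none of these numbers come from the text:

```python
import random

# Assumed illustrative parameters.
P_avg, N0, B = 1.0, 1e-9, 1e6
const = P_avg / (N0 * B)       # the constant relating g[i] to gamma[i]

rng = random.Random(0)
# One power gain g per fading block, drawn from an assumed exponential p(g).
gains = [rng.expovariate(1.0) for _ in range(100_000)]
snrs = [const * g for g in gains]      # gamma[i] = g[i] * P_avg / (N0 * B)

avg_snr = sum(snrs) / len(snrs)        # approaches const * E[g] = const here
```

Since γ[i] is just a scaled copy of g[i], the SNR distribution is the gain distribution stretched by the constant P̄/(N0 B), as the text states.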
The system model is also shown in Figure 4.1, where an input message w is
sent from the transmitter to the receiver, which reconstructs an estimate ŵ
of the transmitted message w from the received signal.
The message is encoded into the codeword x, which is transmitted over the
time-varying channel as x[i] at time i.
The channel gain g[i], also called the channel side information (CSI),
changes during the transmission of the codeword.
The capacity of this channel depends on what is known about g[i] at the
transmitter and receiver.
We will consider three different scenarios regarding this knowledge as follows.
1. Channel distribution information (CDI): The distribution of g[i] is known to
the transmitter and receiver.
2. Receiver CSI: The value of g[i] is known to the receiver at time i, and both the
transmitter and receiver know the distribution of g[i].
3. Transmitter and receiver CSI: The value of g[i] is known to the transmitter
and receiver at time i, and both the transmitter and receiver know the distribution
of g[i].
Transmitter and receiver CSI allow the transmitter to adapt both its power
and rate to the channel gain at time i, leading to the highest capacity of the
three scenarios.
Note that since the instantaneous SNR γ[i] is just g[i] multiplied by the
constant P̄/(N0 B), known CSI or CDI about g[i] yields the same
information about γ[i].
Channel Side Information at Receiver
We now consider the case where the CSI g[i] is known to the receiver at
time i. Equivalently, γ[i] is known to the receiver at time i.
We also assume that both the transmitter and receiver know the distribution
of g[i].
In this case there are two channel capacity definitions that are relevant to
system design:
Shannon capacity, also called ergodic capacity, and capacity with outage.
As for the AWGN channel, Shannon capacity defines the maximum data
rate that can be sent over the channel with asymptotically small error
probability.
Note that for Shannon capacity the rate transmitted over the channel is
constant: the transmitter cannot adapt its transmission strategy relative to the
CSI.
Thus, poor channel states typically reduce Shannon capacity because the
transmission strategy must incorporate the effect of these poor states.
An alternate capacity definition for fading channels with receiver CSI is
capacity with outage.
This is defined as the maximum rate that can be transmitted over a channel
with an outage probability corresponding to the probability that the
transmission cannot be decoded with negligible error probability.
The basic premise of capacity with outage is that a high data rate can be sent
over the channel and decoded correctly except when the channel is in a slow
deep fade.
By allowing the system to lose some data in the event of such deep fades, a
higher data rate can be maintained than if all data must be received correctly
regardless of the fading state, as is the case for Shannon capacity.
The probability of outage characterizes the probability of data loss or,
equivalently, of deep fading.
SHANNON (ERGODIC) CAPACITY
Shannon capacity of a fading channel with receiver CSI for an average
power constraint P̄ can be obtained as
C = ∫₀^∞ B log2(1 + γ) p(γ) dγ.    (4.7)
Note that this formula is a probabilistic average: the capacity C is equal to
Shannon capacity for an AWGN channel with SNR γ, given by B log2(1+ γ)
and averaged over the distribution of γ.
That is why Shannon capacity is also called ergodic capacity.
However, care must be taken in interpreting (4.7) as an average.
In particular, it is incorrect to interpret (4.7) to mean that this average
capacity is achieved by transmitting at rate B log2(1 + γ) whenever the
instantaneous SNR is γ: only the receiver knows the instantaneous SNR
γ[i], so the data rate transmitted over the channel is constant regardless
of γ.
Note also that the capacity-achieving code must be sufficiently long that a
received codeword is affected by all possible fading states. This can result in
significant delay.
By Jensen’s inequality,
E[B log2(1 + γ)] ≤ B log2(1 + E[γ]) = B log2(1 + γ̄),
where γ̄ is the average SNR on the channel.
Thus we see that the Shannon capacity of a fading channel with receiver CSI
only is less than the Shannon capacity of an AWGN channel with the same
average SNR.
In other words, fading reduces Shannon capacity when only the receiver has
CSI.
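This gap can be checked numerically. The sketch below estimates (4.7) by Monte Carlo for Rayleigh fading (γ exponentially distributed) and compares it with the Jensen bound; the normalized bandwidth and average SNR are assumed illustrative values:

```python
import math
import random

def ergodic_capacity(B, avg_snr, n=200_000, seed=1):
    """Monte Carlo estimate of C = E[B log2(1 + gamma)] for Rayleigh
    fading, where gamma is exponential with mean avg_snr."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        gamma = rng.expovariate(1.0 / avg_snr)   # exponential, mean avg_snr
        total += B * math.log2(1.0 + gamma)
    return total / n

B, avg_snr = 1.0, 100.0                    # normalized B; 20 dB average SNR
C_fading = ergodic_capacity(B, avg_snr)
C_awgn = B * math.log2(1.0 + avg_snr)      # Jensen bound: AWGN capacity
# C_fading comes out below C_awgn, as Jensen's inequality predicts.
```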
Moreover, without transmitter CSI, the code design must incorporate the
channel correlation statistics, and the complexity of the maximum likelihood
decoder will be proportional to the channel decorrelation time.
In addition, if the receiver CSI is not perfect, capacity can be significantly
decreased.
CAPACITY WITH OUTAGE
Capacity with outage applies to slowly varying channels, where the
instantaneous SNR γ is constant over a large number of transmissions (a
transmission burst) and then changes to a new value based on the fading
distribution.
With this model, if the channel has received SNR γ during a burst then data
can be sent over the channel at rate B log2(1 + γ) with negligible probability
of error.
Since the transmitter does not know the SNR value γ, it must fix a
transmission rate independent of the instantaneous received SNR.
Capacity with outage allows bits sent over a given transmission burst to be
decoded at the end of the burst with some probability that these bits will be
decoded incorrectly.
Specifically, the transmitter fixes a minimum received SNR γmin and
encodes for a data rate C = B log2(1 + γmin).
The data is correctly received if the instantaneous received SNR is greater
than or equal to γmin.
If the received SNR is below γmin then the bits received over that
transmission burst cannot be decoded correctly with probability approaching
1, and the receiver declares an outage.
The probability of outage is thus Pout = p(γ < γmin).
The average rate correctly received over many transmission bursts is
Cout = (1 − Pout)B log2(1 + γmin), since data is correctly received only on a
fraction 1 − Pout of the transmissions.
The value of γmin is a design parameter based on the acceptable outage
probability.
Capacity with outage is typically characterized by a plot of capacity versus
outage, as shown in Figure 4.2.
In this figure we plot the normalized capacity C/B = log2(1 + γmin) as a
function of outage probability Pout = p(γ < γmin) for a Rayleigh fading
channel (γ exponentially distributed) with γ̄ = 20 dB.
We see that capacity approaches zero for small outage probability, due to the
requirement that bits transmitted under severe fading must be decoded
correctly, and increases dramatically as outage probability increases.
Note, however, that these high capacity values for large outage probabilities
have higher probability of incorrect data reception.
The average rate correctly received can be maximized by finding the γmin
(or, equivalently, the Pout) that maximizes Cout.
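This maximization can be carried out numerically. The sketch below sweeps γmin for a Rayleigh fading channel with γ̄ = 20 dB (as in Figure 4.2) and picks the value that maximizes Cout; the 1 dB search grid is an assumed discretization:

```python
import math

AVG_SNR = 10 ** (20 / 10)      # average SNR of 20 dB, as in Figure 4.2

def outage_capacity(gamma_min_db, B=1.0):
    """Cout = (1 - Pout) * B * log2(1 + gamma_min) for Rayleigh fading,
    where Pout = p(gamma < gamma_min) = 1 - exp(-gamma_min / avg SNR)."""
    gamma_min = 10 ** (gamma_min_db / 10)
    p_out = 1.0 - math.exp(-gamma_min / AVG_SNR)
    return (1.0 - p_out) * B * math.log2(1.0 + gamma_min)

# Sweep the design parameter gamma_min over an assumed 1 dB grid.
best_cout, best_db = max((outage_capacity(db), db) for db in range(-10, 21))
# For this average SNR the optimum gamma_min falls around 15 dB.
```

Raising γmin increases the per-burst rate but also the outage probability; the optimum balances the two, and Cout remains below the ergodic capacity.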