The document discusses digital signal detection in Gaussian noise. It describes how a receiver performs demodulation to recover a sampled waveform and detection by comparing the test statistic to a threshold. It further explains that the optimal filter for maximizing the signal-to-noise ratio is a matched filter, whose impulse response is the time-reversed version of the signal waveform.
Demodulation / Detection

CHAPTER 3
• Detection of Binary Signal in Gaussian Noise
• Matched Filters and Correlators
• Bayes’ Decision Criterion
• Maximum Likelihood Detector
• Error Performance
Demodulation and Detection

[Figure 3.1: Two basic steps in the demodulation/detection of digital signals. The transmitted waveform, corrupted by AWGN in the channel, passes through frequency down-conversion (for bandpass signals), a receiving filter, and an optional equalizing filter that compensates for channel-induced ISI (DEMODULATE & SAMPLE, with a sample taken at t = T), followed by a threshold comparison (DETECT) that yields the message symbol. The equalizing step is optional; the rest of the chain is essential.]

The digital receiver performs two basic functions:

• Demodulation, to recover a waveform to be sampled at t = nT
• Detection, the decision-making process of selecting the digital symbol that was most likely transmitted
Detection of Binary Signal in Gaussian Noise

[Figure: three example waveform traces plotted over the interval 0 to 20.]
Detection of Binary Signal in Gaussian Noise

• For any binary channel, the transmitted signal over a symbol interval (0, T) is:

      s_i(t) = s_0(t), 0 ≤ t ≤ T, for a binary 0
               s_1(t), 0 ≤ t ≤ T, for a binary 1

• The received signal r(t), degraded by noise n(t) and possibly by the impulse response of the channel h_c(t), is

      r(t) = s_i(t) * h_c(t) + n(t),   i = 0, 1    (3.1)

  where n(t) is assumed to be a zero-mean AWGN process.

• For an ideal distortionless channel, where h_c(t) is an impulse function and convolution with h_c(t) produces no degradation, r(t) can be represented as:

      r(t) = s_i(t) + n(t),   0 ≤ t ≤ T,   i = 0, 1    (3.2)
Detection of Binary Signal in Gaussian Noise

• The recovery of the signal at the receiver consists of two parts:
  • Filter
    • Reduces the received signal to a single variable z(T)
    • z(T) is called the test statistic
  • Detector (or decision circuit)
    • Compares z(T) to some threshold level γ0, i.e.,

          z(T) ≷ γ0   (choose H1 if z(T) > γ0, H0 otherwise)

      where H1 and H0 are the two possible binary hypotheses.
Receiver Functionality
The recovery of the signal at the receiver consists of two parts:

1. Waveform-to-sample transformation
   • Demodulator followed by a sampler
   • At the end of each symbol duration T, the predetection point yields a sample z(T), called the test statistic:

         z(T) = a_i(T) + n_0(T),   i = 0, 1    (3.3)

     where a_i(T) is the desired signal component and n_0(T) is the noise component.

2. Detection of symbol
   • Assume that the input noise is a Gaussian random process and the receiving filter is linear:

         p(n_0) = (1/(σ0 √(2π))) exp[ −(1/2)(n_0/σ0)² ]    (3.4)

   • Then the output is another Gaussian random process:

         p(z | s0) = (1/(σ0 √(2π))) exp[ −(1/2)((z − a0)/σ0)² ]
         p(z | s1) = (1/(σ0 √(2π))) exp[ −(1/2)((z − a1)/σ0)² ]

     where σ0² is the noise variance.
• The ratio of instantaneous signal power to average noise power, (S/N)_T, at time t = T, out of the sampler is:

      (S/N)_T = a_i² / σ0²    (3.45)

• Need to achieve maximum (S/N)_T

Find Filter Transfer Function H0(f)

• Objective: to maximize (S/N)_T
• Expressing the signal a_i(t) at the filter output in terms of the filter transfer function H(f):

      a_i(t) = ∫_{−∞}^{∞} H(f) S(f) e^{j2πft} df    (3.46)

  where S(f) is the Fourier transform of the input signal s(t).
• The output noise power can be expressed as:

      σ0² = (N0/2) ∫_{−∞}^{∞} |H(f)|² df    (3.47)
• Expressing (S/N)_T as:

      (S/N)_T = |∫_{−∞}^{∞} H(f) S(f) e^{j2πfT} df|² / [ (N0/2) ∫_{−∞}^{∞} |H(f)|² df ]    (3.48)
• For H(f) = H0(f) to maximize (S/N)_T, use Schwarz’s inequality:

      |∫ f1(x) f2(x) dx|² ≤ ∫ |f1(x)|² dx · ∫ |f2(x)|² dx    (3.49)

• Equality holds if f1(x) = k f2*(x), where k is an arbitrary constant and * indicates complex conjugate.
• Associate H(f) with f1(x) and S(f) e^{j2πfT} with f2(x) to get:

      |∫ H(f) S(f) e^{j2πfT} df|² ≤ ∫ |H(f)|² df · ∫ |S(f)|² df    (3.50)

• Substitute into eq. 3.48 to yield:

      (S/N)_T ≤ (2/N0) ∫_{−∞}^{∞} |S(f)|² df    (3.51)

• Or max (S/N)_T = 2E/N0, where the energy E of the input signal s(t) is:

      E = ∫_{−∞}^{∞} |S(f)|² df

• Thus (S/N)_T depends on the input signal energy E and the power spectral density of the noise, NOT on the particular shape of the waveform.

S  2E
 Equality for max    holds for optimum filter transfer
 N T N 0
function H0(f)
such that:
H ( f )  H 0 ( f )  kS * ( f ) e  j 2fT (3.54)

h ( t )    1 kS * ( f ) e  j 2  fT  (3.55)

 For real valued s(t):  kS (T  t ) 0  t  T


h (t )   (3.56)
0 else where
 The impulse response of a filter producing maximum output signal-
to-noise ratio is the mirror image of message signal s(t), delayed by
symbol time duration T.
• The filter so designed is called a MATCHED FILTER:

      h(t) = k s(T − t),  0 ≤ t ≤ T
           = 0,           elsewhere

• Defined as: a linear filter designed to provide the maximum signal-to-noise power ratio at its output for a given transmitted symbol waveform.
Correlation realization of Matched filter
• A filter that is matched to the waveform s(t) has the impulse response

      h(t) = k s(T − t),  0 ≤ t ≤ T
           = 0,           elsewhere

• h(t) is a delayed version of the mirror image (rotated about the t = 0 axis) of the original signal waveform.

[Figure 3.7: signal waveform, mirror image of the signal waveform, and impulse response of the matched filter]

• This is a causal system
• Recall that a system is causal if, before an excitation is applied at time t = T, the response is zero for −∞ < t < T.


• The signal waveform at the output of the matched filter is

      z(t) = r(t) * h(t) = ∫₀ᵗ r(τ) h(t − τ) dτ    (3.57)

• Substituting h(t) yields:

      z(t) = ∫₀ᵗ r(τ) s[T − (t − τ)] dτ = ∫₀ᵗ r(τ) s(T − t + τ) dτ    (3.58)

• When t = T:

      z(T) = ∫₀ᵀ r(τ) s(τ) dτ    (3.59)
• The functions of the correlator and the matched filter are the same
• Compare (a) the correlator and (b) the matched filter
• From (a):

      z(t) = ∫₀ᵗ r(τ) s(τ) dτ

      z(t)|_{t=T} = z(T) = ∫₀ᵀ r(τ) s(τ) dτ

• From (b):

      z′(t) = r(t) * h(t) = ∫ r(τ) h(t − τ) dτ = ∫₀ᵗ r(τ) h(t − τ) dτ

  But h(t) = s(T − t), so h(t − τ) = s[T − (t − τ)] = s(T − t + τ), and

      z′(t) = ∫₀ᵗ r(τ) s(T − t + τ) dτ

• At the sampling instant t = T, we have

      z′(t)|_{t=T} = z′(T) = ∫₀ᵀ r(τ) s(T − T + τ) dτ = ∫₀ᵀ r(τ) s(τ) dτ

• This is the same result obtained in (a); hence

      z(T) = z′(T)
Detection
• The matched filter reduces the received signal to a single variable z(T), after which the detection of the symbol is carried out
• The concept of the maximum likelihood detector is based on statistical decision theory
• It allows us to
  • formulate the decision rule that operates on the data
  • optimize the detection criterion

      z(T) ≷ γ0   (choose H1 if z(T) > γ0, H0 otherwise)
Probabilities Review

 P[s0], P[s1]  a priori probabilities


 These probabilities are known before transmission
 P[z]
 probability of the received sample
 p(z|s0), p(z|s1)
 conditional pdf of received signal z, conditioned on the class si
 P[s0|z], P[s1|z]  a posteriori probabilities
 After examining the sample, we make a refinement of our
previous knowledge
 P[s1|s0], P[s0|s1]
 wrong decision (error)
 P[s1|s1], P[s0|s0]
 correct decision
How to Choose the Threshold?
• Maximum a posteriori (MAP) criterion:

      If p(s0 | z) > p(s1 | z) → H0
      else → H1

• Problem: the a posteriori probabilities are not known.
• Solution: use Bayes’ theorem:

      p(si | z) = p(z | si) P(si) / p(z)

      p(z | s1) P(s1) / p(z) ≷ p(z | s0) P(s0) / p(z)
      ⇒ p(z | s1) P(s1) ≷ p(z | s0) P(s0)   (choose H1 if the left side is larger)

• MAP criterion as a likelihood ratio test (LRT):

      L(z) = p(z | s1) / p(z | s0) ≷ P(s0) / P(s1)   (choose H1 if the left side is larger)

• When the two signals s0(t) and s1(t) are equally likely, i.e., P(s0) = P(s1) = 0.5, the decision rule becomes

      L(z) = p(z | s1) / p(z | s0) ≷ 1   (maximum likelihood ratio test)

• This is known as the maximum likelihood ratio test because we select the hypothesis that corresponds to the signal with the maximum likelihood.
• In terms of the Bayes criterion, it implies that the cost of both types of error is the same.
• Substituting the pdfs:

      H0: p(z | s0) = (1/(σ0 √(2π))) exp[ −(1/2)((z − a0)/σ0)² ]
      H1: p(z | s1) = (1/(σ0 √(2π))) exp[ −(1/2)((z − a1)/σ0)² ]

      L(z) = p(z | s1) / p(z | s0)
           = exp[ −(z − a1)²/(2σ0²) + (z − a0)²/(2σ0²) ] ≷ 1

• Hence:

      exp[ z(a1 − a0)/σ0² − (a1² − a0²)/(2σ0²) ] ≷ 1
• Taking the log of both sides gives

      ln L(z) = z(a1 − a0)/σ0² − (a1² − a0²)/(2σ0²) ≷ 0

      z(a1 − a0)/σ0² ≷ (a1² − a0²)/(2σ0²) = (a1 − a0)(a1 + a0)/(2σ0²)

• Hence

      z ≷ σ0²(a1 − a0)(a1 + a0) / (2σ0²(a1 − a0)) = (a1 + a0)/2 = γ0

  where γ0 = (a1 + a0)/2 is the optimum threshold for the minimum error criterion.


• For antipodal signaling, s1(t) = −s0(t), so a1 = −a0 and

      z ≷ γ0 = 0   (choose H1 if z > 0, H0 otherwise)

  This means that if the received sample is positive, s1(t) was sent; otherwise s0(t) was sent.
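For equally likely antipodal levels, the optimum detector is simply a sign test on z(T). A minimal simulation sketch, with assumed amplitude A and noise level σ0:

```python
import random

# ML detection for equally likely antipodal levels a1 = +A, a0 = -A in
# Gaussian noise: gamma0 = (a1 + a0)/2 = 0, so the detector is a sign test.
random.seed(1)

A, sigma0 = 1.0, 0.4                  # assumed amplitude and noise std dev
bits = [random.randint(0, 1) for _ in range(10000)]
samples = [(A if b else -A) + random.gauss(0.0, sigma0) for b in bits]

# decide H1 (bit 1) when z(T) > 0, H0 (bit 0) otherwise
decisions = [1 if z > 0 else 0 for z in samples]
errors = sum(d != b for d, b in zip(decisions, bits))
print(errors / len(bits))             # empirical Pb, near Q(A/sigma0) = Q(2.5)
```

The empirical error rate should land near the theoretical PB = Q((a1 − a0)/2σ0) = Q(A/σ0) ≈ 0.006 for these assumed values.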
Probability of Error
• An error will occur if
  • s1 is sent but H0 is chosen:

        P(H0 | s1) = P(e | s1) = ∫_{−∞}^{γ0} p(z | s1) dz

  • s0 is sent but H1 is chosen:

        P(H1 | s0) = P(e | s0) = ∫_{γ0}^{∞} p(z | s0) dz

• The total probability of error is:

      PB = Σ_i P(e, si) = P(e | s1) P(s1) + P(e | s0) P(s0)
         = P(H0 | s1) P(s1) + P(H1 | s0) P(s0)

• If the signals are equally probable:

      PB = (1/2)[ P(H0 | s1) + P(H1 | s0) ] = P(H1 | s0)   (by symmetry)

• Hence, the probability of bit error PB is the probability that an incorrect hypothesis is made
• Numerically, PB is the area under the tail of either of the conditional distributions p(z | s0) or p(z | s1):

      PB = ∫_{γ0}^{∞} p(z | s0) dz = ∫_{γ0}^{∞} (1/(σ0 √(2π))) exp[ −(1/2)((z − a0)/σ0)² ] dz

• With the substitution u = (z − a0)/σ0:

      PB = ∫_{(a1−a0)/(2σ0)}^{∞} (1/√(2π)) exp(−u²/2) du

• This integral cannot be evaluated in closed form; it defines the Q-function. Hence,

      PB = Q( (a1 − a0)/(2σ0) )    (equation B.18)

  where, for x > 3, Q(x) is well approximated by

      Q(x) ≈ (1/(x√(2π))) exp(−x²/2)
A Vector View of Signals and Noise

• An N-dimensional orthonormal space is characterized by N linearly independent basis functions {ψj(t)}, where:

      ∫₀ᵀ ψi(t) ψj(t) dt = 1 if i = j
                         = 0 if i ≠ j

• From a geometric point of view, each ψj(t) is mutually perpendicular to each of the other basis functions ψk(t), k ≠ j.
• Any set of M energy signals {si(t)} can be represented as a linear combination of N orthogonal basis functions, where N ≤ M:

      si(t) = Σ_{j=1}^{N} aij ψj(t),   0 ≤ t ≤ T,   i = 1, 2, ..., M

  where:

      aij = ∫₀ᵀ si(t) ψj(t) dt,   i = 1, 2, ..., M;   j = 1, 2, ..., N
• Therefore we can represent the set of M energy signals {si(t)} as vectors:

      si = (ai1, ai2, ..., aiN),   i = 1, 2, ..., M

• Waveform energy:

      Ei = ∫₀ᵀ si²(t) dt = ∫₀ᵀ [ Σ_{j=1}^{N} aij ψj(t) ]² dt = Σ_{j=1}^{N} aij²

[Figure: representing (M = 3) signals with (N = 2) orthonormal basis functions]
Question 1: Why use orthonormal functions?
• In many situations N is much smaller than M, requiring fewer matched filters at the receiver.
• Easy to calculate Euclidean distances
• Compact representation for both baseband and passband systems

Question 2: How to calculate orthonormal functions?
• Gram-Schmidt orthogonalization procedure
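A discrete-time sketch of the Gram-Schmidt procedure, with an assumed sample spacing dt so that the inner product ∫ f g dt is approximated by a sum. The three example waveforms (M = 3) are chosen so that they span only two dimensions (N = 2), as in the figure above.

```python
import math

# Hedged sketch: Gram-Schmidt orthogonalization of sampled waveforms.
def inner(f, g, dt):
    # <f, g> = integral of f(t)g(t) dt, approximated by a Riemann sum
    return sum(a * b for a, b in zip(f, g)) * dt

def gram_schmidt(signals, dt):
    """Return orthonormal basis functions spanning `signals`."""
    basis = []
    for s in signals:
        v = list(s)
        for psi in basis:                      # subtract projections onto basis
            c = inner(s, psi, dt)
            v = [vi - c * pi for vi, pi in zip(v, psi)]
        norm = math.sqrt(inner(v, v, dt))
        if norm > 1e-12:                       # skip linearly dependent waveforms
            basis.append([vi / norm for vi in v])
    return basis

# Three signals on [0, 1) that span only two dimensions (s3 = s1 + s2)
dt, n = 0.01, 100
s1 = [1.0] * n
s2 = [1.0 if k < n // 2 else -1.0 for k in range(n)]
s3 = [2.0 if k < n // 2 else 0.0 for k in range(n)]

basis = gram_schmidt([s1, s2, s3], dt)
print(len(basis))                               # 2: two basis functions suffice
print(round(inner(basis[0], basis[1], dt), 6))  # 0.0: orthogonality holds
```

The dependent waveform s3 projects entirely onto the first two basis functions and contributes nothing new, which is exactly why N can be smaller than M.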


Generalized One-Dimensional Signals
• One-Dimensional Signal Constellations
  • 2 levels (±A):

        Eavg = (A² + A²)/2 = A²

  • 4 levels (±A, ±3A):

        Eavg = (9A² + A² + A² + 9A²)/4 = 5A²

  • 8 levels (±A, ±3A, ±5A, ±7A):

        Eavg = (49A² + 25A² + 9A² + A² + A² + 9A² + 25A² + 49A²)/8 = 21A²

• Binary Baseband Signals
  • Binary antipodal signals:

        Eavg = (A² + A²)/2 = A²

  • Binary orthogonal signals:

        Eavg = (A² + A²)/2 = A²
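The average-energy figures above can be reproduced for any M-level one-dimensional (PAM-like) constellation with levels ±A, ±3A, ..., ±(M−1)A. The helper name below is an assumption for illustration; it takes symbol energy to be the squared amplitude (unit-energy pulses).

```python
# Average symbol energy of an M-level one-dimensional constellation
# with equally likely levels ±A, ±3A, ..., ±(M-1)A.
def avg_energy(M: int, A: float = 1.0) -> float:
    levels = [(2 * k - M + 1) * A for k in range(M)]   # M=4 -> -3A,-A,+A,+3A
    return sum(a * a for a in levels) / M

print(avg_energy(2))   # 1.0  -> A^2
print(avg_energy(4))   # 5.0  -> 5 A^2
print(avg_energy(8))   # 21.0 -> 21 A^2
```

These match the slide values A², 5A², and 21A² term by term.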
Constellation Diagram
• A method of representing the symbol states of modulated bandpass signals in terms of their amplitude and phase
• In other words, it is a geometric representation of signals
• There are three types of binary signals:
• Antipodal
  • Two signals are said to be antipodal if one signal is the negative of the other: s1(t) = −s0(t)
  • The signals have equal energy, with signal points on the real line

        Eavg = (E + E)/2 = E

• On-Off (OOK)
  • One-dimensional signals that are either ON or OFF, with signaling points falling on the real line
  • With OOK, there are just 2 symbol states to map onto the constellation space:
    • a(t) = 0 (no carrier amplitude, giving a point at the origin)
    • a(t) = A cos ωct (giving a point on the positive horizontal axis at a distance A from the origin)

        Eavg = (0 + E)/2 = E/2

• Orthogonal
  • Requires a two-dimensional geometric representation, since there are two linearly independent functions s1(t) and s0(t)

        Eavg = (E + E)/2 = E

• Typically, the horizontal axis is taken as a reference for symbols that are in phase with the carrier cos ωct, and the vertical axis represents the quadrature carrier component, sin ωct.

Error Probability of Binary Signals
• Unipolar Signaling (orthogonal):

      s1(t) = A, 0 ≤ t ≤ T, for binary 1
      s0(t) = 0, 0 ≤ t ≤ T, for binary 0

• Recall:

      PB = Q( (a1 − a0)/(2σ0) )    (equation B.18)

• To minimize PB, we need to maximize

      (a1 − a0)/σ0, or equivalently (a1 − a0)²/σ0²

• We have

      (a1 − a0)²/σ0² = 2Ed/N0

  where Ed is the energy of the difference signal:

      Ed = ∫₀ᵀ [s1(t) − s0(t)]² dt
         = ∫₀ᵀ s1²(t) dt + ∫₀ᵀ s0²(t) dt − 2∫₀ᵀ s1(t) s0(t) dt
         = 2Eb − 2∫₀ᵀ s1(t) s0(t) dt

• For unipolar signaling the signals are orthogonal (∫₀ᵀ s1(t) s0(t) dt = 0), so Ed = A²T and the average bit energy is Eb = A²T/2, giving

      Pb = Q(√(Ed/2N0)) = Q(√(A²T/2N0)) = Q(√(Eb/N0))
     
• Bipolar Signaling (antipodal):

      s1(t) = +A, 0 ≤ t ≤ T, for binary 1
      s0(t) = −A, 0 ≤ t ≤ T, for binary 0

  Here Ed = ∫₀ᵀ (2A)² dt = 4A²T and Eb = A²T, so

      Pb = Q(√(Ed/2N0)) = Q(√(4A²T/2N0)) = Q(√(2Eb/N0))

• Summary:

      Unipolar (orthogonal):  Pb = Q(√(Eb/N0))
      Bipolar (antipodal):    Pb = Q(√(2Eb/N0))

• For the same error performance, unipolar signaling requires a factor of 2 more energy than bipolar
• Since 10 log10(2) = 3 dB, we say that bipolar signaling offers a 3 dB better performance than unipolar
Comparing BER Performance

• For Eb/N0 = 10 dB (a factor of 10):

      PB,orthogonal = Q(√10) ≈ 7.8 × 10⁻⁴
      PB,antipodal = Q(√20) ≈ 3.9 × 10⁻⁶

• For the same received signal-to-noise ratio, antipodal signaling provides a lower bit error rate than orthogonal
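The comparison above follows directly from the two Pb formulas, as this short sketch shows:

```python
import math

def Q(x: float) -> float:
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

EbN0_dB = 10.0
EbN0 = 10.0 ** (EbN0_dB / 10.0)         # 10 dB -> a factor of 10

Pb_orthogonal = Q(math.sqrt(EbN0))       # unipolar / orthogonal
Pb_antipodal = Q(math.sqrt(2 * EbN0))    # bipolar / antipodal

print(f"{Pb_orthogonal:.1e}")   # 7.8e-04
print(f"{Pb_antipodal:.1e}")    # 3.9e-06
```

The two-orders-of-magnitude gap at the same Eb/N0 is the practical meaning of the 3 dB antipodal advantage.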
Baseband Communication System
• We have been considering the following baseband system
• The transmitted signal is created by the line coder according to

      s(t) = Σ_{n=−∞}^{∞} an g(t − nTb)

  where an is the symbol mapping and g(t) is the pulse shape
Problems with Line Codes
• The big problem with line codes is that they are not bandlimited
  • The absolute bandwidth is infinite
  • The power outside the first-null bandwidth is not negligible; that is, the power in the sidelobes can be quite high
• If the transmission channel is bandlimited, then high-frequency components will be cut off
  • High-frequency components correspond to sharp transitions in the pulses
  • Hence, the pulse will spread out
• If the pulse spreads out into the adjacent symbol period, then intersymbol interference (ISI) has occurred
Intersymbol Interference (ISI)
• Intersymbol interference (ISI) occurs when a pulse spreads out in such a way that it interferes with adjacent pulses at the sample instant
• Causes
  • Channel-induced distortion which spreads or disperses the pulses
  • Multipath effects (echo)
  • Improper filtering (at Tx and/or Rx), so the received pulses overlap one another, making detection difficult
• Example of ISI
  • Assume a polar NRZ line code
Intersymbol Interference
• Input data stream and bit superposition
• The channel output is the sum of the contributions from each bit

Note:
• ISI can occur whenever a non-bandlimited line code is used over a bandlimited channel
• ISI can occur only at the sampling instants
• Overlapping pulses will not cause ISI if they have zero amplitude at the time the signal is sampled
ISI Baseband Communication System Model

• where hT(t) = impulse response of the transmitter
        hC(t) = impulse response of the channel
        hR(t) = impulse response of the receiver

      s(t) = Σ_{n=−∞}^{∞} an hT(t − nT)

      r(t) = Σ_{n=−∞}^{∞} an g(t − nT) + n(t),   where g(t) = hT(t) * hC(t)

      y(t) = Σ_{n=−∞}^{∞} an he(t − nT) + ne(t)

  where he(t) = hT(t) * hC(t) * hR(t) and ne(t) = n(t) * hC(t) * hR(t)
• Note that he(t) is the equivalent impulse response of the overall transmitter-channel-receiver chain
• To recover the information sequence {an}, the output y(t) is sampled at t = kT, k = 0, 1, 2, ...
• The sampled sequence is

      y(kT) = Σ_{n=−∞}^{∞} an he(kT − nT) + ne(kT)

  or equivalently

      yk = Σ_n an h_{k−n} + nk = h0 ak + Σ_{n≠k} an h_{k−n} + nk

  where hk = he(kT) and nk = ne(kT), k = 0, 1, 2, ...
• h0 is an arbitrary scaling constant; the sum over n ≠ k is the ISI term
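The decomposition yk = h0·ak + ISI can be made concrete with an assumed symbol-rate-sampled equivalent response. The tap values and symbol sequence below are illustrative assumptions (noise omitted for clarity):

```python
# Sketch of y_k = h0*a_k + sum_{n != k} a_n*h_{k-n} for an assumed
# equivalent impulse response h_k = he(kT) with nonzero neighboring taps.
h = {-1: 0.1, 0: 1.0, 1: 0.2}      # assumed taps; h[-1], h[1] cause ISI
a = [1, -1, 1, 1, -1]              # assumed transmitted symbol sequence

def y(k: int) -> float:
    # y_k = sum_n a_n * h_{k-n}
    return sum(a[n] * h.get(k - n, 0.0) for n in range(len(a)))

for k in range(len(a)):
    desired = h[0] * a[k]          # the wanted term h0 * a_k
    isi = y(k) - desired           # leakage from neighboring symbols
    print(k, y(k), round(isi, 3))
```

With an ideal response (h nonzero only at k = 0) the ISI term would vanish at every sampling instant, which is exactly the zero-ISI condition developed next.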
Signal Design for Bandlimited Channels
Zero ISI

      y(kT) = h0 ak + Σ_{n≠k} an he(kT − nT) + ne(kT)

• To remove ISI, it is necessary and sufficient that

      he(kT − nT) = 0 for n ≠ k, with h0 ≠ 0

• A pulse shape that satisfies this criterion is the sinc function:

      he(t) = sinc(t/T) = sinc(2Bt)

• The smallest value of T for which transmission with zero ISI is possible is

      T = 1/(2B)

• Problems with the sinc pulse shape:
  • It is not possible to create sinc pulses, due to
    • Infinite time duration
    • Sharp transition band in the frequency domain
  • The sinc pulse shape can cause ISI in the presence of timing errors
    • If the received signal is not sampled at exactly the bit instant, then ISI will occur
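Both properties of the sinc pulse, zero ISI at the ideal instants and sensitivity to timing error, can be checked numerically. The 0.25T offset below is an assumed example value:

```python
import math

# The sinc pulse he(t) = sinc(t/T) is zero at every nonzero multiple of T,
# so neighboring pulses contribute nothing at the ideal sampling instants;
# a timing offset makes those contributions nonzero.
def sinc(x: float) -> float:
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

T = 1.0
# contribution of a pulse centered at nT to the sample taken at t = 0
at_ideal = [sinc((0.0 - n * T) / T) for n in range(-3, 4)]
print([round(v, 9) for v in at_ideal])        # only the n = 0 term is nonzero

offset = 0.25 * T                              # assumed mistimed sampling
at_offset = [sinc((offset - n * T) / T) for n in range(-3, 4) if n != 0]
print(any(abs(v) > 0.05 for v in at_offset))  # True: neighbors now leak in
```

Worse, the sinc sidelobes decay only as 1/t, so with a timing error the ISI contributions from many neighbors can add up; this motivates the raised cosine pulse below.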
Raised Cosine Pulse
• The following pulse shape satisfies Nyquist’s criterion for zero ISI:

      he(t) = [ sin(πt/T) / (πt/T) ] · [ cos(πat/T) / (1 − 4a²t²/T²) ]
            = sinc(t/T) · cos(πat/T) / (1 − 4a²t²/T²)

• The Fourier transform of this pulse shape is

      He(f) = T,                                                 0 ≤ |f| < (1−a)/(2T)
            = (T/2) { 1 + cos[ (πT/a)( |f| − (1−a)/(2T) ) ] },   (1−a)/(2T) ≤ |f| ≤ (1+a)/(2T)
            = 0,                                                 |f| > (1+a)/(2T)

• where a is the roll-off factor that determines the excess bandwidth
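A sketch of evaluating he(t) numerically; the helper name and the chosen T and a are illustrative assumptions. The points t = ±T/(2a), where the denominator vanishes, are handled with the standard limiting value:

```python
import math

# Raised cosine pulse he(t) = sinc(t/T) * cos(pi*a*t/T) / (1 - (2*a*t/T)^2)
def raised_cosine(t: float, T: float, a: float) -> float:
    if t == 0:
        return 1.0
    if a > 0 and abs(abs(t) - T / (2 * a)) < 1e-12:
        # limiting value at t = +/- T/(2a), where the denominator vanishes
        return (a / 2.0) * math.sin(math.pi / (2.0 * a))
    x = t / T
    return (math.sin(math.pi * x) / (math.pi * x)) * \
           math.cos(math.pi * a * x) / (1.0 - (2.0 * a * x) ** 2)

T, a = 1.0, 0.5                       # assumed symbol time and roll-off
# Nyquist property: zero at every nonzero multiple of T, any roll-off
print([round(raised_cosine(n * T, T, a), 9) for n in range(1, 4)])
print(raised_cosine(0.0, T, a))       # 1.0 at the sampling instant
```

Unlike the sinc pulse, the raised cosine tails decay as 1/t³ for a > 0, so a small timing error produces far less accumulated ISI, at the cost of excess bandwidth a/(2T).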
Root RC Rolloff Pulse Shaping
• We saw earlier that noise at the receiver is minimized by using a matched filter
• If the transmit filter is H(f), then the receive filter should be H*(f)
• The combination of transmit and receive filters must satisfy Nyquist’s criterion for zero ISI:

      He(f) = H(f) H*(f)   ⇒   |H(f)| = √(He(f))

• A transmit filter with the above response is called the root raised cosine rolloff filter