Aecb14 LN
Ms.V.Bindusree
(Assistant Professor)
Representation of Fourier series, Continuous time periodic signals, Properties of Fourier Series, Dirichlet’s
conditions, Trigonometric Fourier Series and Exponential Fourier Series, Complex Fourier spectrum.
Fourier Transforms:
Deriving Fourier Transform from Fourier series, Fourier Transform of arbitrary signal, Fourier Transform of
standard signals, Fourier Transform of Periodic Signals, Properties of Fourier Transform, Fourier Transform
involving Impulse function and Signum function, Introduction to Hilbert Transforms
MODULE - III SIGNAL TRANSMISSION THROUGH LINEAR SYSTEMS Classes: 12
Linear System, Impulse response, Response of a Linear System, Linear Time Invariant (LTI) System, Linear
Time Variant (LTV) System, Transfer function of a LTI system, Filter characteristics of Linear Systems,
Distortionless transmission through a system, Signal bandwidth, System bandwidth, Ideal LPF, HPF and BPF
characteristics.
Causality and Paley-Wiener criterion for physical realization, Relationship between Bandwidth and rise time, Convolution
and Correlation of Signals, Concept of convolution in Time domain and Frequency domain, Graphical representation of
Convolution.
MODULE - IV LAPLACE TRANSFORM AND Z-TRANSFORM Classes: 09
Laplace Transforms: Laplace Transforms (L.T), Inverse Laplace Transform, Concept of Region of
Convergence (ROC) for Laplace Transforms, Properties of L.T, Relation between L.T and F.T of a signal,
Laplace Transform of certain signals using waveform synthesis.
Z–Transforms: Concept of Z- Transform of a Discrete Sequence, Distinction between Laplace, Fourier and Z
Transforms, Region of Convergence in Z-Transform, Constraints on ROC for various classes of signals, Inverse
Z-transform, Properties of Z-transforms
MODULE - V SAMPLING THEOREM Classes: 06
Graphical and analytical proof for Band Limited Signals, Impulse Sampling, Natural and Flat top Sampling, Reconstruction
of signal from its samples, Effect of under sampling – Aliasing, Introduction to Band Pass Sampling. Correlation: Cross
Correlation and Auto Correlation of Functions, Properties of Correlation Functions, Energy Density Spectrum, Parseval’s
Theorem, Power Density Spectrum, Relation between Autocorrelation Function and Energy/Power Spectral Density
Function, Relation between Convolution and Correlation, Detection of Periodic Signals in the presence of Noise by
Correlation, Extraction of Signal from Noise by filtering
Text Books:
1. Signals, Systems & Communications, B.P. Lathi, BS Publications, 2009.
2. Signals and Systems, A.V. Oppenheim, A.S. Willsky and S.H. Nawab, PHI, 2nd Edition, 2009.
3. Digital Signal Processing, Principles, Algorithms, and Applications, John G. Proakis, Dimitris G. Manolakis,
Pearson Education / PHI. 2007.
Reference Books:
1. Signals & Systems, Simon Haykin and Van Veen, Wiley, 2nd Edition, 2009.
2. Signals and Systems, Iyer and K. Satya Prasad, Cengage Learning, 2nd Edition, 2009.
3. Discrete Time Signal Processing, A. V. Oppenheim and R.W. Schafer, PHI, 2009.
4. Fundamentals of Digital Signal Processing, Lonnie Ludeman, John Wiley / PHI, 2009.
COURSE OBJECTIVES
CO 1 Summarize the basic signals (exponential, sinusoidal, impulse, unit step and signum) for
performing mathematical operations on signals.
CO 2 Demonstrate the concepts of vector algebra for approximating a signal with the orthogonal
functions.
Vector
A vector has both magnitude and direction. The name of a vector is denoted by boldface type and its magnitude
by lightface type.
Example: V is a vector with magnitude V. Consider two vectors V1 and V2 as shown in the following diagram. Let
the component of V1 along V2 be C12V2. The component of V1 along V2 can be
obtained by dropping a perpendicular from the end of V1 onto V2, as shown in the diagram:
V1 = C12V2 + Ve.
But this is not the only way of expressing vector V1 in terms of V2. The alternate possibilities are:
V1 = C1V2 + Ve1,
V1 = C2V2 + Ve2.
For the best approximation, the component value is chosen so that the error signal is minimum. If C12 = 0, then the two signals are said to be orthogonal.
The concept of orthogonality can be applied to signals. Let us consider two signals f1(t) and f2(t).
Similar to vectors, you can approximate f1(t) in terms of f2(t) as f1(t) = C12 f2(t) + fe(t) for (t1 < t < t2)
⇒ fe(t) = f1(t) − C12 f2(t).
One possible way of minimizing the error is to integrate it over the interval t1 to t2.
However, positive and negative error values cancel in that integral, so it does not reduce the error to an appreciable extent. This is corrected by taking the square of the error function:
ε = (1/(t2 − t1)) ∫_{t1}^{t2} fe²(t) dt,
where ε is the mean square value of the error signal. To find the value of C12 which minimizes the error, set
dε/dC12 = 0.
The derivatives of the terms which do not contain C12 are zero, which gives
C12 = ∫_{t1}^{t2} f1(t) f2(t) dt / ∫_{t1}^{t2} f2²(t) dt.
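The coefficient C12 derived above can be checked numerically. The following sketch is illustrative (the square wave and sine are assumed example signals, not from the text):

```python
import numpy as np

# Illustrative sketch: find the C12 that minimizes the mean square error
# when approximating f1(t) by C12*f2(t) over t1 = 0 to t2 = 2*pi.
# Setting d(eps)/dC12 = 0 gives C12 = integral(f1*f2) / integral(f2^2).
t = np.linspace(0.0, 2.0 * np.pi, 100001)
f1 = np.sign(np.sin(t))        # square wave (assumed example signal)
f2 = np.sin(t)                 # approximating signal

c12 = np.trapz(f1 * f2, t) / np.trapz(f2 * f2, t)
print(c12)                     # close to 4/pi, the first Fourier coefficient
```

The printed value approaches 4/π, which is exactly the first trigonometric Fourier coefficient of a unit square wave.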
A complete set of orthogonal vectors is referred to as an orthogonal vector space. Consider a three-dimensional vector
space as shown below:
Consider a vector A at a point (X1, Y1, Z1), and three unit vectors (VX, VY, VZ) in the directions of the X, Y and Z
axes respectively. Since these unit vectors are mutually orthogonal, they satisfy
VX · VX = VY · VY = VZ · VZ = 1,
VX · VY = VY · VZ = VZ · VX = 0.
The vector A can be represented in terms of its components and the unit vectors as
A = X1 VX + Y1 VY + Z1 VZ.    (1)
Any vector in this three-dimensional space can be represented in terms of these three unit vectors only.
If you consider an n-dimensional space, then any vector A in that space can be represented as
A = C1 V1 + C2 V2 + ... + Cn Vn.    (2)
As the magnitude of the unit vectors is unity, for any vector A the component of A along the X axis = A · VX,
the component of A along the Y axis = A · VY, and the component of A along the Z axis = A · VZ. In general, the component along any unit vector VG is
CG = A · VG.    (3)
Substitute equation 2 in equation 3 to obtain each component.
Let us consider a set of n mutually orthogonal functions x1(t), x2(t), ..., xn(t) over the interval t1 to t2. As these
functions are orthogonal to each other, any two signals xj(t), xk(t) satisfy the orthogonality condition, i.e.
∫_{t1}^{t2} xj(t) xk(t) dt = 0 for j ≠ k.
A function f(t) can be approximated in this orthogonal signal space by adding the components along the
mutually orthogonal signals, i.e.
f(t) ≈ C1 x1(t) + C2 x2(t) + ... + Cn xn(t).
The component Ck which minimizes the mean square error is found by setting dε/dCk = 0. All terms that do not contain Ck vanish; in the summation only the r = k term remains, giving
Ck = ∫_{t1}^{t2} f(t) xk(t) dt / ∫_{t1}^{t2} xk²(t) dt.
Mean Square Error:
The average of the square of the error function fe(t) is called the mean square error. It is denoted by ε (epsilon).
If f1(t) and f2(t) are two complex functions, then f1(t) can be expressed in terms of f2(t) as
f1(t) ≈ C12 f2(t), with C12 = ∫_{t1}^{t2} f1(t) f2*(t) dt / ∫_{t1}^{t2} |f2(t)|² dt,
where f2*(t) is the complex conjugate of f2(t). If f1(t) and f2(t) are orthogonal, then C12 = 0.
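The component formula Ck = ∫f·xk dt / ∫xk² dt can be exercised on a concrete orthogonal set. In this sketch the set sin(kt) for odd k and the square-wave target are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: approximate f(t) with components along the mutually
# orthogonal set x_k(t) = sin(k*t), k = 1, 3, 5, on (0, 2*pi). Each
# coefficient follows C_k = integral(f*x_k) / integral(x_k^2).
t = np.linspace(0.0, 2.0 * np.pi, 200001)
f = np.sign(np.sin(t))                  # square wave (assumed example)

approx = np.zeros_like(f)
for k in (1, 3, 5):
    xk = np.sin(k * t)
    ck = np.trapz(f * xk, t) / np.trapz(xk * xk, t)
    approx += ck * xk
    mse = np.trapz((f - approx) ** 2, t) / (2.0 * np.pi)
    print(k, round(ck, 4), round(mse, 4))   # mse shrinks as terms are added
```

Each added orthogonal component reduces the mean square error, which is exactly the minimization argument made above.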
Ramp Signal
Parabolic Signal
Signum Function
sgn(t) = 2u(t) – 1
Exponential Signal
Rectangular Signal
Sinusoidal Signal
A sinusoidal signal has the form x(t) = A cos(ω0 t ± ϕ) or A sin(ω0 t ± ϕ),
where the period is T0 = 2π/ω0.
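The standard signals listed above can be generated directly on a sampled time axis. This is an illustrative sketch; the grid and the convention u(0) = 1 (hence sgn(0) = 1) are assumptions:

```python
import numpy as np

# Illustrative sketch: the standard signals above on a sampled axis.
# The convention u(0) = 1 (so sgn(0) = 1) is an assumption here.
t = np.linspace(-2.0, 2.0, 9)

u = (t >= 0).astype(float)               # unit step
ramp = t * u                             # ramp r(t) = t*u(t)
parabola = (t ** 2 / 2.0) * u            # parabolic signal (t^2/2)*u(t)
sgn = 2.0 * u - 1.0                      # signum via sgn(t) = 2u(t) - 1
x = 3.0 * np.cos(2.0 * np.pi * t + np.pi / 4)   # sinusoid A*cos(w0*t + phi)
```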
Classification of Signals:
A signal is said to be deterministic if there is no uncertainty with respect to its value at any instant of time. Or,
signals which can be defined exactly by a mathematical formula are known as deterministic signals.
A signal is said to be non-deterministic if there is uncertainty with respect to its value at some instant of time. Non-
deterministic signals are random in nature hence they are called random signals. Random signals cannot be described
by a mathematical equation. They are modelled in probabilistic terms.
Even and Odd Signals
Example 1: Let x(t) = t². Then x(−t) = (−t)² = t² = x(t),
∴ t² is an even function.
Example 2: As shown in the following diagram, the rectangle function satisfies x(t) = x(−t), so it is also an even function.
Any function ƒ(t) can be expressed as the sum of its even part ƒe(t) and odd part ƒo(t): ƒ(t) = ƒe(t) + ƒo(t),
where ƒe(t) = ½[ƒ(t) + ƒ(−t)] and ƒo(t) = ½[ƒ(t) − ƒ(−t)].
A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N). Where
T = fundamental time period, 1/T = f = fundamental frequency.
The above signal will repeat for every time interval T0 hence it is periodic with period T0.
NOTE: A signal cannot be both an energy signal and a power signal simultaneously. Also, a signal may be neither an energy nor a power
signal.
A signal is said to be real when it satisfies the condition x(t) = x*(t), and imaginary when it satisfies the
condition x(t) = −x*(t). Example:
If x(t) = 3 then x*(t) = 3* = 3; here x(t) is a real signal.
If x(t) = 3j then x*(t) = (3j)* = −3j = −x(t); hence x(t) is an imaginary signal.
Note: For a real signal, the imaginary part should be zero. Similarly, for an imaginary signal, the real part should be zero.
Basic operations on Signals:
1. Amplitude
2. Time
Amplitude Scaling
Addition
Addition of two signals is nothing but addition of their corresponding amplitudes. This can be best explained by
using the following example:
For -3 < t < 3: amplitude of z(t) = x1(t) + x2(t) = 1 + 2 = 3. For 3 < t < 10: amplitude of z(t) = x1(t) + x2(t) = 0 + 2 = 2.
Subtraction
For -3 < t < 3: amplitude of z(t) = x1(t) - x2(t) = 1 - 2 = -1. For 3 < t < 10: amplitude of z(t) = x1(t) - x2(t) = 0 - 2 = -2.
Multiplication
Multiplication of two signals is nothing but multiplication of their corresponding amplitudes. This can be best
explained by the following example:
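In the absence of the original figures, the amplitude operations can be sketched numerically with the example signals assumed above (x1 of amplitude 1 on -3 < t < 3, x2 of amplitude 2 on -3 < t < 10):

```python
import numpy as np

# Sketch of the worked example: x1(t) has amplitude 1 on -3 < t < 3 and 0
# elsewhere; x2(t) has amplitude 2 on -3 < t < 10. The operations act
# pointwise on the corresponding amplitudes.
t = np.arange(-5, 12)
x1 = np.where((t > -3) & (t < 3), 1.0, 0.0)
x2 = np.where((t > -3) & (t < 10), 2.0, 0.0)

z_add = x1 + x2    # 3 on -3<t<3, 2 on 3<t<10
z_sub = x1 - x2    # -1 on -3<t<3, -2 on 3<t<10
z_mul = x1 * x2    # 2 on -3<t<3, 0 elsewhere
```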
Time Shifting
x(t ±t0) is time shifted version of the signal x(t). x (t + t0) →negative shift
x (t - t0) →positive shift
Time Scaling
x(At) is time scaled version of the signal x(t). where A is always positive.
|A| > 1 → Compression of the signal
|A| < 1 → Expansion of the signal
Note: u(at) = u(t) time scaling is not applicable for unit step function.
Time Reversal
Classification of Systems:
A system is said to be linear when it satisfies the superposition and homogeneity principles. Consider two systems with
inputs x1(t), x2(t) and outputs y1(t), y2(t) respectively. Then, according to the superposition and homogeneity
principles, a linear system must satisfy
T[a1 x1(t) + a2 x2(t)] = a1 y1(t) + a2 y2(t).
Example: consider y(t) = x²(t). For the input a1 x1(t) + a2 x2(t), the output is [a1 x1(t) + a2 x2(t)]²,
which is not equal to a1 y1(t) + a2 y2(t). Hence the system is said to be non-linear.
A system is said to be time variant if its input and output characteristics vary with time.
Otherwise, the system is considered time invariant. The condition for a time-invariant system is:
y(n, t) = y(n − t).
The condition for a time-variant system is:
y(n, t) ≠ y(n − t),
where y(n, t) = T[x(n − t)] is the output for the time-shifted input.
Example:
y(n) = x(−n). Here y(n, t) = T[x(n − t)] = x(−n − t), while y(n − t) = x(−(n − t)) = x(−n + t). Since y(n, t) ≠ y(n − t), the system is time variant.
Linear Time Variant (LTV) and Linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called a linear time variant (LTV) system.
If a system is both linear and time invariant, then it is called a linear time invariant (LTI) system.
Example 1: y(t) = 2x(t). For the present value t = 0, the system output is y(0) = 2x(0). Here, the output is only dependent upon the present input.
Hence the system is memoryless or static.
Example 2: y(t) = 2x(t) + 3x(t − 3). For the present value t = 0, the system output is y(0) = 2x(0) + 3x(−3).
Here x(−3) is a past value of the input, for which the system requires memory to get this output. Hence, the
system is a dynamic system.
A system is said to be causal if its output depends upon present and past inputs, and does not depend upon future
input.
For non causal system, the output depends upon future inputs also.
Example 1: y(t) = 2x(t) + 3x(t − 3). For the present value t = 1, the system output is y(1) = 2x(1) + 3x(−2).
Here, the system output depends only upon present and past inputs. Hence, the system is causal.
Example 2: y(t) = 2x(t) + 3x(t − 3) + 6x(t + 3)
For the present value t = 1, the system output is y(1) = 2x(1) + 3x(−2) + 6x(4). Here, the system output depends upon a
future input x(4). Hence the system is a non-causal system.
A system is said to be invertible if the input of the system can be recovered from the output.
∴ Y(S) = X(S)
→ y(t) = x(t)
Hence, the system is invertible.
The system is said to be stable only when the output is bounded for bounded input. For a bounded input, if the
output is unbounded in the system then it is said to be unstable.
Let the input be u(t) (the unit step, a bounded input); then the output y(t) = u²(t) = u(t) is a bounded output.
Signals are represented mathematically as functions of one or more independent variables. Here
we focus attention on signals involving a single independent variable, which for convenience we will
generally refer to as time.
There are two types of signals: continuous-time signals and discrete-time signals.
Continuous-time signal: the variable of time is continuous; a speech signal as a function of time is an example.
Discrete-time signal: the variable of time is discrete. The weekly Dow Jones stock market index
is an example of a discrete-time signal.
Fig. 1.1 Graphical representation of a continuous-time signal x(t). Fig. 1.2 Graphical representation of a discrete-time signal x[n].
To distinguish between continuous-time and discrete-time signals we use symbol t to denote the
continuous variable and n to denote the discrete-time variable. And for continuous-time signals
we will enclose the independent variable in parentheses (), for discrete-time signals we will
enclose the independent variable in bracket [].
A discrete-time signal x[n] may represent a phenomenon for which the independent variable is
inherently discrete. A discrete-time signal x[n] may represent successive samples of an
underlying phenomenon for which the independent variable is continuous. For example, the
processing of speech on a digital computer requires the use of a discrete time sequence
representing the values of the continuous-time speech signal at discrete points of time.
1.1.2 Signal Energy and Power
If v(t) and i(t) are respectively the voltage and current across a resistor with resistance R , then
the instantaneous power is
p(t) = v(t) i(t) = (1/R) v²(t). (1.1)
The total energy expended over the time interval t1 ≤ t ≤ t2 is
∫_{t1}^{t2} p(t) dt = ∫_{t1}^{t2} (1/R) v²(t) dt. (1.2)
For any continuous-time signal x(t) or any discrete-time signal x[n], the total energy over the
time interval t1 ≤ t ≤ t2 in a continuous-time signal x(t) is defined as
E = ∫_{t1}^{t2} |x(t)|² dt, (1.4)
where |x| denotes the magnitude of the (possibly complex) number x. The time-averaged power
is (1/(t2 − t1)) ∫_{t1}^{t2} |x(t)|² dt. Similarly, the total energy in a discrete-time signal x[n] over the time
interval n1 ≤ n ≤ n2 is defined as
E = Σ_{n=n1}^{n2} |x[n]|², (1.5)
and the average power is (1/(n2 − n1 + 1)) Σ_{n=n1}^{n2} |x[n]|².
In many systems, we will be interested in examining the power and energy in signals over an
infinite time interval, that is, for −∞ < t < ∞ or −∞ < n < ∞. The total energy in continuous
time is then defined as
E∞ = lim_{T→∞} ∫_{−T}^{T} |x(t)|² dt = ∫_{−∞}^{∞} |x(t)|² dt, (1.6)
and in discrete time
E∞ = lim_{N→∞} Σ_{n=−N}^{N} |x[n]|² = Σ_{n=−∞}^{∞} |x[n]|². (1.7)
For some signals, the integral in Eq. (1.6) or the sum in Eq. (1.7) might not converge, for example if x(t)
or x[n] equals a nonzero constant value for all time. Such signals have infinite energy, while
signals with E∞ < ∞ have finite energy.
The time-averaged power over an infinite interval is defined analogously as
P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt (1.8)
in continuous time, and
P∞ = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} |x[n]|² (1.9)
in discrete time. Three classes of signals arise:
Class 1: signals with finite total energy, E∞ < ∞. Such a signal must have zero average power, since
P∞ = lim_{T→∞} E∞/(2T) = 0. (1.10) (Energy signal)
Class 2: signals with finite average power P∞. If P∞ > 0, then E∞ = ∞. An example is the signal
x[n] = 4: it has infinite energy, but an average power of P∞ = 16. (Power signal)
Class 3: signals for which neither P∞ nor E∞ is finite. An example of this kind of signal is x(t) = t.
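The classification can be checked numerically over a large symmetric window, mirroring Eqs. (1.7) and (1.9). The window size and the two test signals are illustrative choices:

```python
import numpy as np

# Illustrative sketch: estimate E and P over n = -N..N to classify a
# unit impulse (energy signal) versus the constant 4 (power signal).
N = 100000
n = np.arange(-N, N + 1)

x1 = np.where(n == 0, 1.0, 0.0)     # unit impulse
x2 = np.full(n.shape, 4.0)          # constant signal x[n] = 4

E1 = np.sum(np.abs(x1) ** 2); P1 = E1 / (2 * N + 1)
E2 = np.sum(np.abs(x2) ** 2); P2 = E2 / (2 * N + 1)
print(E1, P1)   # finite energy, average power tends to 0
print(E2, P2)   # energy grows with N, average power = 16
```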
Fig. 1.5 (a) A discrete-time signal x[n]; (b) its reflection x[−n] about n = 0.
Fig. 1.6 (a) A continuous-time signal x(t); (b) its reflection x(−t) about t = 0.
(a) A continuous-time signal x(t); (b) its time-compressed version x(2t); (c) its time-stretched version x(t/2).
A periodic continuous-time signal x(t) has the property that there is a positive value of T for
which
x(t) = x(t + T) (1.11)
for all t.
From Eq. (1.11), we can deduce that if x(t) is periodic with period T, then x(t) x(t mT ) for
all t and for all integers m . Thus, x(t) is also periodic with period 2T, 3T, …. The fundamental
period T0 of x(t) is the smallest positive value of T for which Eq. (1.11) holds.
Similarly, a periodic discrete-time signal x[n] satisfies
x[n] = x[n + N] (1.12)
for all values of n. If Eq. (1.12) holds, then x[n] is also periodic with period 2N, 3N, …. The
fundamental period N0 is the smallest positive value of N for which Eq. (1.12) holds.
In addition to their use in representing physical phenomena such as the time shift in a radar
signal and the reversal of an audio tape, transformations of the independent variable are
extremely useful in examining some of the important properties that signals may possess.
Signals with these properties can be even or odd signals, or periodic signals:
An important fact is that any signal can be decomposed into a sum of two signals, one of which
is even and one of which is odd.
x(t) = EV{x(t)} + OD{x(t)}, where
EV{x(t)} = ½[x(t) + x(−t)], (1.13)
which is referred to as the even part of x(t). Similarly, the odd part of x(t) is given by
OD{x(t)} = ½[x(t) − x(−t)]. (1.14)
Example: consider the unit-step signal
x[n] = 1 for n ≥ 0, and x[n] = 0 for n < 0.
Its even part is
EV{x[n]} = ½(x[n] + x[−n]) = 1 for n = 0, and ½ for n ≠ 0,
and its odd part is
OD{x[n]} = ½(x[n] − x[−n]) = ½ for n > 0, 0 for n = 0, and −½ for n < 0.
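The decomposition above can be reproduced numerically on a finite symmetric grid (the grid length is an arbitrary choice for this sketch):

```python
import numpy as np

# Sketch: split u[n] into even and odd parts on the symmetric grid
# n = -4..4, matching the result derived above.
n = np.arange(-4, 5)
x = (n >= 0).astype(float)          # u[n]
xr = x[::-1]                        # x[-n] on this symmetric grid

even = 0.5 * (x + xr)               # 1 at n = 0, 1/2 elsewhere
odd = 0.5 * (x - xr)                # +1/2 for n > 0, -1/2 for n < 0
assert np.allclose(even + odd, x)   # the decomposition is exact
```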
Fig. 1.12 The continuous-time complex exponential signal x(t) = Ce^{at}: (a) a > 0; (b) a < 0.
An important property of the complex exponential x(t) = e^{jω0t} is that it is periodic. We know x(t) is periodic with period
T if
e^{jω0T} = 1, (1.18)
so the fundamental period is
T0 = 2π/|ω0|. (1.19)
Thus, the signals e^{jω0t} and e^{−jω0t} have the same fundamental period.
A signal closely related to the periodic complex exponential is the sinusoidal signal
x(t) = A cos(ω0t + ϕ).
With seconds as the unit of t, the units of ϕ and ω0 are radians and radians per second. It is also
common to write ω0 = 2πf0, where f0 has the unit of cycles per second, or Hz.
The sinusoidal signal is also a periodic signal, with fundamental period
T0 = 2π/ω0.
Using Euler’s relation, e^{jω0t} = cos ω0t + j sin ω0t, a complex exponential can be expressed in terms of sinusoidal signals
with the same fundamental period. Similarly, a sinusoidal signal can also be expressed in terms of periodic complex exponentials
with the same fundamental period:
A cos(ω0t + ϕ) = A Re{e^{j(ω0t + ϕ)}} (1.23)
and
A sin(ω0t + ϕ) = A Im{e^{j(ω0t + ϕ)}}. (1.24)
Periodic signals, such as the sinusoidal signals provide important examples of signal with infinite
total energy, but finite average power. For example:
E_period = ∫_0^{T0} |e^{jω0t}|² dt = ∫_0^{T0} 1 dt = T0, (1.25)
P_period = (1/T0) ∫_0^{T0} |e^{jω0t}|² dt = 1. (1.26)
Since there are an infinite number of periods as t ranges from −∞ to ∞, the total energy
integrated over all time is infinite. The average power, however, is finite, since
P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |e^{jω0t}|² dt = 1. (1.27)
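Eq. (1.26) can be checked with a simple Riemann-sum computation; the particular frequency ω0 = 3 used below is an arbitrary choice:

```python
import numpy as np

# Illustrative check of Eq. (1.26): the average power of x(t) = e^{j*w0*t}
# over one period is 1, for any w0 (here w0 = 3 is arbitrary).
w0 = 3.0
T0 = 2.0 * np.pi / w0
t = np.linspace(0.0, T0, 100001)
x = np.exp(1j * w0 * t)

P = np.trapz(np.abs(x) ** 2, t) / T0
print(P)   # 1.0, since |e^{j*w0*t}| = 1 at every instant
```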
Example:
The signal x(t) = e^{j2t} + e^{j3t} can be expressed as x(t) = e^{j2.5t}(e^{−j0.5t} + e^{j0.5t}) = 2e^{j2.5t} cos(0.5t). The
magnitude of x(t) is |x(t)| = 2|cos(0.5t)|, which is commonly referred to as a full-wave rectified
sinusoid, shown in Fig. 1.14.
x (t)
t
4 2 0 2 4
For the general complex exponential x(t) = Ce^{at} with a = r + jω0:
For r = 0, the real and imaginary parts of a complex exponential are sinusoidal.
For r > 0, they are sinusoidal signals multiplied by a growing exponential.
For r < 0, they are sinusoidal signals multiplied by a decaying exponential.
Damped signals: sinusoidal signals multiplied by decaying exponentials are commonly referred
to as damped signals.
Fig. 1.15 (a) Growing sinusoidal signal; (b) decaying sinusoidal signal.
x[n] = Cα^n, (1.30)
where C and α are in general complex numbers. This can alternatively be expressed as
x[n] = Ce^{βn}, (1.31)
where α = e^{β}.
Fig. 1.16 Real exponential signal x[n] = Cα^n: (a) α > 1; (b) 0 < α < 1; (c) −1 < α < 0; (d) α < −1.
Sinusoidal Signals
Similarly, a discrete-time sinusoidal signal can also be expressed in terms of periodic complex exponentials
with the same fundamental period:
A cos(ω0n + ϕ) = A Re{e^{j(ω0n + ϕ)}} (1.35)
and
A sin(ω0n + ϕ) = A Im{e^{j(ω0n + ϕ)}}. (1.36)
The above signals are examples of discrete-time signals with infinite total energy but finite average
power. For example, every sample of x[n] = e^{jω0n} contributes 1 to the signal’s energy; thus the
total energy for −∞ < n < ∞ is infinite, while the average power is equal to 1.
Fig.1.17 Discrete-time sinusoidal signal.
For the general discrete-time complex exponential Cα^n:
For |α| = 1, the real and imaginary parts of a complex exponential are sinusoidal.
For |α| < 1, they are sinusoidal signals multiplied by a decaying exponential.
For |α| > 1, they are sinusoidal signals multiplied by a growing exponential.
Fig. 1.18 (a) Growing sinusoidal signal; (b) decaying sinusoidal signal.
The discrete-time signal x[n] = e^{jω0n} does not have a continuously increasing rate of oscillation
as ω0 is increased in magnitude. As ω0 is increased from 0, the signal oscillates more and
more rapidly until ω0 reaches π; as ω0 is increased further, the rate of oscillation
decreases until ω0 reaches 2π. We conclude that the low-frequency discrete-time exponentials
have values of ω0 near 0, 2π, and any other even multiple of π, while the high frequencies are
located near ±π and other odd multiples of π.
In order for the signal x[n] = e^{jω0n} to be periodic with period N > 0, we must have
e^{jω0(n+N)} = e^{jω0n},
or equivalently
e^{jω0N} = 1. (1.40)
For Eq. (1.40) to hold, ω0N must be a multiple of 2π. That is, there must be an integer m such
that
ω0N = 2πm, (1.41)
or equivalently
ω0/2π = m/N. (1.42)
From Eq. (1.42), x[n] = e^{jω0n} is periodic if ω0/2π is a rational number, and is not periodic
otherwise. When it is periodic with fundamental period N (with m and N having no common factors),
2π/ω0 = N/m. (1.43)
The comparison of the continuous-time and discrete-time signals are summarized in the table
below:
Table 1 Comparison of the signals e^{jω0t} and e^{jω0n}.
e^{jω0t}:
distinct signals for distinct values of ω0;
periodic for any choice of ω0;
fundamental frequency ω0;
fundamental period: undefined for ω0 = 0, and 2π/ω0 for ω0 ≠ 0.
e^{jω0n}:
identical signals for values of ω0 separated by multiples of 2π;
periodic only if ω0 = 2πm/N for some integers N > 0 and m;
fundamental frequency ω0/m;
fundamental period: undefined for ω0 = 0, and m(2π/ω0) for ω0 ≠ 0.
Example : Suppose that we wish to determine the fundamental period of the discrete-time signal
x[n] = e^{j(2π/3)n} + e^{j(3π/4)n}. (1.45)
Solution:
The first exponential on the right hand side has a fundamental period of 3. The second
exponential has a fundamental period of 8.
For the entire signal to repeat, each of the terms in Eq. (1.45) must go through an integer number
of its own fundamental periods. The smallest increment of n that accomplishes this is 24, the least
common multiple of 3 and 8. That is, over an interval of 24 points, the first term will have gone
through 8 of its fundamental periods, the second term through 3 of its fundamental periods, and the
overall signal through exactly one of its fundamental periods.
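The reasoning of this example reduces to a rationality test and a least common multiple, and can be sketched in a few lines (the helper `fundamental_period` is a name introduced for this illustration):

```python
from fractions import Fraction
from math import gcd

# Sketch of the example above: e^{j*w0*n} is periodic only when w0/(2*pi)
# is rational, say m/N in lowest terms; the fundamental period is then N.
# For a sum of exponentials, the overall period is the lcm of the periods.
def fundamental_period(w0_over_2pi: Fraction) -> int:
    return w0_over_2pi.denominator        # N after reduction to lowest terms

N1 = fundamental_period(Fraction(1, 3))   # w0 = 2*pi/3            -> N1 = 3
N2 = fundamental_period(Fraction(3, 8))   # w0 = 3*pi/4 = 2*pi*3/8 -> N2 = 8
N = N1 * N2 // gcd(N1, N2)                # lcm(3, 8) = 24
print(N1, N2, N)
```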
There are only N distinct periodic exponentials in the set given in Eq. (1.46).
1.4 The Unit Impulse and Unit Step Functions
The unit impulse and unit step functions in continuous and discrete time are considerably
important in signal and system analysis.
The discrete-time unit impulse is defined as
δ[n] = 0 for n ≠ 0, and δ[n] = 1 for n = 0, (1.48)
and the discrete-time unit step as
u[n] = 0 for n < 0, and u[n] = 1 for n ≥ 0. (1.49)
The discrete-time unit impulse is the first difference of the discrete-time step:
δ[n] = u[n] − u[n − 1]. (1.50)
The discrete-time unit step is the running sum of the unit sample:
u[n] = Σ_{m=−∞}^{n} δ[m]. (1.51)
It can be seen that for n < 0 the running sum is zero, and for n ≥ 0 the running sum is 1.
The unit impulse sequence can be used to sample the value of a signal at n = 0. Since δ[n] is
nonzero only for n = 0, it follows that
x[n]δ[n] = x[0]δ[n], (1.52)
and more generally x[n]δ[n − n0] = x[n0]δ[n − n0].
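These discrete-time identities can be verified mechanically on a finite grid; the grid length and test signal below are arbitrary choices for this sketch:

```python
import numpy as np

# Sketch: verify the identities above on a finite grid:
# delta[n] = u[n] - u[n-1], u[n] = running sum of delta[m], and the
# sampling property x[n]*delta[n] = x[0]*delta[n].
n = np.arange(-5, 6)
u = (n >= 0).astype(int)
delta = (n == 0).astype(int)

first_diff = u - (n - 1 >= 0).astype(int)   # u[n] - u[n-1]
running_sum = np.cumsum(delta)              # sum of delta[m] for m <= n

assert np.array_equal(first_diff, delta)
assert np.array_equal(running_sum, u)

x = n ** 2 + 7                              # arbitrary test signal, x[0] = 7
assert np.array_equal(x * delta, 7 * delta) # sampling picks out x[0]
```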
The continuous-time unit step is defined as
u(t) = 0 for t < 0, and u(t) = 1 for t > 0. (1.54)
The continuous-time unit step is the running integral of the unit impulse:
u(t) = ∫_{−∞}^{t} δ(τ) dτ. (1.55)
The continuous-time unit impulse can also be considered as the first derivative of the continuous-
time unit step:
δ(t) = du(t)/dt. (1.56)
Since u(t) is discontinuous at t = 0, it is formally not differentiable. This can be
interpreted, however, by considering an approximation to the unit step, uΔ(t), as illustrated in the
figure below, which rises from the value 0 to the value 1 in a short time interval of length Δ.
Fig. 1.22 (a) Continuous approximation uΔ(t) to the unit step; (b) its derivative δΔ(t).
The derivative is
δΔ(t) = duΔ(t)/dt, (1.57)
which is the rectangular pulse
δΔ(t) = 1/Δ for 0 < t < Δ, and 0 otherwise. (1.58)
Note that δΔ(t) is a short pulse, of duration Δ and with unit area for any value of Δ. As Δ → 0,
δΔ(t) becomes narrower and higher, maintaining its unit area. In the limit,
δ(t) = lim_{Δ→0} δΔ(t), (1.60)
and
δ(t) = du(t)/dt. (1.61)
Graphically, δ(t) is represented by an arrow pointing to infinity at t = 0, with “1” next to the arrow
representing the area of the impulse. A scaled impulse kδ(t) is drawn with the label k next to the arrow.
Or, more generally, a scaled and shifted impulse kδ(t − t0) is represented by an arrow at t = t0 with area k.
Example: for a piecewise-constant signal x(t), the derivative dx(t)/dt consists of impulses located at the
discontinuities of x(t), each with area equal to the size of the jump at that discontinuity.
A system can be viewed as a process in which input signals are transformed by the system or
cause the system to respond in some way, resulting in other signals as outputs.
Examples
Fig. 1.25 Examples of systems. (a) A system with input voltage vs(t) and output voltage v0(t);
(b) a system with input equal to the force f(t) and output equal to the velocity v(t).
A continuous-time system is a system in which continuous-time input signals are applied and
results in continuous-time output signals.
x(t) → [continuous-time system] → y(t)
A discrete-time system is a system in which discrete-time input signals are applied and results in
discrete-time output signals.
x[n] → [discrete-time system] → y[n]
1.5.2 Simple Examples of Systems
Example 1: Consider the RC circuit in Fig. 1.25 (a). The current i(t) is proportional to the voltage drop across the resistor:
i(t) = (vs(t) − vC(t))/R. (1.64)
The current through the capacitor is
i(t) = C dvC(t)/dt. (1.65)
Equating the right-hand sides of Eqs. (1.64) and (1.65), we obtain a differential equation describing
the relationship between the input and output:
C dvC(t)/dt + vC(t)/R = vs(t)/R. (1.66)
Example 2: Consider the system in Fig. 1.25 (b), where the force f(t) is the input and the velocity
v(t) is the output. Let m denote the mass of the car and ρv the resistance due to friction.
Equating the acceleration with the net force divided by mass, we obtain
dv(t)/dt = (1/m)[f(t) − ρv(t)], i.e. m dv(t)/dt + ρv(t) = f(t). (1.67)
Eqs. (1.66) and (1.67) are two examples of first-order linear differential equations of the form
dy(t)/dt + ay(t) = bx(t).
Example 3: Consider a simple model for the balance in a bank account from month to month.
Let y[n] denote the balance at the end of the nth month, and suppose that y[n] evolves from month
to month according to the equation
y[n] = 1.01y[n − 1] + x[n], (1.68)
or
y[n] − 1.01y[n − 1] = x[n],
where x[n] is the net deposit (deposits minus withdrawals) during the nth month, and 1.01y[n − 1]
models the fact that we accrue 1% interest each month.
Example 4: Consider a simple digital simulation of the differential equation (1.67), in
which we resolve time into discrete intervals of length Δ and approximate dv(t)/dt at t = nΔ
by the first backward difference
[v(nΔ) − v((n − 1)Δ)]/Δ.
Letting v[n] = v(nΔ) and f[n] = f(nΔ), we obtain the following discrete-time model relating the
sampled signals v[n] and f[n]:
v[n] = (m/(m + ρΔ)) v[n − 1] + (Δ/(m + ρΔ)) f[n]. (1.69)
Comparing Eqs. (1.68) and (1.69), we see that they are two examples of the first-order linear
difference equation
y[n] + ay[n − 1] = bx[n].
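The discrete-time car model of Eq. (1.69) can be iterated directly. The numerical values below (m = 1000 kg, ρ = 100 N·s/m, Δ = 0.1 s, constant force f = 500 N) are illustrative assumptions, not from the text:

```python
# Sketch of Example 4: iterate the discretized car model of Eq. (1.69),
# v[n] = (m/(m + rho*Delta))*v[n-1] + (Delta/(m + rho*Delta))*f[n].
# m, rho, Delta, and f below are assumed illustrative values.
m, rho, dt = 1000.0, 100.0, 0.1
a = m / (m + rho * dt)
b = dt / (m + rho * dt)

v = 0.0
for _ in range(10000):
    v = a * v + b * 500.0      # constant applied force f[n] = 500 N
print(v)                       # approaches the steady state f/rho = 5 m/s
```

At steady state v = av + bf, which solves to v = (b/(1 − a))f = f/ρ, consistent with the continuous model (1.67) when dv/dt = 0.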
Systems can be interconnected in series (cascade), in parallel, or in combinations of both:
Fig. 1.26 Interconnection of systems. (a) A series or cascade interconnection of two systems; (b)
A parallel interconnection of two systems; (c) Combination of both series and parallel systems.
Fig. 1.28 A feedback electrical amplifier.
A system is memoryless if its output for each value of the independent variable at a given time
depends only on the input at that same time. For example, the system y[n] = (2x[n] − x²[n])²
is memoryless.
A resistor is a memoryless system, since the input current i(t) and output voltage v(t) are related by
v(t) = Ri(t), (1.72)
where R is the resistance.
One particularly simple memoryless system is the identity system, whose output is identical to its
input, that is, y(t) = x(t) or y[n] = x[n].
A system is invertible if distinct inputs lead to distinct outputs; an inverse system, cascaded with the
original system, recovers the input:
x[n] → [system] → y[n] → [inverse system] → w[n] = x[n].
For example, for y(t) = 2x(t) the inverse system is w(t) = 0.5y(t), so that w(t) = x(t). For the
accumulator y[n] = Σ_{k=−∞}^{n} x[k], the inverse system is the first difference w[n] = y[n] − y[n − 1] = x[n].
Examples of noninvertible systems are
y[n] = 0,
where the system produces the zero output sequence for any input sequence, and
y(t) = x²(t),
in which case one cannot determine the sign of the input from knowledge of the output.
Encoder in communication systems is an example of invertible system, that is, the input to the
encoder must be exactly recoverable from the output.
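The invertibility of the accumulator, and the non-invertibility of the squarer, can be demonstrated numerically (the random test sequence is an arbitrary choice):

```python
import numpy as np

# Sketch: the accumulator is invertible; the first difference
# w[n] = y[n] - y[n-1] recovers x[n]. The squarer is not invertible,
# since the sign of the input is lost.
rng = np.random.default_rng(0)
x = rng.integers(-5, 5, size=20)

y = np.cumsum(x)                    # accumulator y[n] = sum_{k<=n} x[k]
w = np.diff(y, prepend=0)           # inverse system, taking y[-1] = 0
assert np.array_equal(w, x)         # input exactly recovered

assert (-2.0) ** 2 == (2.0) ** 2    # squarer maps +/-2 to the same output
```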
1.6.3 Causality
A system is causal if the output at any time depends only on the values of the input at present
time and in the past. Such a system is often referred to as being nonanticipative, as the system
output does not anticipate future values of the input.
The RC circuit in Fig. 25 (a) is causal, since the capacitor voltage responds only to the present
and past values of the source voltage. The motion of a car is causal, since it does not anticipate
future actions of the driver.
The following expressions describe systems that are not causal:
y[n] = x[n] − x[n + 1]
and
y(t) = x(t + 1).
All memoryless systems are causal, since the output responds only to the current value of the input.
Example: check the causality of (1) y[n] = x[−n] and (2) y(t) = x(t) cos(t + 1).
Solution: System (1) is not causal, since when n < 0, e.g. n = −4, we see that y[−4] = x[4], so
that the output at this time depends on a future value of the input.
System (2) is causal. The output at any time equals the input at the same time multiplied by a
number that varies with time.
1.6.4 Stability
A stable system is one in which small inputs lead to responses that do not diverge. More
formally, if the input to a stable system is bounded, then the output must also be bounded and
therefore cannot diverge.
The accumulator y[n] = Σ_{k=−∞}^{n} x[k] is not stable, since the sum grows continuously even if x[n] is
bounded.
Check the stability of the two systems S1: y(t) = t x(t) and S2: y(t) = e^{x(t)}.
S1 is not stable, since the constant input x(t) = 1 yields y(t) = t, which is not bounded: no
matter what finite constant we pick, |y(t)| will exceed the constant for some t.
S2 is stable. Assume the input is bounded, |x(t)| ≤ B, or −B ≤ x(t) ≤ B, for all t. We then see
that y(t) is bounded: e^{−B} ≤ y(t) ≤ e^{B}.
A system is time invariant if a time shift in the input signal results in an identical time shift in
the output signal. Mathematically, if the system output is y(t) when the input is x(t), a time-
invariant system will have an output of y(t − t0) when the input is x(t − t0).
Examples:
The system y[n] = nx[n] is not time invariant. This can be demonstrated by a
counterexample. Consider the input signal x1[n] = δ[n], which yields y1[n] = 0. However,
the input x2[n] = δ[n − 1] yields the output y2[n] = nδ[n − 1] = δ[n − 1]. Thus, while x2[n] is
the shifted version of x1[n], y2[n] is not the shifted version of y1[n].
The system y(t) x(2t ) is not time invariant. To check using counterexample. Consider
x1 (t) shown in Fig. 1.30 (a), the resulting output y1 (t) is depicted in Fig. 1.30 (b). If the
input is shifted by 2, that is, consider x 2 (t) x1 (t 2) , as shown in Fig. 1.30 (c), we obtain
the resulting output y2 (t) x2 (2t ) shown in Fig. 1.30 (d). It is clearly seen that
y2 (t) y1 (t 2) , so the system is not time invariant.
Fig. 1.30 (a) x1(t); (b) y1(t) = x1(2t); (c) x2(t) = x1(t − 2); (d) y2(t) = x2(2t); (e) y1(t − 2).
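The first counterexample above can be run numerically. The finite grid and the use of `np.roll` for the shift are assumptions of this illustration:

```python
import numpy as np

# Sketch of the counterexample: y[n] = n*x[n] is not time invariant.
n = np.arange(-5, 6)
system = lambda x: n * x

x1 = (n == 0).astype(float)        # delta[n]
x2 = (n == 1).astype(float)        # delta[n-1], i.e. x1 shifted by 1

y1 = system(x1)                    # all zeros
y2 = system(x2)                    # equals delta[n-1], not a shift of y1

print(np.array_equal(y2, np.roll(y1, 1)))   # False: system is time variant
```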
1.6.6 Linearity
The two properties defining a linear system can be combined into a single statement:
Continuous time: ax1(t) + bx2(t) → ay1(t) + by2(t);
Discrete time: ax1[n] + bx2[n] → ay1[n] + by2[n].
Superposition property: If xk[n], k = 1, 2, 3, ... are a set of inputs with corresponding outputs
yk[n], k = 1, 2, 3, ..., then the response to a linear combination of these inputs,
x[n] = Σ_k ak xk[n] = a1x1[n] + a2x2[n] + a3x3[n] + ..., (1.79)
is
y[n] = Σ_k ak yk[n] = a1y1[n] + a2y2[n] + a3y3[n] + ..., (1.80)
which holds for linear systems in both continuous and discrete time.
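Superposition can be tested numerically on candidate systems. The two systems below, a two-point moving average (linear) and a squarer (nonlinear), are illustrative choices:

```python
import numpy as np

# Sketch: test superposition numerically. The moving average is linear;
# the squarer is not. Both systems are illustrative choices.
rng = np.random.default_rng(1)
x1 = rng.standard_normal(50)
x2 = rng.standard_normal(50)
a, b = 2.0, -3.0

def avg(x):                         # y[n] = (x[n] + x[n-1])/2, with x[-1] = 0
    return 0.5 * (x + np.concatenate(([0.0], x[:-1])))

linear_ok = np.allclose(avg(a * x1 + b * x2), a * avg(x1) + b * avg(x2))
square_ok = np.allclose((a * x1 + b * x2) ** 2, a * x1**2 + b * x2**2)
print(linear_ok, square_ok)         # superposition holds only for avg
```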
Examples:
The system y[n] = 2x[n] + 3 is not linear, since the constant term 3 makes superposition fail;
its zero-input response is y0[n] = 3.
Fig. 1.31 Structure of an incrementally linear system. y0 (t) is the zero-input response of the
system.
The system represented in Fig. 1.31 is called incrementally linear system. The system responds
linearly to the changes in the input.
The overall system output consists of the superposition of the response of a linear system with a
zero-input response.
After successful completion of the course, students will be able to:
CO No. Course Outcomes Knowledge Level (Bloom's Taxonomy)
CO 3 Illustrate Fourier series and Fourier transforms for calculating spectral Apply
characteristics of periodic and aperiodic signals.
CO 4 Make use of Fourier transform and its properties to determine the Apply
frequency response of the systems.
Program
Course Program Outcomes Specific
Outcomes Outcomes
1 2 3 4 5 6 7 8 9 10 11 12 1 2 3
CO 3 - √ - - - - - - - - - - - - -
CO 4 - √ √ - - - - - - - - - √ - -
MODULE – II
FOURIER SERIES
Representation of Fourier series, Continuous time periodic signals, Properties of Fourier Series,
Dirichlet's conditions, Trigonometric Fourier Series and Exponential Fourier Series, Complex
Fourier spectrum.
Fourier Transforms: Deriving Fourier Transform from Fourier series, Fourier Transform of
arbitrary signal, Fourier Transform of standard signals, Fourier Transform of Periodic Signals,
Properties of Fourier Transform, Fourier Transforms involving Impulse function and Signum
function, Introduction to Hilbert Transforms.
3.0 Introduction
By 1807, Fourier had completed a work showing that series of harmonically related sinusoids were
useful in representing the temperature distribution of a body. He claimed that any periodic signal
could be represented by such a series, the Fourier series. He also obtained a representation for
aperiodic signals as weighted integrals of sinusoids, the Fourier transform.
The set of basic signals can be used to construct a broad and useful class of signals.
The response of an LTI system to each signal should be simple enough in structure to provide
us with a convenient representation for the response of the system to any signal constructed
as a linear combination of the basic signal.
The importance of complex exponentials in the study of LTI systems is that the response of an
LTI system to a complex exponential input is the same complex exponential with only a change
in amplitude; that is,

x(t) = e^{st} → y(t) = H(s)e^{st} (continuous time),
x[n] = z^n → y[n] = H(z)z^n (discrete time),

where the complex amplitude factor H(s) or H(z) will in general be a function of the
complex variable s or z.
A signal for which the system output is a (possibly complex) constant times the input is referred
to as an eigenfunction of the system, and the amplitude factor is referred to as the system's
eigenvalue. Complex exponentials are eigenfunctions of LTI systems.
For an input x(t) = e^{st} applied to an LTI system with impulse response h(t), the output is

y(t) = ∫_{−∞}^{∞} h(τ)x(t−τ) dτ = ∫_{−∞}^{∞} h(τ)e^{s(t−τ)} dτ = e^{st} ∫_{−∞}^{∞} h(τ)e^{−sτ} dτ = H(s)e^{st}, (3.3)

where we assume that the integral H(s) = ∫_{−∞}^{∞} h(τ)e^{−sτ} dτ converges.
This shows that complex exponentials are eigenfunctions of LTI systems, and H(s) for a
specific value of s is the eigenvalue associated with the eigenfunction e^{st}.
Complex exponential sequences are eigenfunctions of discrete-time LTI systems. That is,
suppose that an LTI system with impulse response h[n] has as its input the sequence

x[n] = z^n, (3.6)

where z is a complex number. Then the output of the system can be determined from the
convolution sum as

y[n] = Σ_{k=−∞}^{∞} h[k]x[n−k] = Σ_{k=−∞}^{∞} h[k]z^{n−k} = z^n Σ_{k=−∞}^{∞} h[k]z^{−k}. (3.7)

Assuming that the summation on the right-hand side of Eq. (3.7) converges, the output is the
same complex exponential multiplied by a constant H(z) = Σ_{k=−∞}^{∞} h[k]z^{−k} that depends
on the value of z; that is, y[n] = H(z)z^n.
This shows that complex exponentials are eigenfunctions of discrete-time LTI systems, and H(z)
for a specific value of z is the eigenvalue associated with the eigenfunction z^n.
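As a quick numerical illustration of Eq. (3.7), the short sketch below verifies that a
finite-impulse-response system maps z^n to H(z)z^n. The particular impulse response and the
value of z are arbitrary choices for illustration, not taken from the notes.

```python
import numpy as np

# Eigenfunction property: for x[n] = z^n, y[n] = H(z) z^n,
# where H(z) = sum_k h[k] z^{-k}.
h = np.array([1.0, 0.5, 0.25])        # h[0], h[1], h[2]; zero otherwise
z = 0.9 * np.exp(1j * np.pi / 5)      # any nonzero complex number

H_z = sum(h[k] * z**(-k) for k in range(len(h)))

def y(n):
    # convolution sum: y[n] = sum_k h[k] z^(n-k)
    return sum(h[k] * z**(n - k) for k in range(len(h)))

for n in range(5):
    assert np.isclose(y(n), H_z * z**n)
```

The same check works for any absolutely summable h[n], since the sum defining H(z) then converges on |z| = 0.9.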
The following shows the usefulness of decomposing general signals in terms of
eigenfunctions for LTI system analysis. Each exponential term passes through the system scaled
by the corresponding eigenvalue,

a_1 e^{s_1 t} → a_1 H(s_1) e^{s_1 t},

and from the superposition property the response to the sum is the sum of the responses. Similarly,
in discrete time, if the input is

x[n] = Σ_k a_k z_k^n, (3.14)

the output is

y[n] = Σ_k a_k H(z_k) z_k^n. (3.15)
Both of these signals are periodic with fundamental frequency ω_0 and fundamental period
T = 2π/ω_0. Associated with the signal in Eq. (3.18) is the set of harmonically related complex
exponentials

φ_k(t) = e^{jkω_0 t} = e^{jk(2π/T)t}, k = 0, ±1, ±2, ... (3.19)

Each of these signals is periodic with period T (although for |k| ≥ 2, the fundamental period of
φ_k(t) is a fraction of T). Thus, a linear combination of harmonically related complex
exponentials of the form

x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω_0 t} = Σ_{k=−∞}^{∞} a_k e^{jk(2π/T)t} (3.20)

is also periodic with period T.
For k = 0, the term is a constant.
For k = +1 and k = −1, both terms have fundamental frequency equal to ω_0 and are collectively
referred to as the fundamental components or the first harmonic components.
For k = +2 and k = −2, the components are referred to as the second harmonic components.
For k = +N and k = −N, the components are referred to as the Nth harmonic components.
For x(t) real, x(t) = x*(t), so

x(t) = x*(t) = Σ_{k=−∞}^{∞} a_k* e^{−jkω_0 t}. (3.21)

Replacing k by −k in the summation,

x(t) = Σ_{k=−∞}^{∞} a_{−k}* e^{jkω_0 t}, (3.22)

which, compared with Eq. (3.20), requires

a_k* = a_{−k}. (3.23)

To derive the alternative forms of the Fourier series, we rewrite the summation in Eq. (3.20) as

x(t) = a_0 + Σ_{k=1}^{∞} [a_k e^{jkω_0 t} + a_{−k} e^{−jkω_0 t}] = a_0 + Σ_{k=1}^{∞} [a_k e^{jkω_0 t} + a_k* e^{−jkω_0 t}]. (3.25)

Since the two terms inside the summation are complex conjugates of each other, this can be
expressed as

x(t) = a_0 + Σ_{k=1}^{∞} 2 Re{a_k e^{jkω_0 t}}. (3.26)

If a_k is expressed in polar form as a_k = A_k e^{jθ_k}, then

x(t) = a_0 + Σ_{k=1}^{∞} 2A_k cos(kω_0 t + θ_k). (3.27)

This is one commonly encountered form of the Fourier series of real periodic signals in continuous
time. If instead a_k is expressed in rectangular form as a_k = B_k + jC_k, then

x(t) = a_0 + 2 Σ_{k=1}^{∞} [B_k cos kω_0 t − C_k sin kω_0 t]. (3.28)

For real periodic functions, the Fourier series in terms of complex exponentials therefore has the
following three equivalent forms:

x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω_0 t}
     = a_0 + Σ_{k=1}^{∞} 2A_k cos(kω_0 t + θ_k)
     = a_0 + 2 Σ_{k=1}^{∞} [B_k cos kω_0 t − C_k sin kω_0 t].
Multiplying both sides of x(t) = Σ_k a_k e^{jkω_0 t} by e^{−jnω_0 t}, we obtain

x(t) e^{−jnω_0 t} = Σ_{k=−∞}^{∞} a_k e^{jkω_0 t} e^{−jnω_0 t}. (3.29)

Integrating both sides from 0 to T = 2π/ω_0, we have

∫_0^T x(t) e^{−jnω_0 t} dt = Σ_{k=−∞}^{∞} a_k ∫_0^T e^{jkω_0 t} e^{−jnω_0 t} dt = Σ_{k=−∞}^{∞} a_k ∫_0^T e^{j(k−n)ω_0 t} dt. (3.30)

Note that

∫_0^T e^{j(k−n)ω_0 t} dt = T for k = n, and 0 for k ≠ n. (3.31)

Therefore the Fourier series pair is

x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω_0 t} = Σ_{k=−∞}^{∞} a_k e^{jk(2π/T)t}, (3.32)

a_k = (1/T) ∫_T x(t) e^{−jkω_0 t} dt = (1/T) ∫_T x(t) e^{−jk(2π/T)t} dt. (3.33)

Eq. (3.32) is referred to as the synthesis equation, and Eq. (3.33) is referred to as the analysis
equation. The set of coefficients a_k are often called the Fourier series coefficients or the
spectral coefficients of x(t).

Example: For x(t) = sin ω_0 t, write

sin ω_0 t = (1/2j) e^{jω_0 t} − (1/2j) e^{−jω_0 t}.

Comparing the right-hand side of this equation with Eq. (3.32), we have

a_1 = 1/2j, a_{−1} = −1/2j, and a_k = 0 for k ≠ 1 or −1.
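The analysis equation (3.33) can be checked numerically. The sketch below (the period T = 2 is
an arbitrary choice) approximates the integral for x(t) = sin ω_0 t and recovers the coefficients
above:

```python
import numpy as np

# Numerically evaluate the analysis equation a_k = (1/T) ∫ x(t) e^{-jkw0 t} dt
# for x(t) = sin(w0 t); expect a_1 = 1/(2j), a_-1 = -1/(2j), a_k = 0 otherwise.
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 20001)
x = np.sin(w0 * t)

def a(k):
    return np.trapz(x * np.exp(-1j * k * w0 * t), t) / T

assert np.isclose(a(1), 1 / 2j, atol=1e-6)
assert np.isclose(a(-1), -1 / 2j, atol=1e-6)
assert abs(a(0)) < 1e-6 and abs(a(2)) < 1e-6
```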
Example: The periodic square wave, sketched in the figure below and defined over one period, is

x(t) = 1 for |t| < T1, and 0 for T1 < |t| < T/2. (3.35)

[Figure: one period of the square wave, repeated over −2T ≤ t ≤ 2T.]

To determine the Fourier series coefficients for x(t), we use Eq. (3.33). Because of the
symmetry of x(t) about t = 0, we choose −T/2 ≤ t < T/2 as the interval over which the
integration is performed, although any other interval of length T is valid and leads to the same
result.
For k = 0,

a_0 = (1/T) ∫_{−T1}^{T1} x(t) dt = (1/T) ∫_{−T1}^{T1} dt = 2T1/T. (3.36)

For k ≠ 0, we obtain

a_k = (1/T) ∫_{−T1}^{T1} e^{−jkω_0 t} dt = −(1/(jkω_0 T)) e^{−jkω_0 t} |_{−T1}^{T1} = 2 sin(kω_0 T1)/(kω_0 T) = sin(kω_0 T1)/(kπ). (3.37)

The above figure is a bar graph of the Fourier series coefficients for a fixed T1 and several
values of T. For this example the coefficients are real, so they can be depicted with a single
graph. For complex coefficients, two graphs, corresponding to the real and imaginary parts or to
the amplitude and phase of each coefficient, would be required.
Consider approximating a periodic signal x(t) by a finite sum of harmonics,

x_N(t) = Σ_{k=−N}^{N} a_k e^{jkω_0 t}. (3.38)

Let e_N(t) denote the approximation error,

e_N(t) = x(t) − x_N(t) = x(t) − Σ_{k=−N}^{N} a_k e^{jkω_0 t}. (3.39)

The criterion used to measure quantitatively the approximation error is the energy in the error
over one period:

E_N = ∫_T |e_N(t)|² dt. (3.40)

It is shown (Problem 3.66) that the particular choice of coefficients that minimizes the energy
in the error is

a_k = (1/T) ∫_T x(t) e^{−jkω_0 t} dt. (3.41)
It can be seen that Eq. (3.41) is identical to the expression used to determine the Fourier series
coefficients. Thus, if x(t) has a Fourier series representation, the best approximation using only
a finite number of harmonically related complex exponentials is obtained by truncating the
Fourier series to the desired number of terms.
One class of periodic signals that are representable through the Fourier series is the class of
signals that have finite energy over a period,

∫_T |x(t)|² dt < ∞. (3.42)

When this condition is satisfied, we can guarantee that the coefficients obtained from Eq. (3.33)
are finite. If we define

e(t) = x(t) − Σ_{k=−∞}^{∞} a_k e^{jkω_0 t}, (3.43)

then

∫_T |e(t)|² dt = 0. (3.44)
The convergence guaranteed when x(t) has finite energy over a period is very useful. In this
case, we may say that x(t) and its Fourier series representation are indistinguishable.
Dirichlet developed an alternative set of conditions that guarantees the equivalence of the signal
and its Fourier series representation:

Condition 1: Over any period, x(t) must be absolutely integrable; that is,

∫_T |x(t)| dt < ∞. (3.45)

This guarantees that each coefficient is finite, since

|a_k| ≤ (1/T) ∫_T |x(t) e^{−jkω_0 t}| dt = (1/T) ∫_T |x(t)| dt < ∞. (3.46)

A periodic signal that violates this condition is

x(t) = 1/t, 0 < t ≤ 1.

Condition 2: In any finite interval of time, x(t) is of bounded variation; that is, there are no
more than a finite number of maxima and minima during a single period of the signal.
Condition 3: In any finite interval of time, there are only a finite number of discontinuities.
Furthermore, each of these discontinuities is finite.
Summary:
For a periodic signal that has no discontinuities, the Fourier series representation converges and
equals the original signal at every value of t.
For a periodic signal with a finite number of discontinuities in each period, the Fourier series
representation equals the original signal at every value of t except at the isolated points of
discontinuity.
Gibbs Phenomenon:
Near a point where x(t) has a jump discontinuity, the partial sums x_N(t) of a Fourier series
exhibit a substantial overshoot, and increasing N does not diminish the amplitude of the
overshoot, although with increasing N the overshoot occurs over smaller and smaller intervals.
This phenomenon is called the Gibbs phenomenon.
A large enough value of N should be chosen so as to guarantee that the total energy in these
ripples is insignificant.
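The Gibbs behaviour is easy to reproduce numerically. The sketch below builds partial sums of
the square wave of Eq. (3.35) with T = 4, T1 = 1 (so a_k = sin(kπ/2)/(kπ)) and checks that the
peak overshoot near the jump stays close to 9% of the jump height as N grows:

```python
import numpy as np

# Partial sums x_N(t) of the periodic square wave (T = 4, T1 = 1).
T, T1 = 4.0, 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, 2, 4001)

def xN(N):
    s = np.full_like(t, 2 * T1 / T)            # a_0 = 2T1/T
    for k in range(1, N + 1):
        ak = np.sin(k * w0 * T1) / (k * np.pi)
        s += 2 * ak * np.cos(k * w0 * t)       # a_k and a_-k combined (real x)
    return s

# The overshoot (above the level 1) near the jump at t = T1 does not
# decay with N; it stays near 9% of the jump height.
for N in (20, 100, 500):
    overshoot = xN(N).max() - 1.0
    assert 0.06 < overshoot < 0.12
```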
Notation: Suppose x(t) is a periodic signal with period T and fundamental frequency ω_0. If
the Fourier series coefficients of x(t) are denoted by a_k, we use the notation

x(t) ←FS→ a_k

to signify the pairing of a periodic signal with its Fourier series coefficients.
Linearity
Let x(t) and y(t) denote two periodic signals with period T which have Fourier series
coefficients denoted by a_k and b_k; that is,

x(t) ←FS→ a_k and y(t) ←FS→ b_k.

Then for any constants A and B,

Ax(t) + By(t) ←FS→ Aa_k + Bb_k. (3.48)

Time Shifting
When a time shift is applied to a periodic signal x(t), the period T of the signal is preserved.
If x(t) ←FS→ a_k, then

x(t − t_0) ←FS→ e^{−jkω_0 t_0} a_k. (3.49)

Time Reversal
If x(t) ←FS→ a_k, then

x(−t) ←FS→ a_{−k}. (3.50)

Time reversal applied to a continuous-time signal results in a time reversal of the corresponding
sequence of Fourier series coefficients.
If x(t) is even, that is x(−t) = x(t), the Fourier series coefficients are also even, a_{−k} = a_k.
Similarly, if x(t) is odd, that is x(−t) = −x(t), the Fourier series coefficients are also odd,
a_{−k} = −a_k.
Time Scaling
If x(t) has the Fourier series representation x(t) = Σ_k a_k e^{jkω_0 t}, then the time-scaled
signal x(αt), for α > 0, has the Fourier series representation

x(αt) = Σ_{k=−∞}^{∞} a_k e^{jk(αω_0)t}. (3.51)

The Fourier series coefficients have not changed, but the Fourier series representation has changed
because of the change in the fundamental frequency.
3.5.5 Multiplication
Suppose x(t) and y(t) are two periodic signals with period T and that

x(t) ←FS→ a_k, y(t) ←FS→ b_k.

Since the product x(t)y(t) is also periodic with period T, its Fourier series coefficients h_k are
given by

x(t)y(t) ←FS→ h_k = Σ_{l=−∞}^{∞} a_l b_{k−l}. (3.52)
The sum on the right-hand side of Eq. (3.52) may be interpreted as the discrete-time convolution
of the sequence representing the Fourier coefficients of x(t) and the sequence representing the
Fourier coefficients of y(t) .
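A numerical check of the multiplication property (3.52), using x(t) = y(t) = cos ω_0 t:
convolving the coefficient sequence {a_{±1} = 1/2} with itself predicts h_0 = 1/2 and
h_{±2} = 1/4, which the analysis equation applied to cos²(ω_0 t) confirms. The period T = 2
is an arbitrary choice.

```python
import numpy as np

# Fourier coefficients of cos^2(w0 t), computed by the analysis equation,
# should equal the discrete convolution of {a_{±1} = 1/2} with itself.
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 40001)
prod = np.cos(w0 * t) ** 2

def h(k):
    return np.trapz(prod * np.exp(-1j * k * w0 * t), t) / T

assert np.isclose(h(0), 0.5, atol=1e-6)     # h_0 = a_1 a_-1 + a_-1 a_1
assert np.isclose(h(2), 0.25, atol=1e-6)    # h_2 = a_1 a_1
assert np.isclose(h(-2), 0.25, atol=1e-6)
assert abs(h(1)) < 1e-6
```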
Conjugation and Conjugate Symmetry
Taking the complex conjugate of a periodic signal x(t) has the effect of complex conjugation
and time reversal on the corresponding Fourier series coefficients. That is, if

x(t) ←FS→ a_k,

then

x*(t) ←FS→ a_{−k}*. (3.53)

If x(t) is real, that is, x(t) = x*(t), the Fourier series coefficients are conjugate symmetric;
that is,

a_{−k} = a_k*. (3.54)

From this expression, we may obtain various symmetry properties for the magnitude, phase, real
parts, and imaginary parts of the Fourier series coefficients of real signals. For example:
From Eq. (3.54), we see that if x(t) is real, a_0 is real and |a_k| = |a_{−k}|.
If x(t) is real and even, we have a_{−k} = a_k and, from Eq. (3.54), a_{−k} = a_k*, so a_k = a_k*:
the Fourier series coefficients are real and even.
If x(t) is real and odd, the Fourier series coefficients are purely imaginary and odd.
Parseval's relation for continuous-time periodic signals is

(1/T) ∫_T |x(t)|² dt = Σ_{k=−∞}^{∞} |a_k|². (3.55)

Since the average power of the kth harmonic component is

(1/T) ∫_T |a_k e^{jkω_0 t}|² dt = (1/T) ∫_T |a_k|² dt = |a_k|²,

|a_k|² is the average power in the kth harmonic component.
Thus, Parseval’s Relation states that the total average power in a periodic signal equals the sum
of the average powers in all of its harmonic components.
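As a quick sanity check of Parseval's relation (3.55), take x(t) = sin ω_0 t: the average power
(1/T)∫|x|² dt = 1/2 must equal |a_1|² + |a_{−1}|² = 1/4 + 1/4. The period T = 2 below is an
arbitrary choice.

```python
import numpy as np

# Parseval's relation for x(t) = sin(w0 t).
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 20001)
x = np.sin(w0 * t)

avg_power = np.trapz(x**2, t) / T
coeff_power = abs(1 / 2j)**2 + abs(-1 / 2j)**2   # |a_1|^2 + |a_-1|^2

assert np.isclose(avg_power, 0.5, atol=1e-6)
assert np.isclose(coeff_power, 0.5)
```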
3.5.8 Summary of Properties of the Continuous-Time Fourier Series

Conjugate Symmetry for Real Signals: x(t) real → a_k = a_{−k}*; Re{a_k} = Re{a_{−k}}; Im{a_k} = −Im{a_{−k}}; |a_k| = |a_{−k}|; ∠a_k = −∠a_{−k}
Real and Even Signals: x(t) real and even → a_k real and even
Real and Odd Signals: x(t) real and odd → a_k purely imaginary and odd
Even-Odd Decomposition of Real Signals: x_e(t) = Ev{x(t)}, x(t) real → Re{a_k}; x_o(t) = Od{x(t)}, x(t) real → j Im{a_k}
Parseval's Relation for Periodic Signals: (1/T) ∫_T |x(t)|² dt = Σ_{k=−∞}^{∞} |a_k|²
Example: Consider the signal g(t) with a fundamental period of 4, shown in the figure below
(g(t) = 1/2 for |t| < 1 and −1/2 for 1 < |t| < 2, repeated with period 4).

The Fourier series representation can be obtained directly using the analysis equation (3.33). We
may also use the relation of g(t) to the symmetric periodic square wave x(t) discussed earlier.
Referring to that example, with T = 4 and T1 = 1,

g(t) = x(t − 1) − 1/2. (3.56)

The time-shift property indicates that, if the Fourier series coefficients of x(t) are denoted by
a_k, the Fourier series coefficients of x(t − 1) can be expressed as

b_k = a_k e^{−jkπ/2}. (3.57)

The Fourier series coefficients of the dc offset in g(t), that is, of the term −1/2 on the right-hand
side of Eq. (3.56), are given by

c_k = 0 for k ≠ 0, and c_0 = −1/2. (3.58)

Applying the linearity property, we conclude that the coefficients of g(t) can be expressed as

d_k = a_k e^{−jkπ/2} for k ≠ 0, and d_0 = a_0 − 1/2. (3.59)

Replacing a_k = sin(kπ/2)/(kπ) and a_0 = 1/2, we then have

d_k = (sin(kπ/2)/(kπ)) e^{−jkπ/2} for k ≠ 0, and d_0 = 0. (3.60)
Example: The triangular wave signal x(t) with period T = 4 and fundamental frequency
ω_0 = π/2 is shown in the figure below.

The derivative of this signal is the signal g(t) of the preceding example. Denoting the Fourier
series coefficients of g(t) by d_k and those of x(t) by e_k, the differentiation property gives

d_k = jk(π/2) e_k. (3.61)

This equation can be solved for e_k except when k = 0. From Eq. (3.60),

e_k = 2d_k/(jkπ) = (2 sin(kπ/2)/(jk²π²)) e^{−jkπ/2}, k ≠ 0. (3.62)

For k = 0, e_0 can be calculated simply by finding the area under one period of the signal and
dividing by the length of the period:

e_0 = 1/2. (3.63)
Example: Consider the Fourier series representation of a periodic train of impulses,

x(t) = Σ_{k=−∞}^{∞} δ(t − kT). (3.64)

We use Eq. (3.33) and select the integration interval to be −T/2 ≤ t ≤ T/2, avoiding the
placement of impulses at the integration limits:

a_k = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jk(2π/T)t} dt = 1/T. (3.65)

All the Fourier series coefficients of this periodic impulse train are identical, real, and even.

[Figure: the impulse train x(t), the square wave g(t) with parameter T1, and the derivative q(t)
of g(t): impulses of strength +1 at t = −T1 + mT and −1 at t = T1 + mT.]

The periodic impulse train has a straightforward relation to square-wave signals such as the g(t)
above. The derivative of g(t) is the signal q(t) shown in the figure, which can also be
interpreted as the difference of two shifted versions of the impulse train x(t). That is,

q(t) = x(t + T1) − x(t − T1). (3.66)

Based on the time-shifting and linearity properties, we may express the Fourier series coefficients
b_k of q(t) in terms of the coefficients a_k of x(t); that is,

b_k = e^{jkω_0 T1} a_k − e^{−jkω_0 T1} a_k = (1/T)[e^{jkω_0 T1} − e^{−jkω_0 T1}] = 2j sin(kω_0 T1)/T. (3.67)

Finally, since q(t) is the derivative of g(t), the differentiation property gives

b_k = jkω_0 c_k, (3.68)

where the c_k are the Fourier series coefficients of g(t), so for k ≠ 0

c_k = b_k/(jkω_0) = sin(kω_0 T1)/(kπ), (3.69)

while c_0, which cannot be obtained from Eq. (3.68), equals the average value of g(t) over one
period:

c_0 = 2T1/T. (3.70)
Example: Suppose we are given the following facts about a signal x(t):
1. x(t) is real.
2. x(t) is periodic with period T = 4 and has Fourier series coefficients a_k.
3. a_k = 0 for |k| > 1.
4. The signal with Fourier series coefficients b_k = e^{−jkπ/2} a_{−k} is odd.
5. (1/4) ∫_4 |x(t)|² dt = 1/2.

Show that this information is sufficient to determine the signal x(t) to within a sign factor.

According to Fact 3, x(t) has at most three nonzero Fourier series coefficients: a_1, a_0,
and a_{−1}. Since the fundamental frequency is ω_0 = 2π/T = 2π/4 = π/2, it follows that

x(t) = a_0 + a_1 e^{jπt/2} + a_{−1} e^{−jπt/2}. (3.71)

Since x(t) is real (Fact 1), the symmetry property gives a_0 real and a_{−1} = a_1*.
Consequently,

x(t) = a_0 + a_1 e^{jπt/2} + (a_1 e^{jπt/2})* = a_0 + 2 Re{a_1 e^{jπt/2}}. (3.72)

Based on Fact 4 and the time-reversal property, we note that a_{−k} corresponds to x(−t).
Also, the time-shifting property indicates that multiplying the kth Fourier series coefficient
by e^{−jkπ/2} corresponds to shifting the signal by 1 to the right. We conclude that the
coefficients b_k correspond to the signal x(−(t − 1)) = x(1 − t), which according to Fact 4
must be odd. Since x(t) is real, x(1 − t) must also be real. So, based on the symmetry
property, the Fourier series coefficients b_k must be purely imaginary and odd. Thus b_0 = 0
and b_{−1} = −b_1.
Since time reversal and time shifting cannot change the average power per period, Fact 5 holds
even if x(t) is replaced by x(1 − t). That is,

(1/4) ∫_4 |x(1 − t)|² dt = 1/2. (3.73)

Using Parseval's relation,

|b_1|² + |b_{−1}|² = 1/2, (3.74)

and since b_{−1} = −b_1, it follows that |b_1| = 1/2.
Finally, we translate the conditions on b_0 and b_1 into equivalent statements about a_0 and
a_1. First, since b_0 = 0, Fact 4 implies that a_0 = 0. With k = 1, the relation between b_k and
a_k implies a_1 = e^{jπ/2} b_1 = jb_1. Thus, if we take b_1 = −j/2, then a_1 = 1/2 and, from
Eq. (3.72), x(t) = cos(πt/2). Alternatively, if we take b_1 = j/2, then a_1 = −1/2, and
therefore x(t) = −cos(πt/2).
The Fourier series representation of a discrete-time periodic signal is a finite series, as opposed
to the infinite series representation required for continuous-time periodic signals.
A discrete-time signal x[n] is periodic with period N if x[n] = x[n + N]. (3.75)
The fundamental period is the smallest positive N for which Eq. (3.75) holds, and the
fundamental frequency is ω_0 = 2π/N.
The set of all discrete-time complex exponential signals that are periodic with period N is given
by

φ_k[n] = e^{jkω_0 n} = e^{jk(2π/N)n}, k = 0, ±1, ±2, .... (3.76)

All of these signals have fundamental frequencies that are multiples of 2π/N and thus are
harmonically related.
There are only N distinct signals in the set given by Eq. (3.76), because discrete-time complex
exponentials which differ in frequency by a multiple of 2π are identical; that is,
φ_k[n] = φ_{k+rN}[n].
Since the sequences φ_k[n] are distinct over a range of N successive values of k, the summation
in the Fourier series representation need only include terms over this range. We indicate this by
expressing the limits of the summation as k = ⟨N⟩. The discrete-time Fourier series pair is

x[n] = Σ_{k=⟨N⟩} a_k e^{jkω_0 n} = Σ_{k=⟨N⟩} a_k e^{jk(2π/N)n}, (3.80)

a_k = (1/N) Σ_{n=⟨N⟩} x[n] e^{−jkω_0 n} = (1/N) Σ_{n=⟨N⟩} x[n] e^{−jk(2π/N)n}. (3.81)

Eq. (3.80) is called the synthesis equation and Eq. (3.81) is called the analysis equation; the
coefficients a_k are the Fourier series coefficients.
Example: Consider x[n] = sin ω_0 n. The signal x[n] is periodic only if 2π/ω_0 is an integer,
or a ratio of integers. For the case when 2π/ω_0 is an integer N, that is, when

ω_0 = 2π/N, (3.83)

x[n] is periodic with fundamental period N. Expanding the signal as a sum of two complex
exponentials, we get

x[n] = (1/2j) e^{j(2π/N)n} − (1/2j) e^{−j(2π/N)n}, (3.84)

so that

a_1 = 1/2j, a_{−1} = −1/2j, (3.85)

and the remaining coefficients over the interval of summation are zero. As discussed previously,
these coefficients repeat with period N.
The Fourier series coefficients for this example with N = 5 are illustrated in the figure below.
Now consider the case when 2π/ω_0 is a ratio of integers, that is, when

ω_0 = 2πM/N. (3.86)

Assuming that M and N do not have any common factors, x[n] has a fundamental period of N.
Again expanding x[n] as a sum of two complex exponentials, we have

x[n] = (1/2j) e^{jM(2π/N)n} − (1/2j) e^{−jM(2π/N)n}, (3.87)

from which we determine by inspection that a_M = 1/2j, a_{−M} = −1/2j, and the remaining
coefficients over one period of length N are zero. The Fourier coefficients for this example with
M = 3 and N = 5 are depicted in the figure below.
Example: Consider the signal

x[n] = 1 + sin(2πn/N) + 3 cos(2πn/N) + cos(4πn/N + π/2).

Collecting terms as complex exponentials at harmonics of 2π/N, the nonzero Fourier series
coefficients are

a_0 = 1,
a_1 = 3/2 + 1/2j = 3/2 − (1/2)j,
a_{−1} = 3/2 − 1/2j = 3/2 + (1/2)j,
a_2 = (1/2)j,
a_{−2} = −(1/2)j,

with a_k = 0 for other values of k in the interval of summation in the synthesis equation. The
real and imaginary parts of these coefficients for N = 10, and the magnitude and phase of the
coefficients, are depicted in the figure below.
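These coefficients can be verified with a length-N DFT, since the analysis equation (3.81) is
exactly (1/N) times the DFT of one period (negative indices appear as a_{−k} = a_{N−k}):

```python
import numpy as np

# DTFS coefficients of the example signal with N = 10, via the FFT.
N = 10
n = np.arange(N)
x = (1 + np.sin(2*np.pi*n/N) + 3*np.cos(2*np.pi*n/N)
     + np.cos(4*np.pi*n/N + np.pi/2))

a = np.fft.fft(x) / N        # a[k], k = 0..N-1; a[-k] corresponds to a[N-k]

assert np.isclose(a[0], 1)
assert np.isclose(a[1], 1.5 - 0.5j)
assert np.isclose(a[N-1], 1.5 + 0.5j)
assert np.isclose(a[2], 0.5j)
assert np.isclose(a[N-2], -0.5j)
```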
Example: Consider the discrete-time periodic square wave shown in the figure below
(x[n] = 1 for |n| ≤ N1 and 0 for N1 < |n| ≤ N/2, repeated with period N). The Fourier series
coefficients are

a_k = (1/N) Σ_{n=−N1}^{N1} e^{−jk(2π/N)n}, (3.88)

which evaluates to

a_k = (1/N) sin[2πk(N1 + 1/2)/N] / sin(πk/N), k ≠ 0, ±N, ±2N, ..., (3.90)

and

a_k = (2N1 + 1)/N, k = 0, ±N, ±2N, .... (3.91)

The coefficients a_k for 2N1 + 1 = 5 are sketched for N = 10, 20, and 40 in the figure below.
The partial sums for the discrete-time square wave for M = 1, 2, 3, and 4 are depicted in the
figure below, where N = 9 and 2N1 + 1 = 5.
We see that for M = 4 the partial sum exactly equals x[n]. In contrast to the continuous-time
case, there are no convergence issues and there is no Gibbs phenomenon.
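The absence of convergence issues can be confirmed directly: with N = 9 and 2N1 + 1 = 5, the
partial sum with M = 4 already contains all nine distinct harmonics, so it reproduces x[n]
exactly, while smaller M does not:

```python
import numpy as np

# Discrete-time square wave, N = 9, N1 = 2 (x[n] = 1 for |n| <= 2 mod 9).
N, N1 = 9, 2
n = np.arange(N)
x = np.where((n <= N1) | (n >= N - N1), 1.0, 0.0)

a = np.fft.fft(x) / N          # DTFS coefficients, periodic in k with period N

def partial(M):
    # partial sum over k = -M..M of a_k e^{jk(2pi/N)n}
    s = np.zeros(N, dtype=complex)
    for k in range(-M, M + 1):
        s += a[k % N] * np.exp(1j * k * 2*np.pi/N * n)
    return s.real

assert np.allclose(partial(4), x)        # M = 4: exact reconstruction
assert not np.allclose(partial(2), x)    # M = 2: still an approximation
```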
3.7 Properties of Discrete-Time Fourier Series

Let x[n] and y[n] be periodic with period N and fundamental frequency ω_0 = 2π/N, with
x[n] ←FS→ a_k and y[n] ←FS→ b_k. Then:

Multiplication: x[n]y[n] ←FS→ Σ_{l=⟨N⟩} a_l b_{k−l}
First Difference: x[n] − x[n−1] ←FS→ (1 − e^{−jk(2π/N)}) a_k
Running Sum: Σ_{m=−∞}^{n} x[m] (finite-valued and periodic only if a_0 = 0) ←FS→ a_k / (1 − e^{−jk(2π/N)})
Conjugate Symmetry for Real Signals: x[n] real → a_k = a_{−k}*; Re{a_k} = Re{a_{−k}}; Im{a_k} = −Im{a_{−k}}; |a_k| = |a_{−k}|; ∠a_k = −∠a_{−k}
Real and Even Signals: x[n] real and even → a_k real and even
Real and Odd Signals: x[n] real and odd → a_k purely imaginary and odd
Even-Odd Decomposition of Real Signals: x_e[n] = Ev{x[n]}, x[n] real → Re{a_k}; x_o[n] = Od{x[n]}, x[n] real → j Im{a_k}
Parseval's Relation for Periodic Signals: (1/N) Σ_{n=⟨N⟩} |x[n]|² = Σ_{k=⟨N⟩} |a_k|²
3.7.1 Multiplication

x[n]y[n] ←FS→ Σ_{l=⟨N⟩} a_l b_{k−l}. (3.92)

Eq. (3.92) is analogous to a convolution, except that the summation variable is now restricted to
an interval of N consecutive samples. This type of operation is referred to as a periodic
convolution between the two periodic sequences of Fourier coefficients.
The usual form of the convolution sum, where the summation variable ranges from −∞ to ∞, is
sometimes referred to as aperiodic convolution.

3.7.2 First Difference

x[n] − x[n−1] ←FS→ (1 − e^{−jk(2π/N)}) a_k. (3.93)

3.7.3 Parseval's Relation

(1/N) Σ_{n=⟨N⟩} |x[n]|² = Σ_{k=⟨N⟩} |a_k|². (3.94)
3.7.4 Examples

[Figure: x[n] (amplitude up to 2), the square wave x1[n] (amplitude 1), and x2[n] (amplitude 1),
each periodic with period 5, shown for −5 ≤ n ≤ 5.]

The signal x[n] may be viewed as the sum of the square wave x1[n], with Fourier series
coefficients b_k, and x2[n], with Fourier series coefficients c_k:

a_k = b_k + c_k. (3.95)

The sequence x2[n] has only a dc value, which is captured by its zeroth Fourier series
coefficient:

c_0 = (1/5) Σ_{n=0}^{4} x2[n] = 1/2. (3.97)

Since the discrete-time Fourier series coefficients are periodic, it follows that c_k = 1/2
whenever k is an integer multiple of 5.
Example: Suppose we are given the following facts about a sequence x[n]:
1. x[n] is periodic with period N = 6.
2. Σ_{n=0}^{5} x[n] = 2.
3. Σ_{n=2}^{7} (−1)^n x[n] = 1.
4. x[n] has minimum power per period among the set of signals satisfying the preceding three
conditions.

From Fact 1, ω_0 = 2π/6. From Fact 2, we have

a_0 = (1/6) Σ_{n=0}^{5} x[n] = 1/3.

Noting that (−1)^n = e^{jπn} = e^{j(2π/6)3n}, we see from Fact 3 that

a_3 = (1/6) Σ_{n=2}^{7} x[n] e^{−j3(2π/6)n} = (1/6) Σ_{n=2}^{7} (−1)^n x[n] = 1/6.

From Parseval's relation, the average power in x[n] is

P = Σ_{k=0}^{5} |a_k|².

Since each nonzero coefficient contributes a positive amount to P, and since the values of a_0
and a_3 are specified, the value of P is minimized by choosing a_1 = a_2 = a_4 = a_5 = 0. It
follows that

x[n] = a_0 + a_3 e^{jπn} = 1/3 + (1/6)(−1)^n,

which alternates between the values 1/2 and 1/6, as sketched in the figure below.
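The result can be sanity-checked numerically. The sketch below assumes the facts take their
standard form (period N = 6, Σ_{n=0}^{5} x[n] = 2, Σ_{n=2}^{7} (−1)^n x[n] = 1), parts of
which are garbled in the notes:

```python
import numpy as np

# Verify x[n] = 1/3 + (1/6)(-1)^n against the constraints and its DTFS.
n = np.arange(8)
x = 1/3 + (1/6) * (-1.0)**n

assert np.isclose(x[:6].sum(), 2.0)                       # Fact 2
assert np.isclose(sum((-1.0)**k * x[k] for k in range(2, 8)), 1.0)  # Fact 3

a = np.fft.fft(x[:6]) / 6                                 # DTFS coefficients
assert np.isclose(a[0], 1/3) and np.isclose(a[3], 1/6)
assert all(abs(a[k]) < 1e-12 for k in (1, 2, 4, 5))       # minimum power
```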
We have seen that the response of a continuous-time LTI system with impulse response h(t) to a
complex exponential signal est is the same complex exponential multiplied by a complex gain:
y(t) = H(s)e^{st},

where

H(s) = ∫_{−∞}^{∞} h(τ)e^{−sτ} dτ. (3.98)

In particular, for s = jω, the output is y(t) = H(jω)e^{jωt}. The complex functions H(s) and
H(jω) are called the system function (or transfer function) and the frequency response,
respectively.
By superposition, the output of an LTI system to a periodic input represented by a Fourier series

x(t) = Σ_k a_k e^{jkω_0 t} = Σ_k a_k e^{jk(2π/T)t}

is given by

y(t) = Σ_{k=−∞}^{∞} a_k H(jkω_0) e^{jkω_0 t}. (3.99)

That is, the Fourier series coefficients b_k of the periodic output y(t) are given by

b_k = a_k H(jkω_0). (3.100)
Similarly, for discrete-time signals and systems, the response of an LTI system with impulse
response h[n] to a complex exponential signal e^{jωn} is the same complex exponential
multiplied by a complex gain:

y[n] = H(e^{jω}) e^{jωn}, where H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk}.

Example: Let h(t) = e^{−t}u(t), and let the input x(t) be periodic with period T = 1
(ω_0 = 2π). To calculate the Fourier series coefficients of the output y(t), we first compute the
frequency response:

H(jω) = ∫_0^∞ e^{−τ} e^{−jωτ} dτ = 1/(1 + jω). (3.103)

The output is

y(t) = Σ_{k=−3}^{3} b_k e^{jk2πt}, (3.104)

with b_k = a_k H(jk2π); for an input with a_0 = 0 and a_k = 1/4 for 1 ≤ |k| ≤ 3 (consistent
with the coefficients listed here),

b_0 = 0,
b_1 = (1/4)(1/(1 + j2π)), b_{−1} = (1/4)(1/(1 − j2π)),
b_2 = (1/4)(1/(1 + j4π)), b_{−2} = (1/4)(1/(1 − j4π)),
b_3 = (1/4)(1/(1 + j6π)), b_{−3} = (1/4)(1/(1 − j6π)).
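Eq. (3.103) is easy to confirm by numerical integration (the truncation point t = 40 is an
arbitrary choice at which e^{−t} is negligible):

```python
import numpy as np

# H(jw) = ∫_0^∞ e^{-t} e^{-jwt} dt should equal 1/(1 + jw).
t = np.linspace(0, 40, 400001)
h = np.exp(-t)

for w in (0.0, 2*np.pi, 4*np.pi, 6*np.pi):
    H_num = np.trapz(h * np.exp(-1j * w * t), t)
    assert np.isclose(H_num, 1/(1 + 1j*w), atol=1e-4)
```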
Example: Consider an LTI system with impulse response h[n] = α^n u[n], −1 < α < 1, and with
the input

x[n] = cos(2πn/N). (3.105)

Writing x[n] as a sum of complex exponentials,

x[n] = (1/2) e^{j(2π/N)n} + (1/2) e^{−j(2π/N)n}.

The frequency response is

H(e^{jω}) = Σ_{n=0}^{∞} α^n e^{−jωn} = Σ_{n=0}^{∞} (αe^{−jω})^n = 1/(1 − αe^{−jω}). (3.106)

The Fourier series for the output is therefore

y[n] = (1/2) H(e^{j2π/N}) e^{j(2π/N)n} + (1/2) H(e^{−j2π/N}) e^{−j(2π/N)n}
     = (1/2)(1/(1 − αe^{−j2π/N})) e^{j(2π/N)n} + (1/2)(1/(1 − αe^{j2π/N})) e^{−j(2π/N)n}. (3.107)
3.9 Filtering
Filtering means changing the relative amplitudes of the frequency components in a signal, or
eliminating some frequency components entirely.
Filtering can be conveniently accomplished through the use of LTI systems with an appropriately
chosen frequency response.
LTI systems that change the shape of the spectrum of the input signal are referred to as
frequency-shaping filters.
LTI systems that are designed to pass some frequencies essentially undistorted and significantly
attenuate or eliminate others are referred to as frequency-selective filters.
Example: A first-order lowpass filter with impulse response h(t) = e^{−t}u(t) cuts off the high
frequencies in a periodic input signal, while low-frequency harmonics are mostly left intact. The
frequency response of this filter is

H(jω) = ∫_0^∞ e^{−τ} e^{−jωτ} dτ = 1/(1 + jω). (3.107)

We can see that as the frequency increases, the magnitude of the frequency response |H(jω)|
decreases. If the periodic input signal is a rectangular wave, then the output signal will have
Fourier series coefficients b_k given by

b_k = a_k H(jkω_0) = (sin(kω_0 T1)/(kπ)) (1/(1 + jkω_0)), k ≠ 0, (3.108)

b_0 = a_0 H(0) = 2T1/T. (3.109)

The reduced power at high frequencies produces an output signal that is smoother than the input
signal.
3.10 Examples of Continuous-Time Filters Described by Differential Equations

The first-order RC circuit is one of the electrical circuits used to perform continuous-time
filtering. The circuit can perform either lowpass or highpass filtering depending on what we take
as the output signal.

[Figure: series RC circuit driven by the source voltage v_s(t), with the resistor voltage v_r(t)
and the capacitor voltage v_c(t) as candidate outputs.]

If we take the voltage across the capacitor as the output, then the output voltage is related to the
input through the linear constant-coefficient differential equation

RC dv_c(t)/dt + v_c(t) = v_s(t). (3.111)

Assuming initial rest, the system described by Eq. (3.111) is LTI. If the input is
v_s(t) = e^{jωt}, we must have the output v_c(t) = H(jω)e^{jωt}. Substituting these expressions
into Eq. (3.111), we have

RC d[H(jω)e^{jωt}]/dt + H(jω)e^{jωt} = e^{jωt}, (3.112)

or

H(jω) = 1/(1 + RCjω), (3.113)

whose inverse transform is the impulse response

h(t) = (1/RC) e^{−t/RC} u(t). (3.115)
If we choose the output from the resistor, then we get an RC highpass filter.
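The two output choices give complementary responses, which can be checked numerically. The
sketch below (RC = 1 is an arbitrary normalization) uses the standard first-order forms
H_lp(jω) = 1/(1 + jωRC) for the capacitor output and H_hp(jω) = jωRC/(1 + jωRC) for the
resistor output; since v_r + v_c = v_s, the two must sum to 1:

```python
import numpy as np

# Complementary lowpass/highpass responses of the first-order RC circuit.
RC = 1.0

def H_lp(w):
    return 1 / (1 + 1j * w * RC)         # capacitor output

def H_hp(w):
    return 1j * w * RC / (1 + 1j * w * RC)   # resistor output

w = np.logspace(-2, 2, 50)
assert np.allclose(H_lp(w) + H_hp(w), 1.0)   # KVL: v_r + v_c = v_s
assert abs(H_lp(0.01)) > 0.99 and abs(H_lp(100)) < 0.02   # lowpass
assert abs(H_hp(0.01)) < 0.02 and abs(H_hp(100)) > 0.99   # highpass
```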
From the eigenfunction property of complex exponential signals, if x[n] = e^{jωn}, then
y[n] = H(e^{jω})e^{jωn}, where H(e^{jω}) is the frequency response of the system. For the
first-order system with impulse response h[n] = a^n u[n], |a| < 1,

H(e^{jω}) = 1/(1 − ae^{−jω}), (3.117)

and the step response is

s[n] = ((1 − a^{n+1})/(1 − a)) u[n]. (3.119)

From the plots of |H(e^{jω})| we can see that for a = 0.6 the system acts as a lowpass filter,
and for a = −0.6 it acts as a highpass filter. In fact, for any positive value of a < 1 the system
approximates a lowpass filter, and for any negative value of a > −1 it approximates a
highpass filter, where |a| controls the bandwidth, with broader passbands as |a| is decreased.
The trade-off between time-domain and frequency-domain characteristics, as discussed in
continuous time, also exists for discrete-time systems.
A nonrecursive (FIR) filter computes

y[n] = Σ_{k=−N}^{M} b_k x[n−k]. (3.120)

It is a weighted average of (N + M + 1) values of x[n], with the weights given by the
coefficients b_k.
One frequently used example is a moving-average filter, where the output y[n] is an average of
values of x[n] in the vicinity of n, the result corresponding to a smoothing operation, or lowpass
filtering. An example:

y[n] = (1/3)(x[n−1] + x[n] + x[n+1]), (3.121)

with impulse response

h[n] = (1/3)(δ[n+1] + δ[n] + δ[n−1]) (3.122)

and frequency response

H(e^{jω}) = (1/3)(e^{jω} + 1 + e^{−jω}) = (1/3)(1 + 2 cos ω). (3.123)

A generalized moving-average filter can be expressed as

y[n] = (1/(N + M + 1)) Σ_{k=−N}^{M} x[n−k]. (3.124)
The frequency responses with different average window lengths are plotted in the figure below.
A simple two-point difference filter, in contrast, acts as a highpass filter, with frequency response

H(e^{jω}) = (1/2)(1 − e^{−jω}) = j e^{−jω/2} sin(ω/2). (3.127)
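The three-point moving average of Eq. (3.121) can be checked against the closed form (3.123):

```python
import numpy as np

# DTFT of h[n] = (1/3){1, 1, 1} on n = -1, 0, 1 versus (1 + 2 cos w)/3.
def H(w):
    return sum((1/3) * np.exp(-1j * w * n) for n in (-1, 0, 1))

for w in np.linspace(-np.pi, np.pi, 9):
    assert np.isclose(H(w), (1 + 2*np.cos(w)) / 3)

assert np.isclose(H(0), 1.0)          # unity dc gain: lowpass
assert np.isclose(H(np.pi), -1/3)     # attenuated (and inverted) at w = pi
```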
Continuous-Time Fourier Transform

4.0 Introduction

Starting from the Fourier series representation for the continuous-time periodic square wave

x(t) = 1 for |t| < T1, and 0 for T1 < |t| < T/2, (4.1)

the Fourier coefficients a_k for this square wave are

a_k = 2 sin(kω_0 T1)/(kω_0 T), (4.2)

or alternatively

T a_k = 2 sin(ωT1)/ω |_{ω = kω_0}; (4.3)

that is, T a_k are samples of the envelope 2 sin(ωT1)/ω. As T increases, T a_k become more
and more closely spaced samples of this envelope, and as T → ∞ the Fourier series coefficients
approach the envelope function.
This example illustrates the basic idea behind Fourier’s development of a representation for
aperiodic signals.
Based on this idea, we can derive the Fourier transform for aperiodic signals.
Suppose x(t) is a signal of finite duration, that is, x(t) = 0 for |t| > T1, as illustrated in the
figure below. From this aperiodic signal we construct a periodic signal x̃(t) by repeating x(t)
with period T. As T → ∞, x̃(t) → x(t) for any finite value of t.
The periodic signal has the Fourier series representation

x̃(t) = Σ_{k=−∞}^{∞} a_k e^{jkω_0 t}, (4.4)

a_k = (1/T) ∫_{−T/2}^{T/2} x̃(t) e^{−jkω_0 t} dt. (4.5)

Since x̃(t) = x(t) for |t| < T/2, and since x(t) = 0 outside this interval, we have

a_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω_0 t} dt = (1/T) ∫_{−∞}^{∞} x(t) e^{−jkω_0 t} dt.

Defining the envelope

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt, (4.6)

we have for the coefficients a_k

a_k = (1/T) X(jkω_0).

Then x̃(t) can be expressed in terms of X(jω); that is,

x̃(t) = Σ_{k=−∞}^{∞} (1/T) X(jkω_0) e^{jkω_0 t} = (1/2π) Σ_{k=−∞}^{∞} X(jkω_0) e^{jkω_0 t} ω_0. (4.7)

As T → ∞, x̃(t) → x(t), ω_0 → 0, and the sum passes to an integral; consequently, Eq. (4.7)
becomes a representation of x(t):

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω (Inverse Fourier Transform) (4.8)

and

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt (Fourier Transform). (4.9)
If the signal x(t) has finite energy, that is, it is square integrable,

∫_{−∞}^{∞} |x(t)|² dt < ∞, (4.10)

then X(jω) is guaranteed to be finite, and the signal reconstructed from X(jω) differs from
x(t) only by a signal of zero energy. Alternatively, the Dirichlet conditions guarantee
convergence:

Condition 1: x(t) is absolutely integrable,

∫_{−∞}^{∞} |x(t)| dt < ∞. (4.12)

Condition 2: In any finite interval of time, x(t) has a finite number of maxima and minima.
Condition 3: In any finite interval of time, there are only a finite number of discontinuities.
Furthermore, each of these discontinuities is finite.
4.1.3 Examples of Continuous-Time Fourier Transform

Example: x(t) = e^{−at}u(t), a > 0.

X(jω) = ∫_0^∞ e^{−at} e^{−jωt} dt = 1/(a + jω).

The Fourier transform can be plotted in terms of magnitude and phase:

|X(jω)| = 1/√(a² + ω²), ∠X(jω) = −tan^{−1}(ω/a). (4.13)

Example: x(t) = e^{−a|t|}, a > 0.

X(jω) = ∫_{−∞}^{∞} e^{−a|t|} e^{−jωt} dt = ∫_{−∞}^{0} e^{at} e^{−jωt} dt + ∫_{0}^{∞} e^{−at} e^{−jωt} dt
      = 1/(a − jω) + 1/(a + jω) = 2a/(a² + ω²).

The signal and its Fourier transform are sketched in the figure below.
Example: x(t) = δ(t). (4.14)

X(jω) = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = 1. (4.15)

That is, the impulse has a Fourier transform consisting of equal contributions at all frequencies.

Example: The rectangular pulse

x(t) = 1 for |t| < T1, and 0 for |t| > T1. (4.16)

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_{−T1}^{T1} e^{−jωt} dt = 2 sin(ωT1)/ω. (4.17)

The inverse transform is

x̂(t) = (1/2π) ∫_{−∞}^{∞} (2 sin(ωT1)/ω) e^{jωt} dω, (4.18)

and, since x(t) is square integrable,

∫_{−∞}^{∞} |x(t) − x̂(t)|² dt = 0. (4.19)

x̂(t) converges to x(t) everywhere except at the discontinuities t = ±T1, where x̂(t) converges
to 1/2, the average of the values of x(t) on both sides of the discontinuity.
In addition, the convergence of x̂(t) to x(t) exhibits the Gibbs phenomenon. Specifically,
consider the integral over a finite-length interval of frequencies,

(1/2π) ∫_{−W}^{W} (2 sin(ωT1)/ω) e^{jωt} dω.

As W → ∞, this signal converges to x(t) everywhere except at the discontinuities. Moreover,
the signal exhibits ripples near the discontinuities. The peak values of these ripples do not
decrease as W increases, although the ripples do become compressed toward the discontinuity,
and the energy in the ripples converges to zero.
Example: The ideal lowpass spectrum

X(jω) = 1 for |ω| < W, and 0 for |ω| > W,

has the inverse transform

x(t) = (1/2π) ∫_{−W}^{W} e^{jωt} dω = sin(Wt)/(πt).

Comparing the results of the preceding example and this example, we have the pair of
relationships

square wave (time) ←FT→ sinc function (frequency),
sinc function (time) ←FT→ square wave (frequency).

That is, a square wave in the time domain has a sinc function as its Fourier transform; if the
signal in the time domain is a sinc function, then its Fourier transform is a square wave. This
correspondence is an instance of the duality property.
We also note that when the width of X(jω) increases, its inverse Fourier transform x(t) is
compressed. As W → ∞, X(jω) becomes flat over all frequencies and x(t) converges to an
impulse. The transform pair for several different values of W is shown in the figure below.
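The pair X(jω) = 1 for |ω| < W ↔ x(t) = sin(Wt)/(πt) can be confirmed by numerically
evaluating the inverse-transform integral (4.8):

```python
import numpy as np

# x(t) = (1/2pi) ∫_{-W}^{W} e^{jwt} dw should equal sin(Wt)/(pi t).
W = 5.0
w = np.linspace(-W, W, 200001)

def x(t):
    return np.trapz(np.exp(1j * w * t), w).real / (2 * np.pi)

for t in (0.3, 1.0, 2.5):
    assert np.isclose(x(t), np.sin(W * t) / (np.pi * t), atol=1e-6)

# Near t = 0 the value approaches W/pi.
assert np.isclose(x(1e-9), W / np.pi, atol=1e-6)
```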
80
4.2 The Fourier Transform for Periodic Signals

A periodic signal with Fourier series

x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω_0 t} (4.20)

has the Fourier transform

X(jω) = Σ_{k=−∞}^{∞} 2π a_k δ(ω − kω_0). (4.21)

Example: For the periodic square wave shown in the earlier figure, the Fourier series
coefficients are

a_k = sin(kω_0 T1)/(kπ), (4.22)

so the Fourier transform is

X(jω) = Σ_{k=−∞}^{∞} (2 sin(kω_0 T1)/k) δ(ω − kω_0).
Example: The Fourier transforms of x(t) = sin ω_0 t and x(t) = cos ω_0 t are shown in the
figure below:

sin ω_0 t ←F→ (π/j)[δ(ω − ω_0) − δ(ω + ω_0)], cos ω_0 t ←F→ π[δ(ω − ω_0) + δ(ω + ω_0)].

Example: The periodic impulse train has Fourier series coefficients a_k = 1/T, so

X(jω) = (2π/T) Σ_{k=−∞}^{∞} δ(ω − 2πk/T).

The Fourier transform of a periodic impulse train in the time domain with period T is thus a
periodic impulse train in the frequency domain with period 2π/T, as sketched in the figure
below.
4.3.1 Linearity

If x(t) ←F→ X(jω) and y(t) ←F→ Y(jω), then

ax(t) + by(t) ←F→ aX(jω) + bY(jω).

4.3.2 Time Shifting

If x(t) ←F→ X(jω), then

x(t − t_0) ←F→ e^{−jωt_0} X(jω).

Thus, the effect of a time shift on a signal is to introduce into its transform a phase shift,
namely −ωt_0.

Example: To evaluate the Fourier transform of the signal x(t) shown in the figure below (a
staircase pulse of height 1 on 1 ≤ t < 2 and 3 < t ≤ 4, and height 1.5 on 2 ≤ t ≤ 3), express
x(t) in terms of the rectangular pulses x1(t) (width 1, centered at t = 0) and x2(t) (width 3,
centered at t = 0) as

x(t) = (1/2) x1(t − 5/2) + x2(t − 5/2).

x1(t) and x2(t) are rectangular pulse signals with Fourier transforms

X1(jω) = 2 sin(ω/2)/ω and X2(jω) = 2 sin(3ω/2)/ω.

Using the linearity and time-shifting properties of the Fourier transform yields

X(jω) = e^{−j5ω/2} [(sin(ω/2) + 2 sin(3ω/2))/ω].
4.3.3 Conjugation and Conjugate Symmetry
If x(t) F X ( j )
Then
x *(t)
F
X * ( j ) . (4. 20)
Since X * ( j ) x(t)e dt
j t
x * (t)e j t dt ,
X ( j ) X * ( j ) . (4. 20)
We can also prove that if x(t) is both real and even, then X ( j ) will also be real and even.
Similarly, if x(t) is both real and odd, then X ( j ) will also be purely imaginary and odd.
A real function x(t) can be expressed as the sum of an even function x_e(t) = Ev{x(t)} and an odd function x_o(t) = Od{x(t)}. That is,

F{x(t)} = F{x_e(t)} + F{x_o(t)}.

From the preceding discussion, F{x_e(t)} is a real function and F{x_o(t)} is purely imaginary. Thus we conclude that, with x(t) real,

x(t) ↔ X(jω),
Ev{x(t)} ↔ Re{X(jω)},
Od{x(t)} ↔ j Im{X(jω)}.
Example: Use the symmetry properties of the Fourier transform and the result

e^{−at} u(t) ↔ 1/(a + jω)

to evaluate the Fourier transform of the signal x(t) = e^{−a|t|}, where a > 0.

Since

x(t) = e^{−a|t|} = e^{−at} u(t) + e^{at} u(−t) = 2 Ev{e^{−at} u(t)},

we have

X(jω) = 2 Re{1/(a + jω)} = 2a/(a² + ω²).
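This closed form can be checked numerically. The sketch below (the values a = 1, ω = 2 are arbitrary choices) evaluates the Fourier integral of e^{−a|t|}, which by evenness reduces to a cosine integral over the half-line, and compares it with 2a/(a² + ω²):

```python
import numpy as np
from scipy.integrate import quad

a, w = 1.0, 2.0  # arbitrary test values for the decay rate and the frequency

# By evenness, the Fourier integral of e^{-a|t|} reduces to a real cosine integral
val, _ = quad(lambda t: np.exp(-a * t) * np.cos(w * t), 0, np.inf)
numeric = 2 * val                   # double the half-line integral
closed = 2 * a / (a**2 + w**2)      # the transform derived above
print(numeric, closed)              # both ≈ 0.4
```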
4.3.4 Differentiation and Integration

If x(t) ↔ X(jω), then

dx(t)/dt ↔ jω X(jω),

∫_{−∞}^{t} x(τ) dτ ↔ (1/jω) X(jω) + π X(0) δ(ω).
Example: Consider the Fourier transform of the unit step x(t) = u(t).

It is known that

g(t) = δ(t) ↔ 1,

and x(t) = ∫_{−∞}^{t} g(τ) dτ, so by the integration property

X(jω) = 1/(jω) + π G(0) δ(ω) = 1/(jω) + π δ(ω),

where G(0) = 1.
Example: Consider the Fourier transform of the function x(t) shown in the figure below. From the figure we can see that its derivative g(t) = dx(t)/dt is the sum of a rectangular pulse and two impulses, with transform

G(jω) = 2 sin ω / ω − e^{jω} − e^{−jω} = 2 sin ω / ω − 2 cos ω.

Since G(0) = 0, the integration property gives X(jω) = G(jω)/(jω). It can be seen that X(jω) is purely imaginary and odd, which is consistent with the fact that x(t) is real and odd.
4.3.5 Time Scaling

If x(t) ↔ X(jω), then

x(at) ↔ (1/|a|) X(jω/a).

From this equation we see that if the signal is compressed in the time domain, its spectrum is
stretched in the frequency domain. Conversely, if the signal is stretched, the corresponding
spectrum is compressed.
In particular, with a = −1,

x(−t) ↔ X(−jω).
That is, reversing a signal in time also reverses its Fourier transform.
4.3.6 Duality
The duality of the Fourier transform can be demonstrated using the following example.
x₁(t) = 1 for |t| < T₁, 0 for |t| > T₁   ↔   X₁(jω) = 2 sin(ωT₁)/ω,

x₂(t) = sin(Wt)/(πt)   ↔   X₂(jω) = 1 for |ω| < W, 0 for |ω| > W.
The symmetry exhibited by these two examples extends to the Fourier transform in general. For any
transform pair, there is a dual pair with the time and frequency variables interchanged.
Example: Use duality and the result

e^{−|t|} ↔ 2/(1 + ω²)

to find the Fourier transform G(jω) of the signal

g(t) = 2/(1 + t²).

From the synthesis equation,

e^{−|t|} = (1/2π) ∫ [2/(1 + ω²)] e^{jωt} dω.

Multiplying both sides by 2π and interchanging the roles of t and ω (using the evenness of both functions) gives

G(jω) = ∫ [2/(1 + t²)] e^{−jωt} dt = 2π e^{−|ω|}.
Based on the duality property we can obtain some other properties of the Fourier transform:

−jt x(t) ↔ dX(jω)/dω,

e^{jω₀t} x(t) ↔ X(j(ω − ω₀)),

−x(t)/(jt) + π x(0) δ(t) ↔ ∫_{−∞}^{ω} X(jη) dη.
4.3.7 Parseval's Relation

If x(t) ↔ X(jω), we have

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω.
Parseval's relation states that the total energy may be determined either by computing the energy
per unit time, |x(t)|², and integrating over all time, or by computing the energy per unit frequency,
|X(jω)|²/2π, and integrating over all frequencies. For this reason, |X(jω)|² is often referred to
as the energy-density spectrum.
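The discrete analogue of Parseval's relation for the DFT, Σ|x[n]|² = (1/N) Σ|X[k]|², is easy to verify numerically; the signal below is arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # arbitrary test signal
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)            # energy computed in the time domain
energy_freq = np.sum(np.abs(X) ** 2) / len(x)   # energy computed in the frequency domain
print(np.isclose(energy_time, energy_freq))     # → True
```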
4.4 The Convolution Property

H(jω), the transform of the impulse response, is the frequency response of the LTI system,
which also completely characterizes the system.

Example: Consider a differentiator,

y(t) = dx(t)/dt.

By the differentiation property, Y(jω) = jω X(jω), so the frequency response is

H(jω) = Y(jω)/X(jω) = jω.
Example: Consider an integrator,

y(t) = ∫_{−∞}^{t} x(τ) dτ.

The impulse response of an integrator is the unit step, and therefore the frequency response of
the system is

H(jω) = 1/(jω) + π δ(ω).

So we have

Y(jω) = H(jω) X(jω) = (1/jω) X(jω) + π X(0) δ(ω),

which is consistent with the integration property.
Example: Consider the response of an LTI system with impulse response h(t) = e^{−at} u(t) to the input x(t) = e^{−bt} u(t), with a, b > 0 and a ≠ b. Then

X(jω) = 1/(b + jω)   and   H(jω) = 1/(a + jω).

Therefore,

Y(jω) = 1/[(a + jω)(b + jω)] = [1/(b − a)] [1/(a + jω) − 1/(b + jω)].

The inverse transform of each of the two terms can be written directly. Using the linearity
property, we have

y(t) = [1/(b − a)] [e^{−at} u(t) − e^{−bt} u(t)].
We should note that when a = b, the above partial-fraction expansion is not valid. However,
with a = b, we have

Y(jω) = 1/(a + jω)²,

and since

e^{−at} u(t) ↔ 1/(a + jω)   and   t e^{−at} u(t) ↔ j d/dω [1/(a + jω)] = 1/(a + jω)²,

we have

y(t) = t e^{−at} u(t).
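The closed-form response above can be cross-checked by discretizing the convolution integral; a = 1 and b = 2 are arbitrary choices, and the small residual is just discretization error of the Riemann sum:

```python
import numpy as np

a, b, dt = 1.0, 2.0, 1e-3
t = np.arange(0, 10, dt)
x = np.exp(-b * t)                    # input x(t) = e^{-bt} u(t)
h = np.exp(-a * t)                    # impulse response h(t) = e^{-at} u(t)

y_num = np.convolve(x, h)[:len(t)] * dt                  # Riemann-sum convolution
y_exact = (np.exp(-a * t) - np.exp(-b * t)) / (b - a)    # closed form derived above
print(np.max(np.abs(y_num - y_exact)))                   # small discretization error
```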
4.5 The Multiplication Property

If r(t) = s(t) p(t), then

R(jω) = (1/2π) ∫_{−∞}^{∞} S(jθ) P(j(ω − θ)) dθ.
Multiplication of one signal by another can be thought of as using one signal to scale or modulate the
amplitude of the other; consequently, the multiplication of two signals is often referred to as
amplitude modulation.
Example : Let s(t) be a signal whose spectrum S ( j ) is depicted in the figure below.
Also consider the signal

p(t) = cos ω₀t,

whose transform is

P(jω) = π[δ(ω − ω₀) + δ(ω + ω₀)].

The spectrum of r(t) = s(t) p(t) is obtained by using the multiplication property:

R(jω) = (1/2π) ∫ S(jθ) P(j(ω − θ)) dθ = (1/2) S(j(ω − ω₀)) + (1/2) S(j(ω + ω₀)).
From the figure we can see that the information in the signal is preserved, although it has been shifted
to higher frequencies. This forms the basis for sinusoidal amplitude modulation systems in
communications.
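The frequency shift is easy to observe numerically. The sketch below (the sample rate and the message and carrier frequencies are arbitrary choices) modulates a 20 Hz cosine onto a 200 Hz carrier and locates the two strongest spectral peaks at f_c ± f_m:

```python
import numpy as np

fs, f_m, f_c = 1000, 20, 200                 # arbitrary sample, message, carrier freqs (Hz)
t = np.arange(0, 1, 1 / fs)
s = np.cos(2 * np.pi * f_m * t)              # message signal s(t)
r = s * np.cos(2 * np.pi * f_c * t)          # amplitude-modulated signal r(t)

spectrum = np.abs(np.fft.rfft(r))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])   # two strongest positive-frequency peaks
print(peaks)                                       # [180.0, 220.0] : f_c - f_m and f_c + f_m
```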
Example: If we multiply the signal r(t) obtained in the preceding example by p(t) = cos ω₀t once more,
that is, g(t) = r(t) p(t), the spectra P(jω), R(jω) and G(jω) are as plotted in the figure below.
If we apply a lowpass filter with frequency response H(jω) that is constant at low frequencies and
zero at high frequencies, the output spectrum will be a scaled replica of S(jω), so the output will
be a scaled version of s(t): the modulated signal is recovered.
4.6 Summary of Fourier Transform Properties and Basic Fourier Transform Pairs
4.7 Systems Characterized by Linear Constant-Coefficient Differential Equations

An LTI system described by the differential equation

Σ_{k=0}^{N} a_k d^k y(t)/dt^k = Σ_{k=0}^{M} b_k d^k x(t)/dt^k   (4.67)

has frequency response

H(jω) = Y(jω)/X(jω) = [Σ_{k=0}^{M} b_k (jω)^k] / [Σ_{k=0}^{N} a_k (jω)^k],   (4.68)

where X(jω), Y(jω) and H(jω) are the Fourier transforms of the input x(t), output y(t) and
the impulse response h(t), respectively.
Example: Consider a stable LTI system characterized by

dy(t)/dt + a y(t) = x(t), with a > 0.

Its frequency response is

H(jω) = 1/(jω + a).
Example: Consider a stable LTI system characterized by the differential equation

d²y(t)/dt² + 4 dy(t)/dt + 3 y(t) = dx(t)/dt + 2 x(t).

Its frequency response is

H(jω) = (jω + 2) / [(jω)² + 4(jω) + 3] = (jω + 2) / [(jω + 1)(jω + 3)],

and the partial-fraction expansion is

H(jω) = (1/2)/(jω + 1) + (1/2)/(jω + 3),

so h(t) = (1/2) e^{−t} u(t) + (1/2) e^{−3t} u(t).
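The partial-fraction step can be reproduced with scipy.signal.residue (a sketch; polynomial coefficients are listed highest power first, and the pairing order of poles and residues returned by scipy may vary):

```python
import numpy as np
from scipy.signal import residue

# H(s) = (s + 2) / (s^2 + 4s + 3), evaluated on s = jω for the frequency response
r, p, k = residue([1, 2], [1, 4, 3])
print(np.sort(p.real), np.sort(r.real))   # poles -3, -1; both residues 0.5
```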
After successful completion of the course, students will be able to:

CO No. | Course Outcome | Knowledge Level (Bloom's Taxonomy)
CO 5 | Identify the linearity and time invariance properties for obtaining the behavior of a linear time invariant system. | Apply
CO 6 | Classify the ideal low pass, high pass, band pass and band stop filters for determining the signal and system bandwidth. | Understand

Mapping of Course Outcomes to Program Outcomes (PO 1-12) and Program Specific Outcomes (PSO 1-3):
CO 5: PO 1, PO 2
CO 6: PO 1
MODULE – III
SIGNAL TRANSMISSION THROUGH LINEAR SYSTEMS
Linear System, Impulse response, Response of a Linear System, Linear Time Invariant(LTI) System,
Linear Time Variant (LTV) System, Transfer function of a LTI System, Filter characteristic of Linear
System, Distortion less transmission through a system, Signal bandwidth, System Bandwidth, Ideal
LPF, HPF, and BPF characteristics.
Causality and Paley-Wiener criterion for physical realization, Relationship between Bandwidth and
rise time, Convolution and Correlation of Signals, Concept of convolution in Time domain and
Frequency domain, Graphical representation of Convolution.
Linear systems
A system is said to be a linear if it obeys homogeneity and additivity properties. This implies that the
response of a linear system to weighted sum of input signals is equal to the same weighted sum of responses
of the system to each of those signals.
Homogeneity property: This property says if input signal weighted by any arbitrary constant then output
signal also weighted by same arbitrary constant
Additive property: Response of system to sum of two input signals is equal to sum of individual response of
the system.
Linear Time Variant (LTV) System: A system is said to be LTV if it satisfies the linearity property but not
time invariance. An LTV system is one whose parameters change with time: the coefficients of its describing
differential equation are functions of time. Superposition and homogeneity still hold, but delaying the input
by t0 seconds does not simply delay the output by t0.
Impulse response
The impulse response h(t) of an LTI system is its output when the input is an impulse applied at t = 0.
For an arbitrary input x(t), the output is

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ.

This is known as the convolution integral, and it gives the relationship among the input signal, the output
signal and the impulse response of the system. An LTI system is completely characterized by its impulse response.
If the Fourier transforms of the input x(t), the output y(t) and the impulse response h(t) are X(ω), Y(ω)
and H(ω) respectively, then Y(ω) = H(ω)X(ω). For a real h(t), the magnitude response is symmetric (even) and
the phase response is antisymmetric (odd) in ω.
Response to eigenfunctions
If the input to the system is a complex exponential e^{st}, then the output is y(t) = H(s)e^{st}: a complex
exponential of the same frequency as the input, multiplied by the complex constant H(s). An input signal is
called an eigenfunction of the system if the corresponding output is a constant multiple of the input signal.
Thus the complex exponentials e^{st} are all eigenfunctions of an LTI system, since the output has the same
functional form as the input.
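A discrete-time analogue makes the eigenfunction property concrete: pass e^{jωn} through an FIR impulse response and, once the short transient dies out, the output is the input scaled by H(e^{jω}). All numbers below are arbitrary illustrative choices:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])                  # an arbitrary FIR impulse response
w = 0.7                                         # test frequency in rad/sample
n = np.arange(200)
x = np.exp(1j * w * n)                          # complex exponential input

y = np.convolve(x, h)[:len(n)]                  # system output
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))   # frequency response H(e^{jw})

# Past the transient (first len(h)-1 samples), the output is H times the input
print(np.allclose(y[5:], H * x[5:]))            # → True
```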
Properties of LTI systems

Commutative property: x(t) * h(t) = h(t) * x(t).

Associative property: [x(t) * h1(t)] * h2(t) = x(t) * [h1(t) * h2(t)]. This implies that a cascade of two or
more LTI systems is equivalent to a single system whose impulse response is the convolution of the impulse
responses of the cascaded systems.

Distributive property: x(t) * [h1(t) + h2(t)] = x(t) * h1(t) + x(t) * h2(t). This implies that two or more LTI
systems in parallel, driven by the same input, are equivalent to a single system whose impulse response is the
sum of the individual impulse responses.
Static and dynamic systems
A system is static (memoryless) if its output at any time depends only on the value of its input at that instant.
For an LTI system, this can hold only if its impulse response is itself an impulse. From the convolution property
we know that, in general, the output depends on past values of the input, so a general LTI system has memory and
is therefore a dynamic system.
Causality
A continuous-time LTI system is causal if and only if its impulse response satisfies h(t) = 0 for t < 0, in which
case the convolution integral becomes

y(t) = ∫_{0}^{∞} h(τ) x(t − τ) dτ.

Stability: A continuous-time system is bounded-input bounded-output (BIBO) stable if and only if its impulse
response is absolutely integrable. Consider an LTI system with impulse response h(t) and a bounded input
|x(t)| ≤ M; then

|y(t)| ≤ M ∫_{−∞}^{∞} |h(τ)| dτ.

For the output to be bounded, the impulse response must be absolutely integrable:

∫_{−∞}^{∞} |h(τ)| dτ < ∞.

This is the necessary and sufficient condition for BIBO stability.
Invertibility:
A system T is said to be invertible if and only if there exists an inverse system T⁻¹ such that the cascade
T T⁻¹ is the identity system. For an LTI system with impulse response h1(t), this is equivalent to the existence
of another system with impulse response h2(t) such that h1(t) * h2(t) = δ(t). The inverse Fourier transform of
1/H1(ω) gives the impulse response of the inverse system.
In general, the input-output relationship of a continuous-time causal LTI system is described by a linear
constant-coefficient differential equation with zero initial conditions:

Σ_{k=0}^{N} a_k d^k y(t)/dt^k = Σ_{k=0}^{M} b_k d^k x(t)/dt^k,

where a_k and b_k are constant coefficients and the order N refers to the highest derivative of y(t) in the
equation. Applying the Fourier transform to both sides gives H(ω) = Y(ω)/X(ω) as a ratio of polynomials in jω.
A necessary condition for a magnitude function |H(ω)| to be realizable by a causal system is

∫_{−∞}^{∞} |ln|H(ω)|| / (1 + ω²) dω < ∞.

This condition is known as the Paley-Wiener criterion. For the criterion to apply, the magnitude function must be
square integrable, that is,

∫_{−∞}^{∞} |H(ω)|² dω < ∞.

All causal systems that satisfy the Paley-Wiener criterion are physically realizable.
The magnitude function may be zero at some discrete frequencies, but it cannot be zero over a finite band of
frequencies, since this would cause the integral above to become infinite. Therefore ideal filters are not
physically realizable. It can also be concluded that the magnitude function cannot fall off to zero faster than
exponential order.
For example, an exponential roll-off such as |H(ω)| = e^{−a|ω|} is permissible, but the Gaussian error curve
|H(ω)| = e^{−ω²} is not permissible.
It is, however, possible to construct physically realizable filters close to the ideal filter characteristics: a
low-pass filter whose transfer-function magnitude differs from the ideal response by at most ε, where ε is an
arbitrarily small value, produces nearly ideal characteristics, as shown in the figure below.
Bandwidth and Rise Time:
The system bandwidth can be estimated from the rise time, which can be obtained from the step response of the
system; for a low-pass system the rise time is inversely proportional to the bandwidth.
Rise time: the rise time tr of the output response is defined as the time the response takes to rise from 10% to
90% of the final value of the signal (or, more generally, the time taken to go from zero to the final value).
The output of any continuous-time LTI system is the convolution of the input x(t) with the impulse response h(t)
of the system.
Case I: if the input signal is causal, that is x(t) = 0 for t < 0, then

y(t) = ∫_{0}^{∞} x(τ) h(t − τ) dτ.

Case II: if the system is causal, that is h(t) = 0 for t < 0, then

y(t) = ∫_{−∞}^{t} x(τ) h(t − τ) dτ.

Case III: if both the input signal and the system are causal, then

y(t) = ∫_{0}^{t} x(τ) h(t − τ) dτ.

Shift property
If x1(t) * x2(t) = y(t), then shifting one signal by t1 seconds shifts the convolution by the same amount:
x1(t − t1) * x2(t) = y(t − t1). If the signals are shifted by t1 and t2 respectively, then
x1(t − t1) * x2(t − t2) = y(t − t1 − t2).
Convolution of a function with the unit step
Convolving any arbitrary function x(t) with the unit step function u(t) gives a running integral:

x(t) * u(t) = ∫_{−∞}^{∞} x(τ) u(t − τ) dτ = ∫_{−∞}^{t} x(τ) dτ,

since u(t − τ) = 1 for τ ≤ t and 0 for τ > t.

Width property
If the durations of two finite-duration signals x1(t) and x2(t) are T1 and T2 respectively, then the duration of
y(t) = x1(t) * x2(t) is equal to the sum of the two durations:

T = T1 + T2.

Also, if the areas under the finite signals are A1 and A2 respectively, then the area under y(t) is the product of
the two areas:

A = area under y(t) = (area under x1) × (area under x2) = A1 A2.

Finally, x1(t) * x2(t) ↔ X1(ω) X2(ω): convolution in one domain is transformed into a product operation in the
other domain.
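The width and area properties can be checked numerically with two rectangular pulses (the durations T1 = 2 and T2 = 3 below are arbitrary choices):

```python
import numpy as np

dt = 1e-3
x1 = np.ones(int(2.0 / dt))        # rect pulse: duration T1 = 2, area A1 = 2
x2 = np.ones(int(3.0 / dt))        # rect pulse: duration T2 = 3, area A2 = 3

y = np.convolve(x1, x2) * dt       # Riemann-sum approximation of the convolution
duration = len(y) * dt             # ≈ T1 + T2 = 5
area = np.sum(y) * dt              # ≈ A1 * A2 = 6
print(duration, area)
```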
Mapping of Course Outcomes to Program Outcomes (PO 1-12) and Program Specific Outcomes (PSO 1-3):
CO 7: PO 2
CO 8: PO 2, PO 3, PSO 1
Lecture Notes Signals & Systems
MODULE – IV
LAPLACE TRANSFORM AND Z TRANSFORM
Laplace Transforms: Laplace Transforms (L.T), Inverse Laplace Transform, Concept of Region of
Convergence (ROC) for Laplace Transforms, Properties of L.T, Relation between L.T and F.T of a signal,
Laplace Transform of certain signals using waveform synthesis.
Z–Transforms: Concept of Z- Transform of a Discrete Sequence, Distinction between Laplace, Fourier and Z
Transforms, Region of Convergence in Z-Transform, Constraints on ROC for various classes of signals, Inverse
Z-transform, Properties of Z-transforms
We know that for a continuous-time LTI system with impulse response h(t), the output y(t) of the
system in response to the complex exponential input e^{st} is y(t) = H(s) e^{st}.
A. Definition:
The function H(s) is referred to as the Laplace transform of h(t). For a general continuous-time signal
x(t), the Laplace transform X(s) is defined as

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt,

where s = σ + jω is a complex variable.
We know that Dirichlet-type conditions are used to define the existence of the Laplace transform, i.e.
The function f has a finite number of maxima and minima.
There must be a finite number of discontinuities in the signal f in the given interval of time.
It must be absolutely integrable (together with the convergence factor e^{−σt}) in the given interval of time,
i.e. ∫ |f(t) e^{−σt}| dt < ∞.
Region of convergence
The range of variation of σ (the real part of s) for which the Laplace transform converges is called the region
of convergence (ROC).
If x(t) is absolutely integrable and of finite duration, then the ROC is the entire s-plane.
If x(t) is a two-sided signal, then the ROC is the intersection of the regions for its right-sided and left-sided
parts.
Example 1: Find the Laplace transform and ROC of x(t) = e^{−at} u(t).

X(s) = 1/(s + a), ROC: Re(s) > −a.

Example 2: Find the Laplace transform and ROC of x(t) = e^{at} u(−t).

X(s) = −1/(s − a), ROC: Re(s) < a.

Example 3: Find the Laplace transform and ROC of x(t) = e^{−at} u(t) + e^{at} u(−t).

X(s) = 1/(s + a) − 1/(s − a), ROC: −a < Re(s) < a.
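These standard pairs can be checked symbolically with sympy; its laplace_transform is the unilateral transform, so it covers Example 1 directly (the positivity assumption on a stands in for the ROC condition):

```python
from sympy import symbols, exp, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s')
a = symbols('a', positive=True)

# Unilateral Laplace transform of x(t) = e^{-at} u(t)
X = laplace_transform(exp(-a * t), t, s, noconds=True)
print(X)   # 1/(a + s)
```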
A system is said to be stable when all poles of its transfer function lie in the left half of the s-plane.
A system is said to be unstable when at least one pole of its transfer function lies in the right half of the
s-plane.
A system is said to be marginally stable when at least one pole of its transfer function lies on the jω axis of
the s-plane.
Analysis of discrete-time LTI systems can be done using z-transforms. The z-transform is a powerful
mathematical tool for converting difference equations into algebraic equations.
The bilateral (two-sided) z-transform of a discrete-time signal x(n) is given by

X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n},

and the unilateral (one-sided) z-transform of a discrete-time signal x(n) is given by

X(z) = Σ_{n=0}^{∞} x(n) z^{−n}.

The z-transform may exist for some signals for which the Discrete Time Fourier Transform (DTFT) does
not exist. Evaluating X(z) on the unit circle, z = e^{jω}, gives the DTFT; this is the relation between the
Fourier transform and the z-transform.
Inverse Z-transform:

x(n) = (1/2πj) ∮ X(z) z^{n−1} dz.

Z-Transform Properties:
Linearity property: a x1(n) + b x2(n) ↔ a X1(z) + b X2(z).
Convolution property: x1(n) * x2(n) ↔ X1(z) X2(z).
Correlation property: r12(l) ↔ X1(z) X2(z⁻¹).
The initial value and final value theorems of the z-transform are defined for causal signals.
For a causal signal x(n), the initial value theorem states that

x(0) = lim_{z→∞} X(z).

This is used to find the initial value of the signal without taking the inverse z-transform.
The final value theorem states that

x(∞) = lim_{z→1} (z − 1) X(z).

This is used to find the final value of the signal without taking the inverse z-transform.
The range of variation of z for which the z-transform converges is called the region of convergence (ROC) of the
z-transform.
If x(n) is a finite duration causal (right-sided) sequence, then the ROC is the entire z-plane
except z = 0.
If x(n) is a finite duration anti-causal (left-sided) sequence, then the ROC is the entire
z-plane except z = ∞.
If x(n) is an infinite duration causal sequence, the ROC is the exterior of the circle of radius a,
i.e. |z| > a.
If x(n) is an infinite duration anti-causal sequence, the ROC is the interior of the circle of radius
a, i.e. |z| < a.
If x(n) is a finite duration two-sided sequence, then the ROC is the entire z-plane except z = 0
and z = ∞.
The plot of the ROC has two cases, a > 1 and a < 1, since a is not known in advance.
In the transfer function H(z), the order of the numerator cannot be greater than the order of the
denominator.
Inverse Z transform:
Three different methods are:
1. Partial fraction method
2. Power series method
3. Long division method
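The partial fraction method can be sketched with scipy.signal.residuez, which expands a rational X(z) in powers of z⁻¹; the transform below is a hypothetical example with poles at z = 1 and z = 0.5 (the pairing order of the returned residues and poles may vary):

```python
import numpy as np
from scipy.signal import residuez

# X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2) = 2/(1 - z^-1) - 1/(1 - 0.5 z^-1)
r, p, k = residuez([1.0], [1.0, -1.5, 0.5])
print(r.real, p.real)                       # residue/pole pairs (ordering may vary)

# Causal inverse (ROC |z| > 1): x[n] = sum_i r_i * p_i**n for n >= 0
n = np.arange(6)
x = sum(ri * pi ** n for ri, pi in zip(r, p))
print(x.real)                               # 1, 1.5, 1.75, 1.875, ...
```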
For z not equal to zero or infinity, each term in X(z) will be finite and consequently X(z) will
converge. Note that when X(z) includes both positive and negative powers of z, i.e. for a finite duration
two-sided sequence, the ROC is 0 < |z| < ∞.
Example: Consider the sequence x[n] = aⁿ for 0 ≤ n ≤ N − 1, and 0 elsewhere.
Sol:

X(z) = Σ_{n=0}^{N−1} (a z⁻¹)ⁿ = [1 − (a z⁻¹)^N] / (1 − a z⁻¹) = (1/z^{N−1}) (z^N − a^N)/(z − a).

From this equation we see that there is a pole of (N − 1)th order at z = 0 and a pole at z = a.
Since x[n] is a finite sequence and is zero for n < 0, the ROC is |z| > 0. The N roots of the
numerator polynomial are at z_k = a e^{j2πk/N}, k = 0, 1, …, N − 1.
Mapping of Course Outcomes to Program Outcomes (PO 1-12) and Program Specific Outcomes (PSO 1-3):
CO 9: PO 1, PO 2
CO 10: PO 2, PSO 1
CO 11: PO 2, PO 3
CO 12: PO 1
MODULE – V
SAMPLING THEOREM
Graphical and analytical proof for Band Limited Signals, Impulse Sampling, Natural and Flat top
Sampling, Reconstruction of signal from its samples, Effect of under sampling – Aliasing,
Introduction to Band Pass Sampling. Correlation: Cross Correlation and Auto Correlation of
Functions, Properties of Correlation Functions, Energy Density Spectrum, Parseval’s Theorem,
Power Density Spectrum, Relation between Autocorrelation Function and Energy/Power Spectral
Density Function, Relation between Convolution and Correlation, Detection of Periodic Signals in
the presence of Noise by Correlation, Extraction of Signal from Noise by filtering.
Graphical and analytical proof for Band Limited Signals:
Sampling theorem: A continuous-time signal can be represented by its samples and recovered back
when the sampling frequency fs is greater than or equal to twice the highest frequency component of the message
signal, i.e.

fs ≥ 2fm.
Proof: Consider a continuous-time signal x(t) whose spectrum is band limited to fm Hz, i.e. the spectrum
of x(t) is zero for |ω| > ωm. Sampling of the input signal x(t) can be obtained by multiplying x(t) with an
impulse train of period Ts. The output of the multiplier is a discrete signal called the sampled signal, which is
represented by y(t) in the following diagrams.
Here, you can observe that the sampled signal takes the period of the impulse train. The process of sampling can
be described by the following mathematical expression:

y(t) = x(t) · Σ_{n=−∞}^{∞} δ(t − nTs).
To reconstruct x(t), you must recover input signal spectrum X(ω) from sampled signal spectrum Y(ω), which is
possible when there is no overlapping between the cycles of Y(ω).
There are three types of sampling techniques:
Impulse sampling.
Natural sampling.
Flat Top sampling.
Impulse Sampling
Impulse sampling can be performed by multiplying the input signal x(t) with an impulse train of
period T. Here, the amplitude of each impulse changes with the amplitude of the input signal x(t). The output of
the sampler is given by

y(t) = x(t) · Σ_n δ(t − nT) = Σ_n x(nT) δ(t − nT).   …(1)

To get the spectrum of the sampled signal, take the Fourier transform of equation (1) on both sides:

Y(ω) = (1/T) Σ_n X(ω − nωs), where ωs = 2π/T.

This is called ideal sampling or impulse sampling. It cannot be used practically because the pulse width cannot
be zero and the generation of an impulse train is not possible practically.
Natural Sampling
Natural sampling is similar to impulse sampling, except that the impulse train is replaced by a pulse train of
period T, i.e. the input signal x(t) is multiplied by the pulse train.
Theoretically, the sampled signal can be obtained by convolution of the rectangular pulse p(t) with the ideally
sampled signal yδ(t), as shown in the diagram.
Nyquist Rate
It is the minimum sampling rate at which a signal can be converted into samples and recovered back
without distortion.
Nyquist rate fN = 2fm Hz.
Nyquist interval = 1/fN = 1/(2fm) seconds.
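The Nyquist rate and the effect of violating it can be demonstrated numerically; the helper below (all frequencies are arbitrary choices) samples a cosine and reports the dominant frequency seen in the sampled spectrum:

```python
import numpy as np

def apparent_freq(f_sig, fs, dur=2.0):
    """Dominant frequency observed after sampling a f_sig-Hz cosine at fs Hz."""
    n = np.arange(int(dur * fs))
    x = np.cos(2 * np.pi * f_sig * n / fs)
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]

print(apparent_freq(30, 100))   # 30.0 : fs > 2 fm, no aliasing
print(apparent_freq(70, 100))   # 30.0 : undersampled, 70 Hz aliases to fs - 70
```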
Reconstruction of signal from its samples:
Assume that the Nyquist requirement ωs > 2ωm is satisfied. We consider two reconstruction schemes:
• ideal reconstruction (with ideal bandlimited interpolation),
• reconstruction with zero-order hold.
Ideal Reconstruction: Shannon interpolation formula

xr(t) = Σ_{n=−∞}^{∞} x[n] · sin(π(t − nT)/T) / (π(t − nT)/T),

which is the Shannon interpolation (reconstruction) formula. The actual reconstruction system mixes continuous
and discrete time.
The reconstructed signal xr(t) is a train of sinc pulses scaled by the samples x[n]. This system is difficult to
implement because each sinc pulse extends over a long (theoretically infinite) time interval.
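A minimal numeric sketch of this interpolation (the signal frequency, sample rate and truncation window below are arbitrary choices; a practical implementation must truncate the ideally infinite sinc pulses, which introduces a small error):

```python
import numpy as np

fm, fs = 3.0, 10.0                    # arbitrary: 3 Hz cosine sampled at 10 Hz > 2 fm
T = 1 / fs
n = np.arange(-200, 201)              # a long (but finite) window of sample indices
samples = np.cos(2 * np.pi * fm * n * T)

def reconstruct(t):
    """Shannon interpolation: sum of sinc pulses weighted by the samples."""
    return np.sum(samples * np.sinc((t - n * T) / T))

for t0 in (0.013, 0.21, 0.37):
    print(reconstruct(t0), np.cos(2 * np.pi * fm * t0))  # the pairs agree closely
```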
Aliasing Effect
The overlapped region in the case of undersampling represents the aliasing effect, which can be avoided by
choosing fs > 2fm.
For band-pass signals, the band-pass sampling theorem states that the input signal x(t) can be converted into its
samples and recovered back without distortion even when the sampling frequency fs is less than 2f2 (twice the
highest frequency), provided the sampling rate is at least twice the bandwidth, fs ≥ 2B with B = f2 − f1, and
suitable band-positioning conditions hold.
Correlation
Cross Correlation and Auto Correlation of Functions:
Correlation is a measure of similarity between two signals. The general (cross-)correlation of two signals is

R12(τ) = ∫_{−∞}^{∞} x1(t) x2*(t − τ) dt,

and the auto correlation of a signal with itself is

R(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt.
The auto correlation function of an energy signal at the origin, i.e. at τ = 0, is equal to the total energy of
that signal:

R(0) = E = ∫_{−∞}^{∞} |x(t)|² dt.

The auto correlation function and the energy spectral density are Fourier transform pairs, i.e.

F.T[R(τ)] = SXX(ω),
SXX(ω) = ∫ R(τ) e^{−jωτ} dτ, where −∞ < τ < ∞,
R(τ) = x(τ) * x(−τ).
R12(τ) = R21*(−τ).
If R12(0) = 0, i.e. ∫ x1(t) x2*(t) dt = 0 over the interval (−∞, ∞), then the two signals are said to be
orthogonal.
The cross correlation function corresponds to the multiplication of the spectrum of one signal by the complex
conjugate of the spectrum of the other signal, i.e.

R12(τ) ←→ X1(ω) X2*(ω).

This is also called the correlation theorem.
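The correlation theorem can be checked numerically: zero-padding the FFTs turns circular correlation into linear correlation, so the inverse transform of X1 · X2* matches a directly computed cross-correlation. The data below is arbitrary random test data:

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)

N = 2 * 64                                   # zero-pad to avoid circular wrap-around
X1, X2 = np.fft.fft(x1, N), np.fft.fft(x2, N)
r_fft = np.fft.ifft(X1 * np.conj(X2)).real   # correlation via the spectra

# Direct cross-correlation r[k] = sum_n x1[n + k] x2[n], for lags k = 0..63
r_direct = np.array([np.sum(x1[k:] * x2[:64 - k]) for k in range(64)])
print(np.allclose(r_fft[:64], r_direct))     # → True
```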
Energy Density Spectrum:
The energy spectral density describes how the energy of a signal or a time series is distributed over frequency.
Here, the term energy is used in the generalized sense of signal processing. The energy density spectrum can be
calculated using the formula

S(ω) = |X(ω)|².

Parseval's Theorem:

E = ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω.
The spectrum of a real valued process (or even a complex process using the above definition) is real and
an even function of frequency:
If the process is continuous and purely indeterministic, the autocovariance function can be reconstructed
by using the Inverse Fourier transform
The PSD can be used to compute the variance (net power) of a process by integrating over frequency: