
Chapter 3

Laplace Transform

The ordinary operations of algebra suffice
to resolve problems in the theory of curves.
Joseph-Louis Lagrange

Such is the advantage of a well constructed
language that its simplified notation often
becomes the source of profound theories.
Pierre-Simon Laplace

This chapter introduces the bilateral (or two-sided) Laplace transform as a close relative of, and in fact an extension of, the Fourier transform. Various properties are derived and displayed via useful commutative diagrams. For one-sided functions and problems, which are defined only on the positive time axis, the unilateral (or one-sided) transform is obtained as a special case and its properties are derived. The power of the Laplace transform in solving initial value problems for LTI-ODEs and systems is illustrated. This is the last chapter introducing basic theory in signal representations and operations on signals that will be used in the chapters to follow on system dynamics and control.


3.1 Fourier Transform and Phasors


What was so exciting about the Fourier Transform?
The excitement stems from the fact that it is a:
1) Useful tool in understanding what makes up a signal: Instead of considering a signal as a
set of values, one for each value of t, consider the signal as being made up of sinusoids.

A sinusoid is a wiggly signal with fixed radial frequency, ω, having the explicit form A cos(ωt + φ). A sinusoid is determined by a complex phasor, A = A e^{jφ} ∈ C, and a frequency ω, so that the signal can be retrieved, given this information, as σ_t x = ℜ(A e^{jωt}). A signal made up of a bunch of sinusoids is then characterized by a set of phasors, parameterized by frequency. Finally, a Fourier transformable signal is characterized by a continuum of phasors¹, X(jω), one for each frequency ω. One has the decomposition

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω,

which also says that x is a superposition of a continuum of complex exponentials. This formula is recognized as the inverse Fourier transform. The phasors at a particular frequency are obtained as the (forward) Fourier transform

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.

In principle, one could decompose a signal into many different sets of waves, e.g., using piano waves as basic signals. So what is it that singles out the (complex) wiggly functions of the form e^{jωt} as a particularly useful set? Enter the LTI systems.

2) Useful tool in computing solutions.


One of the basic properties of LTI systems is: “Sinusoid in, sinusoid out”, or more quantitatively, if U e^{jωt} goes in, then Y e^{jωt} comes out, where

Y = H(jω) U.
¹ The factor j in the argument is not necessary, but will be appreciated later on.

H(jω) is, of course, the Fourier transform of the impulse response h(t). Again using linearity, the output corresponding to the input

u(t) = (1/2π) ∫_{−∞}^{∞} U(jω) e^{jωt} dω

is then

y(t) = (1/2π) ∫_{−∞}^{∞} H(jω) U(jω) e^{jωt} dω,

or, in the Fourier domain,

Y(jω) = H(jω) U(jω).

Therefore, differential equations and system equations can very easily be solved by Fourier transforming. This indeed reduces the LTI-ODE to an algebraic equation, and the solution y(t) in the time domain is then obtained by inverse Fourier transforming.
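As a quick numerical illustration (my own sketch, not from the text), the following Python snippet checks the “sinusoid in, sinusoid out” property for an assumed LTI system with impulse response h(t) = e^{−t} u_{−1}(t), whose frequency response is H(jω) = 1/(1 + jω); the test frequency ω = 3 and the time grid are arbitrary choices.

```python
# Hedged sketch: convolve a sinusoid with h(t) = e^{-t} u(t) and compare the
# post-transient output with the phasor prediction Re{H(jw) e^{jwt}}.
import numpy as np

dt, w = 0.002, 3.0                       # step size and test frequency (assumptions)
t = np.arange(0.0, 20.0, dt)
h = np.exp(-t)                           # impulse response e^{-t} u(t)
u = np.cos(w * t)                        # input sinusoid
y = dt * np.convolve(h, u)[:t.size]      # Riemann-sum approximation of (h * u)(t)
H = 1.0 / (1.0 + 1j * w)                 # frequency response at w
y_phasor = np.real(H * np.exp(1j * w * t))
tail = t > 10.0                          # ignore the decaying transient
print(np.max(np.abs(y[tail] - y_phasor[tail])))   # small, of order dt
```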

3.2 Dirichlet Conditions


The above illustrated the value of the Fourier transform for understanding (what is a signal
composed of), and as a practical tool for solving problems. This seems excellent, but there
is a major drawback: Not every signal has a Fourier transform.

Dirichlet showed that the transform of x is guaranteed to exist (i.e., the integral converges) if three conditions are satisfied:
1. ∫_{−∞}^{∞} |x(t)| dt < ∞.

2. x has only a finite number of extrema in any finite interval.

3. x has only a finite number of discontinuities in any finite interval.

These conditions are not necessary.


We shall indicate the class of functions that satisfy the Dirichlet conditions as “Dir”, and use the notation f ∈ Dir to indicate such a function f.

3.3 From Fourier to Laplace


It would be nice, in view of the simplicity in solving ODEs among other related problems, if all functions could be Fourier transformed. This is not the case for many signals of practical interest, because the first Dirichlet condition may not hold. The trick is to tame a function, e.g., by multiplying the function by an appropriate exponential, e^{−σt}, to make sure that the tails for t → ±∞ approach zero sufficiently fast in order for the integral of the tamed function to converge. In this sense, the Laplace transform of g, denoted L{g}, is defined as the Fourier transform of the tamed function g(t) e^{−σt} =: g_σ(t),

G(s) = L{g(t)} = F{e^{−σt} g(t)} = ∫_{−∞}^{∞} e^{−jωt} e^{−σt} g(t) dt = ∫_{−∞}^{∞} e^{−st} g(t) dt,

expressed in terms of s = σ + jω. The real parameter σ is chosen in such a way that g_σ(t) = e^{−σt} g(t) is Fourier transformable, with G_σ(jω) = F{g_σ}(jω). This of course may limit the admissible values of σ, e.g., σ1 < σ < σ2. We say that the abscissa of convergence is the interval (σ1, σ2). If the integral converges absolutely, then we say that the corresponding vertical strip in the complex plane with σ1 < ℜs < σ2 is the Region of Convergence (RoC) of the Laplace transform. Note that thus L{g} = G(s), where G(σ + jω) := G_σ(jω) = F{g_σ}(jω). Consequently, the properties of the bilateral Laplace transform are directly related to the properties of the Fourier transform.

3.4 Region of Convergence


It is very important to note that this (bilateral) Laplace transform G(s) does not uniquely specify the time function. The RoC plays a major role and needs to be specified as well.

Example: The functions e^t u_{−1}(t) and −e^t u_{−1}(−t) have the same analytic expression, 1/(s − 1), for their Laplace transform. However, the first converges for ℜs > 1, the second for ℜs < 1. Thus, in order to have a one-to-one correspondence between time functions and Laplace transforms, it is necessary to give not only the analytical expression of F(s), but also its Region of Convergence (RoC). Note that the RoC of x is defined by its real part (abscissa of convergence) and is therefore usually a vertical strip. It can never contain a singularity
(pole) of the Laplace transform X(s). A half-plane, or even all of C, is to be considered a special case of a strip.
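A small SymPy check (a sketch, not part of the text): SymPy's laplace_transform is the unilateral transform, so it can only reach the causal member e^t u_{−1}(t) of the pair in the example above, but it does report the abscissa of convergence along with the transform.

```python
# Unilateral Laplace transform of e^t u_{-1}(t) with its abscissa of convergence;
# the anticausal partner -e^t u_{-1}(-t) shares the expression 1/(s - 1) but has
# the complementary region Re s < 1.
import sympy as sp

t, s = sp.symbols('t s')
F, abscissa, cond = sp.laplace_transform(sp.exp(t), t, s)
print(F, abscissa)     # 1/(s - 1), 1   i.e. the RoC is Re s > 1
```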

3.5 Elementary Transforms and Properties of BLT


The singular (generalized) function δ has Laplace transform

∫_{−∞}^{∞} δ(t) e^{−st} dt = 1,

with the entire complex plane as its region of convergence. We can also take the Laplace
transform of higher singularities, using integration by parts, or the derivative property (see
further). The Laplace transform of the n-th derivative of δ is s^n, and its RoC is the entire
complex plane.
Likewise, the Laplace transform of the shifted impulse T_{t0} δ, peaking at t = t0, gives

L(T_{t0} δ) = e^{−s t0}.

A function consisting of the product of polynomials and exponentials, such as the one obtained above, is called a quasi-polynomial. Quasi-polynomials play an important role in the theory of differential delay systems (see further).
The unit step function, u_{−1}, has transform

L{u_{−1}} = ∫_{−∞}^{∞} u_{−1}(t) e^{−st} dt = ∫_{0}^{∞} e^{−st} dt = 1/s.

Its RoC is to the right of the imaginary axis: ℜs > 0. It turns out that, knowing these Laplace transforms, most signals a typical engineer will encounter can be transformed, provided one knows what an elementary manipulation on a signal does to its Laplace transform.
The student may already have encountered the unilateral Laplace transform in a course on
electrical circuits, and seen tables of correspondence between x(t) and X(s). We must then
caution that the properties to follow are derived for the bilateral Laplace transform. There
are some differences with the properties of the unilateral transform.

3.5.1 Properties of Bilateral Laplace Transform


For the student who knows how to derive the properties of the usual one-sided Laplace trans-
form (see further), the following correspondences with the elementary operators described
in Chapter 1 are easily derived by chasing the signals around the diagrams. Note that
the determination of the RoC must not be overlooked, as the RoC is an integral piece of
information for the bilateral Laplace transform. We also need one more simplification in
notation: If S is a subset in C, then αS + β denotes the subset of points {αc + β | c ∈ S}.

i) Shift Tα

   f  −−Tα−→  Tα f
   L ↓          ↓ L
   F(s)  −−M_{e^{−sα}}−→  e^{−sα} F(s)
   RoC          RoC

The RoC is left unchanged.

ii) Multiplication by the independent variable Q

   f  −−Q−→  Q f
   L ↓         ↓ L
   F(s)  −−(−D)−→  −F′(s)
   RoC         RoC

iii) Parity or Reflection R

   f  −−R−→  R f
   L ↓         ↓ L
   F(s)  −−R−→  F(−s)
   RoC         −RoC

iv) Scaling Sα (α > 0)

   f  −−Sα−→  Sα f
   L ↓          ↓ L
   F(s)  −−(1/α)S_{1/α}−→  (1/α) F(s/α)
   RoC          α RoC

v) Differentiation D

   f  −−D−→  g = f′
   L ↓          ↓ L
   F(s)  −−Q−→  s F(s)
   RoC_f         RoC_g ⊇ RoC_f
vi) Integration, where g(t) = ∫_{−∞}^{t} f(τ) dτ

   f  −−∫−→  g
   L ↓         ↓ L
   F(s)  −−Q^{−1}−→  F(s)/s
   RoC_f        {s | ℜs > 0} ∩ RoC_f

vii) Convolution C_g

   f  −−∗g−→  f ∗ g
   L ↓           ↓ L
   F(s)  −−M_G−→  F(s) G(s)
   RoC_f         RoC_f ∩ RoC_g

viii) Multiplication by an exponential M_{e^{at}}

We illustrate this one: let g(t) = f(t) e^{at} for arbitrary complex a:

L{e^{at} f(t)}(s) = ∫_{−∞}^{∞} f(t) e^{at} e^{−st} dt
                 = ∫_{−∞}^{∞} f(t) e^{−(s−a)t} dt
                 = L{f(t)}(s − a).

Extending the shift operator to functions of a complex variable in the obvious way, we may denote this correspondence by

L{e^{at} f(t)} = T_a L{f(t)},

or, somewhat more abstractly, as the commutation rule

L M_{e^{at}} = T_a L.

Note that the integral converges if s − a ∈ RoC_f. Thus RoC_g = RoC_f + a.

By concatenating these elementary operations on simple exponentials (including the complex exponentials) one can obtain the Laplace transform of not-so-elementary functions. For most practical purposes, it actually suffices to know just one transform pair: for arbitrary a ∈ C (recall: H = u_{−1})

e^{at} H(t) ↔ 1/(s − a),

with RoC ℜs > ℜa. So powerful are these relations! In fact, even this last one follows from the transform of the unit step function and property (viii).

Example: To find the Laplace transform of a pulse with parabolic shape of duration T, given by f(t) = t(T − t)[u_{−1}(t) − u_{−1}(t − T)], one can proceed by direct integration. However, note that two differentiations of f give

f′′(t) = −2[u_{−1}(t) − u_{−1}(t − T)] + T δ(t) + T δ(t − T).



The delta functions arise since f′(t) contains a jump of size T at t = 0 and at t = T. The Laplace transform of f′′(t) is readily found as

−2 ( 1/s − e^{−sT}/s ) + T + T e^{−sT},

using linearity, and the differentiation and shift properties of the Laplace transform (and the knowledge L u_{−1} = 1/s or, even simpler, L u_0 = 1). Since going from f′′ to f requires two integrations,

F(s) = (1/s²) [ −2 ( 1/s − e^{−sT}/s ) + T + T e^{−sT} ].

Application: Explicit system operator from an implicit system representation.


Let the input, u, and output, y, of a system be related by the LTI-ODE, valid for all time: a(D)y = b(D)u. Then a combination of the commutative diagrams leads to the diagram

   y  −−a(D)−→  a(D)y = b(D)u  ←−b(D)−−  u
   L ↓                 ↓ L                ↓ L
   Y(s)  −−a(s)−→  a(s)Y(s) = b(s)U(s)  ←−b(s)−−  U(s)

It follows that in the Laplace transform domain, we have a constructive form for the system operator:

Y(s) = (b(s)/a(s)) U(s).
What can be said about the region of convergence of Y ?

3.6 One Sided Transforms and Singularities


In many situations one will only be concerned with one-sided functions, e.g., once an input is
applied, we may be interested in the resulting output of a system. In most cases of interest,
one has therefore to deal with signals satisfying x(t) = 0 for t < 0, letting t = 0 be the time
the switch is thrown. In other cases, the signal may not be zero for negative time, but its

actual value at such times may not be of any concern. In all these cases, nothing is lost if we just consider Π+ x. It follows that the Laplace transform of a one-sided function may be defined as

L x = ∫_{0}^{∞} x(t) e^{−st} dt.

The RoC for a function x satisfying Π− x = 0 is then always a right half plane, and thus lies to the right of the rightmost pole of X = L(x).

Consider now the transform of an impulse at 0. The integral ∫_0^∞ δ(t) e^{−st} dt is ambiguous: it could have any value from 0 to 1, as is easily shown by taking different sequences of asymmetric block functions to define the Dirac delta in the limit. However, if we were to define the unilateral Laplace transform as the limit (for ε > 0)

lim_{ε→0+} ∫_{−ε}^{∞} δ(t) e^{−st} dt,

then no matter which sequence of blocks we take to characterize the delta function, we’ll get

lim_{ε→0+} ∫_{−ε}^{∞} δ(t) e^{−st} dt = 1,

since eventually we integrate over the entire pulse: the delta is always included in the integration interval. Such an integral will be denoted in a simpler way, using the 0− notation, by

∫_{0−}^{∞} δ(t) e^{−st} dt.

This motivates the introduction of the L− (read: Laplace-minus) transform,

L−{x(t)} = ∫_{0−}^{∞} x(t) e^{−st} dt.

We will still write the lower bound as 0 if no ambiguity is possible.

We can also take the Laplace L−-transform of higher singularities, using integration by parts, or the derivative property (see further). The Laplace transform of the n-th derivative of δ is s^n, and the RoC is the entire complex plane. (Explain this.)

3.6.1 Time-projection operators revisited


Let E be the class of signals modeled by generalized functions, i.e., piecewise smooth functions
having singularities in at most a finite number of points in each finite interval. For any x ∈ E,
let π0 x denote the singular or impulsive part at t = 0. If x is regular at zero, then π0 x = 0.
We introduced in Chapter 1 the time projection operators Π− and Π+ for regular func-
tions. These were defined using the evaluation operator σt . When working in the class of
generalized functions we need to extend this definition, as evaluation of singular functions such as δ at zero is not well defined. (We could attribute the value ∞, but that is really not a number.) Besides, how would δ and δ̇ then be distinguished? For this we introduce the operator π0, which, operating on a generalized function x, keeps only the singular part of x at t = 0. Now define Π+ and Π− respectively as

Π+ x = M_H x        (3.1)
Π− x = M_{RH} x     (3.2)

Hence Π− x and Π+ x also contain the singularities appearing respectively for t < 0 and
t > 0. The singularities at t = 0 are ‘filtered out’ by π0 , the value of x at a single point being
immaterial, as we are defining signals as equivalence classes of functions. Note that we now
have a decomposition for generalized functions

x = Π− x + π0 x + Π+ x

and since this holds for all x ∈ E, we have the identity

I = Π− + π0 + Π+ . (3.3)

Example 1: The generalized function x(t) = t + δ(t) + u_{−1}(t) + δ(t − 1) has the parts

(Π− x)(t) = t u_{−1}(−t)
(Π+ x)(t) = (t + 1) u_{−1}(t) + δ(t − 1)
(π0 x)(t) = δ(t)

Problems

1. Show that the composite operator, πa = Ta π0 T−a , leaves the singular part of x at
t = a, but ‘filters out’ everything else.

2. Given an arbitrary x ∈ E, denote by T_x the set of points where x is singular. Show that for an arbitrary x ∈ E, the singular part of x, denoted x_sing, is expressed as

   x_sing = Σ_{t_i ∈ T_x} T_{t_i} π0 T_{−t_i} x.

3. Determine the singular part of x in Example 1.


For the signal x in Example 1, Sx = δ + T1 δ.

4. Show that the operator S : E → E defined by

   S(x) = ∫_{−∞}^{∞} T_τ π0 T_{−τ} x dτ

   is a projection operator (i.e., it satisfies S² = S), and is such that x − Sx is regular.

5. Define a useful equivalence relation on E, so that two generalized functions that are equivalent “behave the same way for all practical purposes”.
   Two signals, x, y ∈ E, are equivalent if their singularities are identical, i.e., Sx = Sy, and their regular parts, (x − Sx) and (y − Sy), are equivalent in the set of piecewise smooth functions P, i.e., x(t) = y(t) for almost all t. So, x(t) = t + δ(t − 1) is not equivalent to y(t) = t, but x is equivalent to z defined as

   z(t) = 2 if t = −1;  1 if t = 0;  t + δ(t − 1) otherwise.

Note that the projectors Π± and π0 obey the following commutation rules with D:

π0 D = Dπ0 + δ · ∆0 (3.4)
Π+ D = DΠ+ − δ · σ0+ (3.5)
Π− D = DΠ− + δ · σ0− , (3.6)

where we introduced the jump-functional ∆0 = σ0+ − σ0− . The unilateral Laplace transform
L− of x, denoted by X− = L− x, was introduced earlier
X−(s) = ∫_{0−}^{∞} x(t) e^{−st} dt.

It can now be expressed as

X−(s) = ∫_{−∞}^{∞} [(π0 + Π+)x](t) e^{−st} dt,

and is thus the bilateral Laplace transform of the signal (π0 + Π+ )x. Hence,

L− = L(π0 + Π+ ). (3.7)

This simple, but important identity enables us to deduce all properties for the unilateral
transform from the bilateral one. For instance:

i) Delay (α > 0):


We have (suppressing the independent variable s on the Laplace transform)

L− Tα x = L(π0 + Π+) Tα x
        = L π0 Tα x + L Π+ Tα x
        = L Tα π0 x + L Tα Π+ x
        = L Tα (π0 + Π+) x
        = e^{−sα} L(π0 + Π+) x
        = e^{−sα} L− x

The RoC remains unchanged, since the extra factor e^{−sα} is entire and nonzero. The appropriate commutative diagram is
   f  −−Tα−→  Tα f
   L− ↓          ↓ L−
   F−(s)  −→  e^{−sα} F−(s)
   ℜs > σ        ℜs > σ

and is similar to the one obtained for the bilateral Laplace transform.

ii) Advance (α > 0): We assume that Π− x = 0, i.e., the original x is one sided.
Note that for α > 0, the commutation Π+ T−α = T−α Π+ no longer holds. Check it out!
Suppressing again the independent variable s on the Laplace transform, we proceed as
follows:

L− T_{−α} x = L(π0 + Π+) T_{−α} x
            = L(I − Π−) T_{−α} x
            = L T_{−α} x − L Π− T_{−α} x
            = e^{sα} L x − L Π− T_{−α} x
            = e^{sα} L(π0 + Π+) x + e^{sα} L Π− x − L Π− T_{−α} x
            = e^{sα} L− x + e^{sα} ∫_{−∞}^{0−} x(t) e^{−st} dt − ∫_{−∞}^{0−} x(t + α) e^{−st} dt
            = e^{sα} L− x − e^{sα} ∫_{0−}^{α−} x(t) e^{−st} dt

The RoC remains unchanged. Note that the presence of the finite integral in the
formula destroys the commutative diagram we had for the bilateral Laplace transform.

iii) Differentiation:
Again suppressing the independent variable s on the Laplace transform, we have

L− Dx = L(π0 + Π+ )Dx
= Lπ0 Dx + LΠ+ Dx
= L[Dπ0 + δ · (σ0+ − σ0− )]x + L[DΠ+ − δ · σ0+ ]x
= LD(π0 + Π+ )x − L[δ · σ0− ]x
= sL(π0 + Π+ )x − σ0− (x)L[δ]
= QL− x − σ0− (x)
= QL− x − x(0− ).

Note that the Q operates in the Laplace domain, where the independent variable is s.
The RoC remains at least as large. The appropriate commutation rule is thus

L− D = Q L− − σ0− (3.8)

or,
L− (ẋ) = s L− (x) − x(0−). (3.9)
The presence of the x(0−) term breaks down the commutative diagram. This is traced back
to the lack of commutativity because

Π+ D = DΠ+ − δ · σ0+ .

The properties of modulation and scaling (by a positive constant) remain as for the
bilateral transform. Check it out!
Note that the Laplace transform of a singularity at t0 > 0 is a polynomial in s multiplied
by the exponential e^{−s t0}.

3.7 Pets Transform to Rats


The unilateral Laplace transforms of all PET functions (restricted to positive time) are
rational functions in s, whence the mnemonic ‘RAT-S’. This is readily seen from the fact that
for every PET function, x, there exists a differential polynomial a(D) such that a(D)x = 0. Consequently, L− a(D)x = 0. If the highest order term in a(D) is D^n, then, by the previous,

[L− D^n x](s) = s^n [L− x](s) − α_{n−1}(s),

where α_{n−1}(s) is a polynomial in s of degree at most n − 1. Its coefficients depend on the values of x(0−), ẋ(0−), . . . , x^{(n−1)}(0−). Thus also,

[L− a(D)x](s) = a(s) [L− x](s) − b_{n−1}(s)

for some polynomial, b_{n−1}, of degree at most n − 1. Hence, the unilateral transform of x is

X−(s) = [L− x](s) = b_{n−1}(s)/a(s).

In fact, these transforms will be strictly proper (degree of numerator strictly less than degree of denominator) rational functions. As seen in the previous section, any singularity at t = 0, say c = Σ_{i=0}^{N} c_i u_i, has the transform L c = L− c = Σ_{i=0}^{N} c_i s^i, which is polynomial.

Consequently, if X(s) is a nonproper rational function, b(s)/a(s), with deg b(s) ≥ deg a(s), it can be decomposed, using the polynomial division algorithm,

b(s) = P(s) a(s) + r(s),   with deg r(s) < deg a(s),

into a polynomial part, P(s), and a strictly proper fractional part, X_sp(s) = r(s)/a(s):

X(s) = r(s)/a(s) + P(s),

corresponding in the time domain to

P(s) = L− π0 x        (3.10)
X_sp(s) = L− Π+ x.    (3.11)
Example 2: Consider the unilateral Laplace transform X(s) = (s² + 1)/(s + 1), with RoC ℜs > −1. The division algorithm gives

s² + 1 = s² + s − s + 1 = s(s + 1) − s + 1 = s(s + 1) − (s + 1) + 2 = (s − 1)(s + 1) + 2,

and thus

X(s) = s − 1 + 2/(s + 1).

The inverse Laplace transform gives the corresponding generalized function in the time domain:

x(t) = δ̇(t) − δ(t) + 2 e^{−t} u_{−1}(t).
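A SymPy sketch of Example 2 (my own check, not the author's): polynomial division separates the transform into its polynomial part and its strictly proper part, and the latter inverts to the regular part of x.

```python
# Division of the nonproper transform (s^2 + 1)/(s + 1) and inversion of the
# strictly proper remainder; the quotient s - 1 accounts for the singular part.
import sympy as sp

t, s = sp.symbols('t s')
num, den = s**2 + 1, s + 1
P, r = sp.div(num, den, s)       # quotient P(s) = s - 1, remainder r(s) = 2
print(P, r / den)                # polynomial part and strictly proper part
print(sp.inverse_laplace_transform(r / den, s, t))   # 2*exp(-t)*Heaviside(t)
```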

3.8 Inverse Laplace Transform


To retrieve the time function corresponding to (G(s), RoC), use again the Fourier transform relationship:

e^{−σt} g(t) = F^{−1}{ G(s)|_{s=σ+jω} } = (1/2π) ∫_{−∞}^{∞} e^{jωt} G(σ + jω) dω.

Hence

g(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} e^{σt+jωt} G(σ + jω) d(σ + jω) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} e^{st} G(s) ds.

This is a line integral in the complex plane, evaluated along a vertical line in the region of
convergence.

The general formula for the inverse transform of the bilateral Laplace transform remains
valid in the case of the unilateral transform. Thus, if F = L− f , then for all t
f(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} e^{st} F(s) ds,        (3.12)
with integration along a vertical line in the region of convergence. It does turn out that for
t < 0, f (t) = 0 follows. Note that if the unilateral Laplace transform converges for some
ℜs = σ, then it converges for all s such that ℜs ≥ σ. Consequently, the RoC for the unilateral
Laplace transform is always a ‘right half plane’ ℜs > σ0 for some minimal σ0 , called the
abscissa of convergence. We shall discuss later how line integrals in the complex plane
may be computed.

Fortunately, for rational functions X(s), there is a simpler way to determine the corresponding time function. Indeed, consider first the case where X(s) = X_p(s) is strictly proper, having only simple poles (i.e., the zeros of the denominator polynomial all have multiplicity one). In this case, partial fraction expansion gives

X_p(s) = Σ_i A_i/(s − p_i),

corresponding to

x(t) = Π+ [ Σ_i A_i e^{p_i t} ] = [ Σ_i A_i e^{p_i t} ] u_{−1}(t).

Incidentally, given X_p(s), if the pole p_i is known, the corresponding coefficient A_i is given by the residue (I’ll explain later what this means)

A_i := Res[X_p(s), p_i] = [(s − p_i) X_p(s)]_{s=p_i}.

You already know that any polynomial in s corresponds to singular stuff (the sparks!)
at t = 0. So we only need to worry about the higher multiplicity terms. But this follows
directly from
L−[u_{−k}] = 1/s^k,

and the (exponential) modulation property

L−[M_{e^{pt}} x] = T_p L− x.

The T_p on the right hand side works on functions in the Laplace domain (s-domain).

Example:

L−^{−1}[ s²/(s + 1)² ] = L−^{−1}[ ((s² + 2s + 1) − 2s − 1)/(s + 1)² ]
                      = L−^{−1}[ 1 − (2(s + 1) − 1)/(s + 1)² ]
                      = L−^{−1}[ 1 − 2/(s + 1) + 1/(s + 1)² ]
                      = δ(t) + e^{−t} L−^{−1}[ −2/s + 1/s² ]
                      = δ(t) + e^{−t} [ −2 u_{−1}(t) + u_{−2}(t) ]
                      = δ(t) + (t − 2) e^{−t} u_{−1}(t).
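The same steps can be reproduced with SymPy's partial fraction routine; in the sketch below (not from the text) the constant 1, which corresponds to δ(t), is separated off and the strictly proper remainder is inverted.

```python
# Partial fractions of s^2/(s + 1)^2 and inversion of its strictly proper part.
import sympy as sp

t, s = sp.symbols('t s')
X = s**2 / (s + 1)**2
Xpf = sp.apart(X, s)                       # 1 - 2/(s + 1) + (s + 1)**(-2)
print(Xpf)
x_reg = sp.inverse_laplace_transform(Xpf - 1, s, t)
print(sp.simplify(x_reg))                  # (t - 2)*exp(-t)*Heaviside(t)
```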

3.9 Operations on Signals and Their L−-Transforms


The correspondences between one-sided functions and their unilateral Laplace transforms
can be found in a direct way, without the use of the bilateral transform: First use the defini-
tion of L− , then substitute the definition of the transformed function (in terms of the original
one). Finally try to mold the obtained form using mathematical tricks such as substitution,
elimination, integration by parts, change of variables, interchanging the order of integration, etc.,
just as we did in deriving the properties of the bilateral transform. Note that the determi-
nation of the RoC must not be overlooked.
The use of commutative diagrams is very helpful here, just as it was for the bilateral trans-
form.

i) Delay (α > 0)

   f  −−Tα−→  Tα f
   L− ↓          ↓ L−
   F(s)  −→  e^{−sα} F(s)
   ℜs > σ        ℜs > σ

ii) Advance (α > 0)

   f  −−T_{−α}−→  Π+ T_{−α} f
   L− ↓               ↓ L−
   F(s)  −→  e^{sα} [ F(s) − ∫_{0−}^{α−} f(t) e^{−st} dt ]
   ℜs > σ        ℜs > σ

iii) Multiplication by t

   f  −−Q−→  Q f
   L− ↓         ↓ L−
   F(s)  −−(−d/ds)−→  −F′(s)
   ℜs > σ        ℜs > σ

iv) Scaling (α > 0)

   f  −−Sα−→  Sα f
   L− ↓          ↓ L−
   F(s)  −→  (1/α) F(s/α)
   ℜs > σ        ℜs > ασ

v) Differentiation

   f  −−D−→  f′
   L− ↓         ↓ L−
   F(s)  −→  s F(s) − f(0−)
   ℜs > σ        ⊇ {ℜs > σ}

The example e^t u_{−1}(t) illustrates how a wrong solution can easily slip in if you are not careful. In fact, two errors can cancel each other, and thus one may find consistent but wrong solutions.
Here, the derivative of e^t u_{−1}(t) is e^t u_{−1}(t) + δ(t), which has Laplace transform 1/(s − 1) + 1. On the other hand, the Laplace transform of the derivative, using the derivative property, is s · 1/(s − 1) − 0, since 0, and not 1, is the correct value of the given function at t = 0−. These results are consistent and correct. Note again that it requires defining the Laplace transform carefully as the integral from 0− to infinity, in order to capture the delta function.
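A quick SymPy consistency check of this example (a sketch, under the assumption L{δ} = 1, i.e. the 0− convention): both routes give s/(s − 1).

```python
# Derivative-property check for x(t) = e^t u_{-1}(t): transform of the derivative
# e^t u_{-1}(t) + delta(t) versus s F(s) - x(0-).
import sympy as sp

t, s = sp.symbols('t s')
F = sp.laplace_transform(sp.exp(t), t, s, noconds=True)   # 1/(s - 1)
lhs = F + 1          # 1/(s - 1) + L{delta}, with L{delta} = 1 (0- convention assumed)
rhs = s * F - 0      # derivative property with x(0-) = 0
print(sp.simplify(lhs - rhs))    # 0: both equal s/(s - 1)
```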

vi) Integration

   f  −−∫−→  ∫_0^t f(τ) dτ
   L− ↓         ↓ L−
   F(s)  −→  F(s)/s
   {s | ℜs > σ}        {s | ℜs > σ} ∩ {s | ℜs > 0}

vii) Convolution

   f  −−∗g−→  f ∗ g
   L− ↓          ↓ L−
   F(s)  −→  F(s) G(s)
   {s | ℜs > σ}        {s | ℜs > σ} ∩ RoC(g)

By concatenating these elementary operations on simple exponentials (including the complex exponentials) one can obtain the Laplace transform of not-so-elementary functions. For most practical purposes, it again suffices to remember just one transform pair: for arbitrary a ∈ C

e^{at} u_{−1}(t) ↔ 1/(s − a),

with RoC ℜs > ℜa. So powerful are these relations!

Application 1: Initial Value Problems - Transients


If, for t > 0, a(D)y = b(D)u, with Π− u = 0 and Π− y = 0 (the “at rest” condition), then
   y  −−a(D)−→  a(D)y = b(D)u  ←−b(D)−−  u
   L− ↓                ↓ L−               ↓ L−
   Y(s)  −−a(s)−→  a(s)Y(s) = b(s)U(s)  ←−b(s)−−  U(s)
What can be said about the region of convergence of Y ?
Note that this gives the same answer as we would have obtained by the bilateral transform
since we assumed the “at rest” condition.
The following example illustrates a more interesting case. Let a(D) = D + 3 and b(D) = 2, and assume that y(0−) = 5. Apply a step u = u_{−1} to the system. We now find, by Laplace transforming,

L−{Dy} + L−{3y} = L−{2u_{−1}}
s Y(s) − y(0−) + 3 Y(s) = 2/s
Y(s) = (5s + 2)/(s(s + 3)) = (2/3)/s + (13/3)/(s + 3)
y(t) = [ 2/3 + (13/3) e^{−3t} ] u_{−1}(t).

Note that 2/3 is the final value (see further) and (13/3) e^{−3t}, for t > 0, is the transient response.
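A SymPy sketch of this initial value problem (mirroring the steps in the text; the dsolve cross-check is my own addition): the partial fractions and the inverse transform reproduce y(t).

```python
# (D + 3) y = 2 u_{-1}, y(0-) = 5, solved through the L_- transform.
import sympy as sp

t, s = sp.symbols('t s')
Y = (5 * s + 2) / (s * (s + 3))                 # from s Y - 5 + 3 Y = 2/s
print(sp.apart(Y, s))                           # 2/(3*s) + 13/(3*(s + 3))
print(sp.inverse_laplace_transform(Y, s, t))    # 2/3 + (13/3) e^{-3t}, times Heaviside(t)

# Independent cross-check with dsolve for t > 0:
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(t).diff(t) + 3 * y(t), 2), y(t), ics={y(0): 5}))
```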

Application 2: Semi-periodic functions

By a semi-periodic function, we mean a one-sided function of the form p(t) = Σ_{k=0}^{∞} f(t − kT), where f(t) is a function which is zero outside the interval [0, T). This function is called the elementary pulse, and p is also referred to as the repetition of f, also denoted Rep f, as it is a (composite) operation on f. Note that this does not mean that f must be nonzero in the entire interval [0, T). If ℜs > 0, then

   f  −−Rep−→  Rep f
   L− ↓           ↓ L−
   F(s)  −→  F(s)/(1 − e^{−Ts})

Note that for any Dirichlet type function with compact support (i.e., there is a finite
interval, outside of which the function is zero) the RoC is the entire complex plane. There-
fore the repetition of such a function, a semi-periodic function, will always have a Laplace
transform that converges in ℜs > 0. In contrast, a truly periodic function has a (bilateral)
Laplace transform that at most converges on the imaginary axis. In this case the Laplace
transform is indeed the Fourier transform if one identifies s with jω. Recall that the Fourier
transform of a periodic function is necessarily a generalized function, i.e., consists of impulses
(in the ω domain)!
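The Rep correspondence is easy to sanity-check numerically at a single test point with ℜs > 0; the sketch below (my own, with an assumed rectangular elementary pulse of width 1 and period T = 2) compares a truncated sum of delayed copies with F(s)/(1 − e^{−Ts}).

```python
# Numerical check of L_-{Rep f} = F(s)/(1 - e^{-Ts}) for f = u_{-1}(t) - u_{-1}(t - 1).
import numpy as np

s, T, N = 0.5 + 0.3j, 2.0, 400                 # test point with Re s > 0 (assumption)
F = (1 - np.exp(-s)) / s                       # transform of the elementary pulse
lhs = sum(np.exp(-s * k * T) for k in range(N)) * F   # truncated sum of delayed copies
rhs = F / (1 - np.exp(-s * T))
print(abs(lhs - rhs))                          # ~0: the geometric tail is negligible
```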

3.10 Initial Value and Final Value Theorem


Both the initial value and the final value for a time function can be obtained if its Laplace
transform is known, without explicit inversion of the transform. This is useful in many
practical situations.
We restrict the discussion to “Pet”- functions, whose transforms are “Rats,” allowing also
singularities at the origin.

3.10.1 Initial Value Theorem


Let f(t) have a rational Laplace transform, F(s), which need not be strictly proper. Then it holds that

f(0+) = lim_{t→0+} f(t) = lim_{s→∞} s [F(s)]_sp,        (3.13)

where [F(s)]_sp is the strictly proper part of F(s). Note that the initial value theorem always gives the value to the right of a possible jump. This is of interest if an impulsive input is applied to the system, whose natural initial conditions are the ones before the impulsive input was applied. The initial value theorem then allows the computation of the instantaneous jump in the signal.
The proof follows from the fact that only a term of the form A/s has anything (namely A) to contribute to the right hand side of (3.13).
For instance, consider F(s) = 1/s. We know that this corresponds in the time domain to f(t) = u_{−1}(t), and thus f(0−) = 0 and f(0+) = 1. Using the initial value theorem, we

indeed correctly obtain

lim_{s→∞} s [1/s]_sp = lim_{s→∞} s · (1/s) = 1.
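The theorem is just as easy to exercise with SymPy; the sketch below (my own check) applies it to the nonproper transform of Example 2, where x(0+) must come from the strictly proper part alone.

```python
# IVT on X(s) = (s^2 + 1)/(s + 1): strip the polynomial part, then let s -> oo.
import sympy as sp

s = sp.symbols('s')
X = (s**2 + 1) / (s + 1)
Xsp = sp.apart(X, s) - (s - 1)          # strictly proper part, 2/(s + 1)
print(sp.limit(s * Xsp, s, sp.oo))      # 2 = x(0+), the jump of 2 e^{-t} u_{-1}(t)
```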
Proof: Start with the decomposition

X(s) = Xsp (s) + P (s),

where P is a polynomial, and Xsp is the strictly proper part of X. We know that P (s) is the
transform of a generalized function, p(t), which is a linear combination of an impulse and
its derivatives at 0, whereas the time function, xsp (t), corresponding to Xsp (s) has at most
a discontinuity at 0, but is otherwise smooth. Note that all the singular stuff ends up with
value 0 at t = 0+. Hence x(0+) = xsp (0+). From the derivative theorem we get
lim_{s→∞} s X_sp(s) − x_sp(0−) = lim_{s→∞} ∫_{0−}^{∞} ẋ_sp(t) e^{−st} dt.

The right hand side decomposes to

lim_{s→∞} [ ∫_{0−}^{0+} ẋ_sp(t) e^{−st} dt + ∫_{0+}^{∞} ẋ_sp(t) e^{−st} dt ] = ∫_{0−}^{0+} ẋ_sp(t) dt + ∫_{0+}^{∞} ẋ_sp(t) ( lim_{s→∞} e^{−st} ) dt = ∆0 x_sp.

Hence we get lim_{s→∞} s[X(s)]_sp − x(0−) = x(0+) − x(0−), thus proving the IVT. ✷

Note that nothing in the proof really assumed that x is a PET function. So let’s check a
more general case. Assume that f is a PET function, with L− transform F (s), and consider
f+ = f H and its shifted version fτ (t) = f+ (t − τ ). For τ > 0, fτ has Laplace transform
e^{−sτ} F(s). Now

lim_{s→∞} e^{−sτ} s F(s) = 0,

since s → ∞ must be along ℜs → ∞, and consequently the factor e^{−sτ} → 0. This gives the correct result, since f+(t − τ) is zero for t ∈ [0, τ).

If we shift f+ to the left by τ , i.e., consider f−τ (t) = f+ (t + τ ), then the unilateral
transform of f−τ is the same as the unilateral transform of f−τ H. By the left shift property,
this transform is

e^{sτ} [ F(s) − ∫_{0−}^{τ} f(t) e^{−st} dt ].

The initial value theorem then states that

f_{−τ}(0+) = lim_{s→∞} e^{sτ} [ s F(s) − s ∫_{0−}^{τ−} f(t) e^{−st} dt ].

The bracketed term remains bounded in the limit, but now e^{sτ} → ∞, which seems puzzling. However, a careful analysis correctly gives

e^{sτ} [ F(s) − ∫_{0−}^{τ−} f(t) e^{−st} dt ] = ∫_{τ−}^{∞} f(t) e^{−s(t−τ)} dt = ∫_{0−}^{∞} f(t + τ) e^{−st} dt.

This is the L− transform of f_{−τ}, and the initial value theorem yields f(τ+).

3.10.2 Final Value Theorem

f(∞) = lim_{t→∞} f(t) = lim_{s→0} s F(s),        (3.14)

provided that f(t) converges to a constant, thus no sustained oscillation and no divergence, neither oscillatory nor non-oscillatory. Equivalently, this means that F(s) has all its poles in the open left half plane, with perhaps a single pole at 0. Still another way to express this condition is: 0 ∈ RoC(sF(s)).
If we apply the theorem to the example in Application 1, we get

y(∞) = lim_{s→0} [s Y(s)] = lim_{s→0} s · (5s + 2)/(s(s + 3)) = lim_{s→0} (5s + 2)/(s + 3) = 2/3,

which is consistent with what we found there.
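A one-line SymPy check of this final value (my own sketch); the pole condition holds since the only poles of Y are at 0 and −3.

```python
# FVT: y(infinity) = lim_{s -> 0} s Y(s) for Y(s) = (5s + 2)/(s(s + 3)).
import sympy as sp

s = sp.symbols('s')
Y = (5 * s + 2) / (s * (s + 3))
print(sp.limit(s * Y, s, 0))    # 2/3
```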

Proof: We again start from

s X(s) − x(0−) = ∫_{0−}^{∞} ẋ(t) e^{−st} dt.

Taking the limit for s → 0 gives

lim_{s→0} s X(s) − x(0−) = ∫_{0−}^{∞} ẋ(t) dt = x(∞) − x(0−),

provided the integral on the right converges (for which it is necessary that 0 is in the RoC). The FVT then follows. ✷

Final Behavior Theorem:²


This is a generalization of the final value theorem.
Here we want to determine the behavior of the function x(t) as t → ∞, based only on
knowledge of its Laplace transform X(s), thus without explicitly inverting it.
This can be done based on the fact that when we have two exponential functions, e^{at} and e^{bt} with ℜa > ℜb, then writing x(t) = A e^{at} + B e^{bt} = A e^{at}(1 + (B/A) e^{(b−a)t}), and noting that the function e^{(b−a)t} decays exponentially (in an oscillatory fashion if a and/or b are complex), we see that x(t) behaves as A e^{at} in the long run. Now A e^{at} is the time function corresponding to the term A/(s − a) in the partial fraction expansion of [X(s)]_p. Hence A is determined by

A = lim_{s→a} [(s − a) X_p(s)],

and a is the rightmost pole of X(s). But note that we have assumed that this pole is real
and has multiplicity one.

If the rightmost pole has higher multiplicity, say the partial fraction expansion has the compound term

Σ_{i=1}^{m} A_i/(s − a)^i,

with A_m ≠ 0, then the corresponding time function is the quasi-polynomial

Σ_{i=1}^{m} [A_i/(i − 1)!] t^{i−1} e^{at}.

Clearly, the dominant term is [A_m/(m − 1)!] t^{m−1} e^{at}. Hence, we obtain the exact dominant behavior from knowledge of the rightmost pole and its multiplicity. The coefficient can be derived from X_p(s) as follows:

A_m = lim_{s→a} [(s − a)^m X_p(s)].

² This can be omitted on a first reading.

How should one proceed if the dominant poles (the rightmost poles) are complex conjugate,
say α ± jβ? We shall defer this to the problems, only considering for now the special case
of the final behavior of an asymptotically stable system (see Chapter 4) with a sinusoidal
input.

Periodic Steady State


If the driving input to a system is purely sinusoidal, then in the long run, at least if the RoC condition is satisfied (we’ll see in the next chapter that it relates to stability), the system will have a sinusoidal output after the transient dies out. The idea is to compute this “periodic steady state” again without the explicit computation (Laplace transform inversion) of the full response.
The following can be established: If H(s) is a rational transfer function of an asymptotically stable system, with input cos(ω0 t), then since

L− cos ω0 t = s/(s² + ω0²),

the Laplace transform of the output is Y(s) = H(s) · s/(s² + ω0²). Since it is assumed that H is asymptotically stable, the rightmost poles of Y(s) are the ones on the imaginary axis, ±jω0, due to the input factor. Thus, the partial fraction expansion of Y(s) contains the compound term

(B s + C ω0)/(s² + ω0²),

which corresponds to the final behavior we need to find. In the time domain, this corresponds to

B cos ω0 t + C sin ω0 t = √(B² + C²) cos(ω0 t − φ).

We note that the amplitude is √(B² + C²) and the phase lag is defined by

cos φ = B/√(B² + C²),   sin φ = C/√(B² + C²).

But how can we determine this from knowledge of Y(s) only? Note that

[ ((s² + ω0²)/s) Y_p(s) ]_{s=jω0} = B − jC.

Thus the amplitude of the sustained oscillation will be given by the modulus

| [ ((s² + ω0²)/s) Y_p(s) ]_{s=jω0} |,

and the phase lag (with the convention above, B − jC = √(B² + C²) e^{−jφ}) by

φ = −arg [ ((s² + ω0²)/s) Y_p(s) ]_{s=jω0}.
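As an illustration (my own sketch, with an assumed stable transfer function H(s) = 1/(s + 1) and input cos(2t)): the formula above recovers the amplitude and phase lag without inverting Y(s); the quantity B − jC is just H(jω0).

```python
# Periodic steady state of H(s) = 1/(s + 1) driven by cos(2 t), read off from Y(s).
import sympy as sp

s = sp.symbols('s')
w0 = 2
H = 1 / (s + 1)                                  # example transfer function (assumption)
Y = H * s / (s**2 + w0**2)                       # transform of the response to cos(w0 t)
expr = sp.cancel((s**2 + w0**2) / s * Y)         # cancel the common factor before substituting
BmjC = sp.simplify(expr.subs(s, sp.I * w0))      # = B - jC = H(j w0)
amp = sp.simplify(sp.Abs(BmjC))                  # amplitude sqrt(B^2 + C^2)
lag = sp.simplify(-sp.arg(BmjC))                 # phase lag phi (cos(w0 t - phi) convention)
print(amp, lag)                                  # sqrt(5)/5, atan(2)
```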

