
Advanced Derivatives Modelling:

Stochastic calculus toolkit

January 23, 2015

1 Expectation and conditional expectation operators

1.1 Properties of the expectation operator

Proposition 1.1 (Linearity). If α and β are real constants and X and Y are integrable
random variables, then

(1.1) E [αX + βY ] = αE [X] + βE [Y ] .

Proposition 1.2 (Jensen’s inequality). If φ is a convex, real-valued function defined on R, and if X is an integrable random variable, then

(1.2) φ (E [X]) ≤ E [φ (X)] .
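For instance, taking φ(x) = x² in (1.2) gives E [X]² ≤ E [X²], which is equivalent to Var(X) ≥ 0.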

1.2 Properties of conditional expectations

We use the notion of filtration of information, where the space of events determined at
time t is denoted Ft . We thus have an increasing filtration of information indexed by time:

(1.3) F0 ⊂ Ft1 ⊂ Ft2 ⊂ · · ·

The linearity and Jensen’s inequality properties are inherited by expectations conditioned
on a sigma-algebra. We recall three additional properties that are useful. First, conditioning on no information we obviously recover the unconditional expectation:

(1.4) E [X|F0 ] = E [X] .

Second, if we estimate a random variable based on an information set and then estimate that estimate based on a smaller information set, we get the same result as estimating the random variable directly based on the smaller set:
Proposition 1.3 (Tower Law). If X is integrable, then for any s < t
(1.5) E [E [X|Ft ] |Fs ] = E [X|Fs ]
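As a minimal illustration, let X = Z1 + Z2 with Z1 and Z2 independent, integrable, mean-zero random variables, Ft = σ(Z1 , Z2 ) and Fs = σ(Z1 ). Then E [X|Ft ] = X, so E [E [X|Ft ] |Fs ] = E [X|Fs ] = Z1 .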

Finally, if we condition on information that is independent of the random variable, then we get the same value as with the unconditional expectation:
Proposition 1.4 (Independence). If X is integrable and independent of F, then
(1.6) E [X|F] = E [X]

2 Martingale

Definition 2.1 (Martingale). An adapted, integrable stochastic process (Xt )(t≥0) is a martingale with respect
to a measure P and a filtration (Ft )(t≥0) if
(2.1) EP [Xt |Fs ] = Xs , ∀t ≥ s.

3 Brownian motion

Definition 3.1 (Brownian motion). Let a stochastic process (Bt )(t≥0) be a collection of
random variables indexed by time t satisfying B0 = 0. Then the process (Bt )(t≥0) is a
Brownian motion if for all sequences of times t0 = 0 ≤ t1 ≤ · · · ≤ tm the increments
Bt1 − B0 , Bt2 − Bt1 , . . . , Btm − Btm−1
are independent and normally distributed with moments:
(3.1) E [Bti+1 − Bti ] = 0,
(3.2) Var [Bti+1 − Bti ] = ti+1 − ti .

Using the independence of the increments and the linearity of the expectation operator (for s ≤ t, write Bs Bt = Bs (Bt − Bs ) + Bs² and take expectations), we can deduce
(3.3) E [Bs Bt ] = s ∧ t.

A Brownian motion is a martingale with respect to its filtration (Ft )(t≥0) , i.e. for any
0 ≤ s ≤ t:
(3.4) E [Bt |Fs ] = Bs .
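This follows from the properties of the increments: for 0 ≤ s ≤ t,
E [Bt |Fs ] = E [Bt − Bs |Fs ] + E [Bs |Fs ] = E [Bt − Bs ] + Bs = 0 + Bs ,
using the independence of the increment Bt − Bs from Fs (Proposition 1.4) and the fact that Bs is Fs -measurable.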

4 Ito calculus

4.1 Ito’s integral

Ito’s integral is defined for integrands that are adapted stochastic processes. It gives meaning to stochastic integrals with respect to a Brownian motion:
(4.1) I(t) = ∫_0^t f (u) dBu .
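As an informal numerical illustration of (4.1), the integral can be approximated by the left-endpoint sum Σi f (ti )(Bti+1 − Bti ) over a fine grid; the left-endpoint evaluation is what keeps the approximating sum adapted. Below is a minimal sketch in Python, where the grid size, the number of paths and the choice f (u) = Bu are illustrative assumptions rather than part of the text:

```python
import numpy as np

# Minimal sketch: approximate I(t) = \int_0^t f(u) dB_u by a left-endpoint sum.
# Grid size, number of paths and the choice f(u) = B_u are illustrative assumptions.
rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 500, 10_000
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # B_{t_{i+1}} - B_{t_i} ~ N(0, dt)
B = np.cumsum(dB, axis=1)                                   # Brownian paths on the grid
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])     # integrand evaluated at left endpoints

I_t = np.sum(B_left * dB, axis=1)       # approximates \int_0^t B_u dB_u for each path

print(I_t.mean())                       # ~ 0     (martingale property, Proposition 4.2 below)
print(I_t.var(), t**2 / 2)              # ~ t^2/2 (Ito isometry, Proposition 4.4 below)
```

A midpoint or right-endpoint evaluation of the integrand would converge to a different (Stratonovich-type) integral; the left endpoint is what matches the Ito definition.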

"Obvious" properties of Ito integrals are path continuity (as a function of t) and linearity:
Proposition 4.1 (Linearity). For any constants α and β and adapted processes f (t) and g(t),
(4.2) α ∫_0^t f (u) dBu + β ∫_0^t g(u) dBu = ∫_0^t (αf (u) + βg(u)) dBu .

We recall three more important properties. First, Ito’s integral is a martingale.

Proposition 4.2 (Martingale). For any adapted integrable process f (u) and horizon time t, the stochastic integral
∫_0^t f (u) dBu
is a martingale.

By implication we have
(4.3) E [ ∫_0^t f (u) dBu ] = 0,
(4.4) E [ ∫_0^t f (u) dBu | Fs ] = ∫_0^s f (u) dBu .

In addition, the quadratic variation of the stochastic integral is given by


Proposition 4.3 (Quadratic variation of Ito’s integral). The quadratic variation of the stochastic integral up to a time t is given by
(4.5) [ ∫_0^t f (u) dBu , ∫_0^t f (u) dBu ] = ∫_0^t f (u)² du .

Finally, the variance of the stochastic integral is given by the expectation of the quadratic
variation:

Proposition 4.4 (Ito isometry). The Ito integral satisfies
(4.6) E [ ( ∫_0^t f (u) dBu )² ] = E [ ∫_0^t f (u)² du ] ,
(4.7) E [ ( ∫_s^t f (u) dBu )² | Fs ] = E [ ∫_s^t f (u)² du | Fs ] .
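As a consistency check of (4.6), take f (u) = Bu . Anticipating Ito’s formula of Section 4.3, ∫_0^t Bu dBu = ½(Bt² − t), and since E [Bt⁴ ] = 3t²,
E [ ( ∫_0^t Bu dBu )² ] = ¼ E [ (Bt² − t)² ] = ¼ (3t² − 2t² + t²) = t²/2 = ∫_0^t u du = E [ ∫_0^t Bu² du ] .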

Furthermore, in the case of deterministic integrands, i.e. functions of time only, the stochastic integral is normally distributed.
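For example, ∫_0^t u dBu is normally distributed with mean 0 and variance ∫_0^t u² du = t³/3.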

4.2 Ito processes

Definition 4.1 (Ito process). Let (Bt )(t≥0) be a Brownian motion process with associated filtration (Ft )(t≥0) . If µ(u) and σ(u) are adapted stochastic processes satisfying the integrability conditions
(4.8) ∫_0^t |µ(u)| du < ∞, ∀t > 0,
(4.9) E [ ∫_0^t σ(u)² du ] < ∞, ∀t > 0,
then a stochastic process (Xt )(t≥0) starting from a known value X0 and of the form
(4.10) Xt = X0 + ∫_0^t µ(u) du + ∫_0^t σ(u) dBu
is called an Ito process.
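In the simplest case of constant coefficients µ(u) = µ and σ(u) = σ, the Ito process reduces to Xt = X0 + µt + σBt (arithmetic Brownian motion), so that Xt is normally distributed with mean X0 + µt and variance σ²t.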

4.3 Ito’s lemma

Proposition 4.5 (Ito’s formula). Let (Xt )(t≥0) denote an Ito process. If f (t, x) is a function with continuous partial derivatives ft (t, x), fx (t, x) and fxx (t, x), then

f (T, XT ) − f (0, X0 ) = ∫_0^T ft (t, Xt ) dt + ∫_0^T fx (t, Xt ) dXt + ½ ∫_0^T fxx (t, Xt ) d[X, X]t
(4.11)              = ∫_0^T ( ft (t, Xt ) + fx (t, Xt )µ(t) + ½ fxx (t, Xt )σ(t)² ) dt + ∫_0^T fx (t, Xt )σ(t) dBt .

We often use the differential notation of Ito’s formula:

df (t, Xt ) = ft (t, Xt ) dt + fx (t, Xt ) dXt + ½ fxx (t, Xt ) dXt dXt
(4.12)        = ( ft (t, Xt ) + fx (t, Xt )µ(t) + ½ fxx (t, Xt )σ(t)² ) dt + fx (t, Xt )σ(t) dBt .
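As a standard illustration, take Xt = Bt (so µ = 0, σ = 1) and f (t, x) = x². Then ft = 0, fx = 2x, fxx = 2, and (4.12) gives
d(Bt²) = dt + 2Bt dBt ,
i.e. Bt² − t = 2 ∫_0^t Bu dBu , consistent with the martingale property of the Ito integral (Proposition 4.2) and with the consistency check after Proposition 4.4.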

5 Girsanov theorem

Definition 5.1 (Radon-Nikodym derivative process). Let P be a probability measure and (Ft )(t≥0) a filtration. If Z is an almost surely positive random variable satisfying E [Z] = 1, then the martingale process
(5.1) Zt = E [Z|Ft ] , ∀0 ≤ t ≤ T
defines a Radon-Nikodym derivative process.
The Radon-Nikodym derivative process permits change-of-measure techniques thanks to two important properties that we recall below.
Proposition 5.1 (Probability measure induced by the Radon-Nikodym derivative). Let (Zt )(t≥0) be a Radon-Nikodym derivative process. We then have a new probability measure P̃ defined by
(5.2) P̃(A) = ∫_A Z dP, ∀A ∈ F.

If Y is an Ft -measurable random variable, then for any s ≤ t ≤ T

(5.3) EP̃ [Y ] = EP [Y Zt ] ,
(5.4) EP̃ [Y |Fs ] = (1/Zs ) EP [Y Zt |Fs ] .
Theorem 5.1 (Girsanov’s theorem). Let (Bt )(t≥0) be a Brownian motion with respect to a probability measure P and (Ft )(t≥0) its associated filtration. If (θt )(t≥0) is an adapted process satisfying the condition
(5.5) E [ ∫_0^T θu² Zu² du ] < ∞,

then the exponential martingale process (Zt )(t≥0) given by
(5.6) Zt = exp ( − ∫_0^t θu dBu − ½ ∫_0^t θu² du )

is a Radon-Nikodym derivative process. Under the probability measure P̃ obtained by the change of measure
(5.7) dP̃/dP |Ft = Zt ,
the process (B̃t )(t≥0) defined by
(5.8) B̃t = Bt + ∫_0^t θu du
is a Brownian motion under the probability measure P̃.
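As a minimal Monte Carlo sketch of the theorem (all numerical values, including the constant θ, are illustrative assumptions), one can sample BT under P, form ZT from (5.6) and B̃T from (5.8), and check that E [ZT ] ≈ 1 and that, when weighting by ZT as in (5.3), B̃T has mean ≈ 0 and variance ≈ T under P̃:

```python
import numpy as np

# Minimal sketch of Girsanov's theorem with a constant theta (illustrative assumption).
# With theta constant, \int_0^T theta dB_u = theta * B_T and \int_0^T theta^2 du = theta^2 * T.
rng = np.random.default_rng(1)
T, theta, n_paths = 1.0, 0.7, 1_000_000

B_T = rng.normal(0.0, np.sqrt(T), size=n_paths)      # B_T under P
Z_T = np.exp(-theta * B_T - 0.5 * theta**2 * T)      # Radon-Nikodym weight, eq. (5.6)
B_tilde_T = B_T + theta * T                          # eq. (5.8)

print(Z_T.mean())                    # ~ 1:  E_P[Z_T] = 1
print(np.mean(Z_T * B_tilde_T))      # ~ 0:  mean of B~_T under P~, via eq. (5.3)
print(np.mean(Z_T * B_tilde_T**2))   # ~ T:  second moment of B~_T under P~
```

The weighting by ZT is exactly the change-of-measure identity (5.3): expectations under P̃ are computed as ZT -weighted expectations under P.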
