ACTL3301/5301 Quantitative Risk Management
Week 2: Multivariate Models (i)
▶ Basics of Multivariate Modelling
▶ Probabilistic Properties
▶ Estimation
▶ Multivariate Normal Distributions
▶ Normal Variance Mixtures
▶ Normal Mean-Variance Mixtures
Version 2024. Copyright UNSW School of Risk and Actuarial Studies
Reading: McNeil et al (2015), Sections 6.1-6.3
Multivariate Models
▶ Market, credit, insurance, and operational risks are influenced by
multiple factors, and dependence between these risk factors is
important.
▶ Multivariate models are required for risk measurement and
management.
▶ A standard model has been the multivariate normal model, but it
is inadequate for real world risks.
▶ Extensions to the multivariate normal such as elliptical
distributions are important modeling tools, which we will
consider here.
Notation and Definitions
▶ d-dimensional random vector: X = (X1 , . . . , Xd )⊺
▶ Joint distribution function is
FX (x1 , . . . , xd ) = P (X1 ≤ x1 , . . . , Xd ≤ xd ) ,
which we rewrite succinctly as
FX (x) = P (X ≤ x) .
▶ Marginal distribution function of Xi is
Fi (xi ) = P (Xi ≤ xi ) = FX (∞, . . . , xi , . . . , ∞) .
▶ Joint distribution function of X is absolutely continuous if there
is some non-negative function fX , called joint density, such that
FX (x1 , . . . , xd ) = ∫_{−∞}^{x1} · · · ∫_{−∞}^{xd} fX (u1 , . . . , ud ) du1 · · · dud .
Notation and Definitions
▶ The survival function of X is
F X (x1 , . . . , xd ) = P (X1 > x1 , . . . , Xd > xd ) ,
which we rewrite succinctly as
F X (x) = P (X > x) .
▶ Each marginal survival function is
F i (xi ) = P (Xi > xi ) = F X (−∞, . . . , xi , . . . , −∞) .
Notation and Definitions
Consider two vectors X = (X1 , . . . , Xd )⊺ and Y = (Y1 , . . . , Yn )⊺ .
▶ The conditional distribution of Y given X = x has density
fY|X (y|x) = f (x, y) / fX (x) ,
with distribution function
FY|X (y|x) = ∫_{−∞}^{y} fY|X (z|x) dz = ∫_{z1 =−∞}^{y1} · · · ∫_{zn =−∞}^{yn} f (x, z) / fX (x) dz.
▶ The two vectors X and Y are independent if and only if the joint
distribution factorizes such that
F (x, y) = FX (x) FY (y) .
Moments
▶ Mean vector:
µ = E (X) = (E (X1 ) , . . . , E (Xd ))⊺
▶ Covariance matrix:
Σ = cov (X) = E ((X − E (X)) (X − E (X))⊺ )
with (i, j)th element
σij = cov (Xi , Xj ) = E (Xi Xj ) − E (Xi ) E (Xj ) .
Its diagonal elements are the variances σ11 , . . . , σdd .
▶ The correlation matrix ρ (X) is a d × d matrix with the (i, j)th
element given by
ρij = ρ (Xi , Xj ) = cov (Xi , Xj ) / √(var (Xi ) var (Xj )) ∈ [−1, 1].
Linear Operations
▶ For any matrix B ∈ Rk×d and vector b ∈ Rk
E (BX + b) = BE (X) + b
cov (BX + b) = Bcov (X) B⊺
▶ Covariance matrices are positive semi-definite.
Explain? (Note that a⊺ Σa = var (a⊺ X) ≥ 0 for any a ∈ Rd .)
Cholesky Factorization
By Cholesky factorization, a symmetric positive-definite matrix Σ
can be factorized into
Σ = LL⊺
for a lower triangular matrix L with positive diagonal elements,
denoted by L = Σ1/2 .
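As a quick numerical illustration (a minimal sketch using NumPy, with an arbitrary example matrix), the factor L = Σ1/2 can be computed and verified:

```python
import numpy as np

# An illustrative symmetric positive-definite matrix (values chosen arbitrarily).
Sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])

# np.linalg.cholesky returns the lower-triangular factor L with Sigma = L @ L.T.
L = np.linalg.cholesky(Sigma)

# L is lower triangular with positive diagonal entries, and L @ L.T recovers Sigma.
print(np.allclose(L @ L.T, Sigma))        # True
```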
Characteristic Function
▶ The characteristic function of X is
ϕX (t) = E (e^{it⊺ X}) , t ∈ Rd .
▶ The moment generating function of X is
MX (t) = E (e^{t⊺ X}) , t ∈ Rd .
Estimation
▶ Assume the observations X(1) , . . . , X(n) are identically distributed and serially uncorrelated, with mean vector µ and finite covariance matrix Σ.
▶ Sample mean vector X̄ and sample covariance matrix S:
X̄ = (1/n) ∑_{i=1}^{n} X(i) ,    S = (1/(n − 1)) ∑_{i=1}^{n} (X(i) − X̄)(X(i) − X̄)⊺ .
▶ Both X̄ and S are unbiased estimators of µ and Σ, respectively.
▶ The sample correlation matrix R is a d × d matrix with its (j, k)th element given by
rjk = sjk / √(sjj skk) .
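A minimal sketch of these estimators in NumPy (the data here are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))        # n = 1000 observations of a d = 3 vector

xbar = X.mean(axis=0)                     # sample mean vector
S = np.cov(X, rowvar=False)               # sample covariance, 1/(n-1) normalisation
R = np.corrcoef(X, rowvar=False)          # sample correlation matrix

# r_jk = s_jk / sqrt(s_jj * s_kk), computed directly from S:
s = np.sqrt(np.diag(S))
R_manual = S / np.outer(s, s)
```

The two routes to the correlation matrix (via `np.corrcoef` and via the formula) agree.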
Multivariate Normal Distributions
▶ Definition: A random vector X = (X1 , . . . , Xd )⊺ has a
multivariate normal or Gaussian distribution if
X =ᵈ µ + AZ (equality in distribution)
where Z = (Z1 , . . . , Zk )⊺ is a vector of i.i.d. univariate standard
normal variables and A ∈ Rd×k and µ ∈ Rd are a matrix and
vector of constants.
▶ In the finance and insurance literature, it is commonly assumed
that the data are i.i.d. multivariate normal. Generally, this is a
poor assumption for financial and insurance data. Nevertheless,
this serves as a basis for more general models (e.g. by “mixing”).
▶ We have
E (X) = µ,
cov (X) = AA⊺ =: Σ,
where Σ is a positive semi-definite matrix.
▶ Characteristic function of standard univariate normal Z is
ϕZ (t) = E (e^{itZ}) = exp (−t²/2) ,
and the characteristic function of X is
ϕX (t) = E (exp (it⊺ X)) = exp (it⊺ µ − ½ t⊺ Σt) , t ∈ Rd .
▶ Standard notation is X ∼ Nd (µ, Σ)
▶ If Σ is diagonal, then the components of X are mutually
independent. (The converse also holds.)
▶ For the case rank (A) = d ≤ k, the covariance matrix has full
rank d and is therefore invertible (non-singular) and positive
definite
▶ X has an absolutely continuous distribution function with joint
density
f (x) = (1 / ((2π)^{d/2} |Σ|^{1/2})) exp (−½ (x − µ)⊺ Σ−1 (x − µ)) ,
where |Σ| is the determinant of Σ.
Contour
Points with equal density lie on ellipsoids determined by equations
(x − µ)⊺ Σ−1 (x − µ) = c for constants c > 0.
Simulation of Multivariate Normal
▶ Many sophisticated risk management and economic capital
models require numerical simulation in their implementation.
▶ Hence, the availability of simulation algorithms is essential.
▶ To generate a vector X with distribution Nd (µ, Σ),
▶ obtain Cholesky decomposition of Σ to obtain Σ1/2 ;
▶ generate a vector Z = (Z1 , . . . , Zd )⊺ of i.i.d. univariate standard
normal random variables;
▶ set
X = µ + Σ1/2 Z.
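The three steps above can be sketched as follows (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

L = np.linalg.cholesky(Sigma)             # step 1: Cholesky factor Sigma^{1/2}
n = 100_000
Z = rng.standard_normal((2, n))           # step 2: i.i.d. N(0,1), one column per draw
X = mu[:, None] + L @ Z                   # step 3: X = mu + Sigma^{1/2} Z

# For large n, the sample moments should be close to mu and Sigma.
```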
Linear Combinations of Multivariate Normal
▶ Linear combinations of multivariate normal vectors remain
multivariate normal (show using characteristic function).
▶ For X with distribution Nd (µ, Σ) and B ∈ Rk×d and b ∈ Rk ,
BX + b ∼ Nk (Bµ + b, BΣB⊺ ) .
Special case: For a ∈ Rd , we have
a⊺ X ∼ N (a⊺ µ, a⊺ Σa)
which is the basis for the “variance-covariance approach” to market
risk.
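A minimal sketch of the variance-covariance approach for a linear portfolio (all parameter values are hypothetical; `NormalDist` from the Python standard library supplies the normal quantile):

```python
import numpy as np
from statistics import NormalDist

# Hypothetical portfolio weights and loss-distribution parameters.
a = np.array([0.5, 0.3, 0.2])
mu = np.array([0.01, 0.02, 0.005])
Sigma = np.array([[0.040, 0.010, 0.000],
                  [0.010, 0.090, 0.020],
                  [0.000, 0.020, 0.010]])

# a'X ~ N(a'mu, a'Sigma a), so quantiles are available in closed form.
m = a @ mu
s = np.sqrt(a @ Sigma @ a)

alpha = 0.99
VaR = m + s * NormalDist().inv_cdf(alpha)  # 99% value-at-risk of the portfolio loss
```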
Multivariate Normal - Quadratic Forms
If X ∼ Nd (µ, Σ) with Σ positive definite, then
(X − µ)⊺ Σ−1 (X − µ) ∼ χ²d .
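This can be checked by simulation (a sketch with arbitrary parameters; a chi-squared variable with d degrees of freedom has mean d and variance 2d):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
mu = np.zeros(d)
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)           # positive definite by construction

L = np.linalg.cholesky(Sigma)
n = 200_000
X = mu[:, None] + L @ rng.standard_normal((d, n))

# Mahalanobis quadratic form (X - mu)' Sigma^{-1} (X - mu), one value per draw.
C = X - mu[:, None]
Q = np.einsum('in,in->n', C, np.linalg.solve(Sigma, C))
```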
Normal Mixture Distributions
▶ The multivariate normal is popular due to its very nice
probabilistic properties.
▶ However, it is often deficient in practical applications, as
illustrated in the case study, and is often criticized for having
tails that are too thin and for being symmetric.
▶ Multivariate Normal Mixtures, as generalizations of the
multivariate normal, can partly remedy these deficiencies.
▶ A key idea is to introduce additional randomness in:
▶ the covariance matrix;
▶ both the mean vector and the covariance matrix.
Normal Variance Mixture Distribution
Definition: The random vector X is said to have a multivariate normal
variance mixture if
X =ᵈ µ + √W AZ
where:
▶ Z follows a k-dimensional standard normal distribution,
▶ W ≥ 0 is a non-negative, scalar valued random variable
independent of Z,
▶ µ ∈ Rd and A ∈ Rd×k are a vector and a matrix of constants,
respectively.
These are variance mixtures since (X|W = w) ∼ Nd (µ,wΣ) where
Σ = AA⊺ .
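A minimal simulation sketch, with a hypothetical two-point mixing variable W (a "calm"/"stressed" volatility regime), illustrating that the mixture has mean µ and covariance E(W)Σ:

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)

# Hypothetical two-point mixing variable: W = 1 ("calm") or W = 4 ("stressed").
n = 200_000
W = rng.choice([1.0, 4.0], p=[0.9, 0.1], size=n)
Z = rng.standard_normal((2, n))
X = mu[:, None] + np.sqrt(W) * (L @ Z)    # normal variance mixture draws

EW = 0.9 * 1.0 + 0.1 * 4.0                # E(W) = 1.3, so cov(X) = 1.3 * Sigma
```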
Moments
Provided W has finite expectation, we have:
E (X) = E (µ + √W AZ)
      = µ + E (√W) A E (Z)
      = µ
and
cov (X) = E ((√W AZ)(√W AZ)⊺ )
        = E (W) A E (ZZ⊺ ) A⊺
        = E (W) Σ.
Density
The density of X is
f (x) = ∫_0^∞ fX|W (x|w) dH (w)
      = ∫_0^∞ (w^{−d/2} / ((2π)^{d/2} |Σ|^{1/2})) exp (−(x − µ)⊺ Σ−1 (x − µ) / (2w)) dH (w) ,
where H is the df of W.
Characteristic Function
Example: Show that the characteristic function of X is given by
E[e^{it⊺ X}] = e^{it⊺ µ} Ĥ (t⊺ Σt / 2) ,
where Ĥ(θ) = ∫_0^∞ e^{−θw} dH(w). We write X ∼ Md (µ, Σ, Ĥ).
Multivariate t Distribution as a Special Case
▶ If W has an inverse gamma distribution, W ∼ IG (½ν, ½ν), so that
ν/W ∼ χ²ν , then X has a multivariate t distribution with ν degrees
of freedom, X ∼ td (ν, µ, Σ).
▶ Since E (W) = ν/(ν − 2), we have cov (X) = (ν/(ν − 2)) Σ, so that Σ is not
the covariance matrix of X. Note cov (X) is defined only if
ν > 2.
▶ Density:
f (x) = [Γ ((ν + d)/2) / (Γ (ν/2) (πν)^{d/2} |Σ|^{1/2})] (1 + (x − µ)⊺ Σ−1 (x − µ)/ν)^{−(ν+d)/2}
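A simulation sketch of the t case (ν and the other parameters are illustrative), using the fact that ν/W ∼ χ²ν:

```python
import numpy as np

rng = np.random.default_rng(3)
nu = 6
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
L = np.linalg.cholesky(Sigma)

n = 300_000
W = nu / rng.chisquare(nu, size=n)        # W ~ IG(nu/2, nu/2), i.e. nu/W ~ chi^2_nu
Z = rng.standard_normal((2, n))
X = mu[:, None] + np.sqrt(W) * (L @ Z)    # X ~ t_d(nu, mu, Sigma)

# cov(X) = nu/(nu - 2) * Sigma, not Sigma itself.
```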
Normal Mean-Variance Mixtures
Definition: The random vector X is said to have a (multivariate)
normal mean-variance mixture distribution if
X =ᵈ m(W) + √W AZ
where:
▶ Z ∼ Nk (0, Ik ),
▶ W ≥ 0 is a non-negative, scalar valued random variable
independent of Z,
▶ m : [0, ∞) → Rd is a measurable function,
▶ A ∈ Rd×k is a matrix of constants.
Discussion
Assume m(W) = µ + Wγ. Find the mean vector and covariance
matrix of X.
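For reference, a sketch of the computation using the conditioning (tower and conditional-variance) identities, assuming E(W) and var(W) are finite:

```latex
\begin{align*}
E(X) &= E\{E(X \mid W)\} = E(\mu + W\gamma) = \mu + E(W)\,\gamma,\\
\operatorname{cov}(X) &= E\{\operatorname{cov}(X \mid W)\}
                        + \operatorname{cov}\{E(X \mid W)\}\\
     &= E(W)\,\Sigma + \operatorname{var}(W)\,\gamma\gamma^{\top},
\qquad \text{where } \Sigma = AA^{\top}.
\end{align*}
```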