
Lecture 1: Introduction to applied probability and probabilistic models

B. Figliuzzi
November 14th, 2022

In these lecture notes, we recall classical results of measure theory and probability theory that will be extensively used during this week's course.

1 σ-algebra
Probability theory aims at providing mathematical tools for describing random events or experiments. Let Ω be the fundamental set of all possible outcomes of a random experiment. The aim of probability theory is to quantify the occurrence of some subsets of Ω, called events. Let us consider for instance all possible outcomes of a dice throw. In this case, the fundamental set Ω will be constituted by the outcomes

Ω = {1, 2, 3, 4, 5, 6}.

Events can be defined as subsets of Ω. For instance, the event "the outcome of the dice throw is 5" simply corresponds to the subset {5}. This approach allows us to consider more complicated events. For instance, the event "the outcome of the dice throw is NOT 5" corresponds to the subset {1, 2, 3, 4, 6} = {5}^c. Similarly, the event "the outcome of the dice throw is strictly less than 5" corresponds to the subset {1, 2, 3, 4}. In general, we can note that the conjunction or, on the contrary, the disjunction of events, as well as the negation of events, are events too. In mathematical terms, the set of all events thus satisfies the properties of an algebraic structure called a σ-algebra.
Definition 1.1 A σ-algebra A on Ω is a class of subsets of Ω such that
- ∅ ∈ A,
- If A ∈ A, then Ac ∈ A,
- For every countable family {Ai, i ∈ I} of elements of A, ∪i∈I Ai ∈ A.
The set Ω along with its σ-algebra A is called a measurable space (Ω, A).

Problem 1.1 Show that a σ-algebra is stable under countable intersection.

Problem 1.2 Let Ω be some fundamental set. Check that (Ω, P(Ω)) is a
measurable space, where P(Ω) denotes the set of all subsets of Ω.
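
As a quick companion to Problem 1.2 (our own illustration, not part of the original notes), the following Python sketch enumerates P(Ω) for the dice example and verifies the three axioms of Definition 1.1 directly:

    from itertools import chain, combinations

    omega = frozenset({1, 2, 3, 4, 5, 6})

    # Enumerate the power set P(Omega) as a set of frozensets.
    power_set = {frozenset(s) for s in chain.from_iterable(
        combinations(omega, r) for r in range(len(omega) + 1))}

    # Axiom 1: the empty set belongs to the sigma-algebra.
    assert frozenset() in power_set

    # Axiom 2: stability under complementation.
    assert all(omega - a in power_set for a in power_set)

    # Axiom 3 (finite case): stability under pairwise union, which
    # suffices for countable unions on a finite Omega.
    assert all(a | b in power_set for a in power_set for b in power_set)

    print("P(Omega) is a sigma-algebra with", len(power_set), "events")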

For uncountable fundamental sets, the σ-algebra P(Ω) can be complicated. Therefore, one often considers simpler σ-algebras generated by some class of subsets of Ω.

Definition 1.2 The σ-algebra generated by a class C of subsets of Ω is the smallest σ-algebra containing C. In particular, when Ω = R^d, the σ-algebra B(R^d) generated by the open sets of R^d is called the Borel σ-algebra of R^d.

2 Measures and probability

A measure on a set is a non-negative number assigned to each suitable subset of that set, which intuitively characterizes its size. In this sense, a measure can be conveniently seen as a generalization of the concepts of length, area, and volume. Technically, it is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set E. It must assign 0 to the empty set and be countably additive: if we consider a large subset F of E that we decompose into smaller disjoint subsets, the measure of F will necessarily be the sum of the measures of the smaller subsets. Probability theory strongly relies on the notion of measure. It considers measures that assign to the whole set the size 1, and interprets measurable subsets as events whose probability is given by the measure.

2.1 General definition

A measure is a function that associates to each element of a σ-algebra a non-negative real number (possibly infinite). The area in R^2 is a simple example of a measure defined on the measurable space (R^2, B(R^2)).

Definition 2.1 Let (Ω, A) be a measurable space. A measure on (Ω, A) is a function m : A → R+ ∪ {+∞} such that
- m(∅) = 0,
- m is σ-additive, meaning that

m(∪_{i=1}^{+∞} Ai) = Σ_{i=1}^{+∞} m(Ai)

with Ai ∩ Aj = ∅ if i ≠ j, for all elements Ai and Aj of the σ-algebra A.

A measure is said to be finite if the measure of the whole space is finite:
m(Ω) < ∞.
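
To make Definition 2.1 concrete, here is a minimal sketch of ours, using the counting measure m(A) = |A| on a finite set as an assumed example, that checks σ-additivity on a disjoint decomposition:

    # Counting measure on subsets of a finite set: m(A) = |A|.
    def m(a):
        return len(a)

    # Disjoint decomposition of F = {1,...,6} into smaller subsets.
    F = {1, 2, 3, 4, 5, 6}
    parts = [{1, 2}, {3}, {4, 5, 6}]

    assert m(set()) == 0                      # m(empty set) = 0
    assert sum(m(p) for p in parts) == m(F)   # additivity on this decomposition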

Examples

1. An example of measure is provided by the Dirac measure δx associated to a point x:

   δx(A) = 1 if x ∈ A, 0 otherwise.

2. Another fundamental object is the indicator function of the subset A of Ω,

   1A(x) = 1 if x ∈ A, 0 otherwise,

   which is related to the Dirac measure by δx(A) = 1A(x).

3. An additional example is the Lebesgue measure. Let (R^d, B(R^d)) be the Euclidean measurable space of dimension d with its Borel σ-algebra. A Radon measure on B(R^d) is a measure m such that m(B) < ∞ for every bounded subset B of B(R^d).
   Among all Radon measures, the Lebesgue measure plays a particular role. The Lebesgue measure is first defined on hypercubes of R^d to be

   µ(Q) = (x^1_1 − x^1_0) ··· (x^d_1 − x^d_0),

   where Q is the hypercube [x^1_0, x^1_1] × ... × [x^d_0, x^d_1], and can then be extended to any subset in B(R^d). In R^3 (resp. R^2), the Lebesgue measure of a domain is simply its volume (resp. its area).
   One can easily check that the Lebesgue measure has the property of being isometry-invariant. For instance, in the plane R^2, if we translate and/or rotate some domain, its area remains unchanged. In addition, we have the fundamental result:

Theorem 2.1 Let ν be some Radon measure on (R^d, B(R^d)). If ν is isometry-invariant, then there exists a real number λ > 0 such that ν = λµ_d, where µ_d is the Lebesgue measure on (R^d, B(R^d)).
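
The isometry invariance can be observed numerically. The sketch below is our own illustration (the rectangle, the rotation angle and the sample size are arbitrary choices): it estimates the area of a planar domain and of a rotated copy by Monte Carlo sampling in a bounding box.

    import math, random

    random.seed(0)

    def in_rectangle(x, y):
        # Domain D = [0, 1] x [0, 0.5], of Lebesgue measure 0.5.
        return 0.0 <= x <= 1.0 and 0.0 <= y <= 0.5

    def in_rotated(x, y, angle=0.7):
        # (x, y) lies in the rotated domain iff its preimage under the
        # rotation lies in D.
        c, s = math.cos(angle), math.sin(angle)
        u, v = c * x + s * y, -s * x + c * y
        return in_rectangle(u, v)

    n, box = 200_000, 2.0  # sample uniformly in [-box, box]^2
    hits = hits_rot = 0
    for _ in range(n):
        x, y = random.uniform(-box, box), random.uniform(-box, box)
        hits += in_rectangle(x, y)
        hits_rot += in_rotated(x, y)

    area = (2 * box) ** 2
    print("area of D        :", area * hits / n)       # ~0.5
    print("area of rotated D:", area * hits_rot / n)   # ~0.5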

2.2 Probability measure


A probability measure is a measure such that the measure of the fundamental
space is m(Ω) = 1. If we go back to our first example of a dice throw, the
probability that the result belongs to the set Ω = {1, 2, 3, 4, 5, 6} is indeed
1 and the measure of each subset is simply interpreted as the probability of
the corresponding event.
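
For the dice example, this interpretation can be sketched in a few lines of Python (our illustration, assuming a fair die, i.e. the uniform measure P(A) = |A|/6):

    from fractions import Fraction

    omega = {1, 2, 3, 4, 5, 6}

    def P(event):
        # Uniform probability measure on Omega: P(A) = |A| / |Omega|.
        return Fraction(len(event & omega), len(omega))

    assert P(omega) == 1          # P(Omega) = 1
    print(P({5}))                 # 1/6, "the outcome is 5"
    print(P(omega - {5}))         # 5/6, "the outcome is NOT 5"
    print(P({1, 2, 3, 4}))        # 2/3, "the outcome is strictly less than 5"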

Problem 2.2 Let P be a probability measure on some measurable space (Ω, A) and {Ai} be some family of events. Show that:
- if Ai ⊂ Aj, then P(Ai) ≤ P(Aj),
- P(∪i Ai) ≤ Σi P(Ai).

3 Measurable functions and random variables


Definition 3.1 Let f : E → F be some function between two measurable
spaces (E, E) and (F, F). f is said to be measurable if for all element B of
the σ-algebra F, f −1 (B) is an element of the σ-algebra E.

In practice, most usual functions are measurable. In particular, all continuous functions from R^d to R^d′ are measurable for the Borel σ-algebras.

Intuitively, the notion of measurability ensures that the output of the function f remains "compatible" in some mathematical sense with the σ-algebra structure of the measurable space (E, E).

In probability theory, a random variable X is a measurable function defined on the fundamental set Ω of all possible outcomes of some random experiment. As an example, we can consider the events constituted by two dice rolls. The function

X : (n1, n2) ∈ Ω × Ω ↦ n1 + n2 ∈ N,

which associates their sum to the results of the two dice rolls, is a random variable on Ω × Ω. For a random variable, the notion of measurability allows us to specify the probability of each measurable subset B ⊂ F based upon the probability P defined on the probability space (E, E, P):

P{X ∈ B} = P{X⁻¹(B)}. (1)
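
The law of this random variable can be tabulated by direct enumeration; the short sketch below (our illustration) computes P{X ∈ B} through Eq. (1), i.e. by measuring the preimage X⁻¹(B) under the uniform measure on Ω × Ω:

    from fractions import Fraction
    from itertools import product

    omega2 = list(product(range(1, 7), repeat=2))  # Omega x Omega, 36 outcomes

    def X(outcome):
        n1, n2 = outcome
        return n1 + n2

    def P_X_in(B):
        # P{X in B} = P(X^{-1}(B)), uniform measure on Omega x Omega.
        preimage = [w for w in omega2 if X(w) in B]
        return Fraction(len(preimage), len(omega2))

    print(P_X_in({7}))        # 6/36 = 1/6
    print(P_X_in({2, 12}))    # 2/36 = 1/18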

Definition 3.2 (Expected value) If X is a random variable defined on a probability space (Ω, A, P), the expected value of X, denoted by E[X], is defined as the integral

E[X] := ∫_Ω X(ω) P(dω). (2)

From its definition, it is clear that the expected value is a linear operator.
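
Linearity can be checked empirically; the following sketch (ours, with an arbitrary sample size) estimates E[2X + 3Y] for two independent dice rolls and compares it with 2E[X] + 3E[Y]:

    import random

    random.seed(1)
    n = 100_000
    xs = [random.randint(1, 6) for _ in range(n)]
    ys = [random.randint(1, 6) for _ in range(n)]

    # Both quantities should be close to 2 * 3.5 + 3 * 3.5 = 17.5.
    lhs = sum(2 * x + 3 * y for x, y in zip(xs, ys)) / n
    rhs = 2 * sum(xs) / n + 3 * sum(ys) / n
    print(lhs, rhs)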

Independence Two random variables X and Y are said to be independent if the joint probability can be factorized: for all measurable sets A and B,

P{X ∈ A, Y ∈ B} = P{X ∈ A} P{Y ∈ B}.

Intuitively, two random variables are independent if the realization of one does not affect the probability distribution of the other.
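
The factorization can be observed on simulated data; in this sketch of ours (the events A and B are arbitrary choices), the empirical frequency of {X ∈ A, Y ∈ B} is compared with the product of the marginal frequencies for two independent dice:

    import random

    random.seed(2)
    n = 100_000
    pairs = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(n)]

    A, B = {1, 2}, {6}
    p_joint = sum(1 for x, y in pairs if x in A and y in B) / n
    p_prod = (sum(1 for x, _ in pairs if x in A) / n) * \
             (sum(1 for _, y in pairs if y in B) / n)
    print(p_joint, p_prod)  # both close to (2/6) * (1/6) = 1/18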

3.1 Discrete random variables


Discrete random variables take their values in a specified finite or countable list of values {xi, i ∈ N}. For i ∈ N, the probability law pi of a discrete random variable X specifies the probability that the random variable takes the value xi:

P{X = xi} = pi. (3)

The expectation of the random variable is then given by:

E[X] = Σ_{i=0}^{+∞} xi pi. (4)

Bernoulli random variables A Bernoulli random variable is a random variable that takes its values in the subset {0, 1} of N. The probability law of a Bernoulli random variable is given by

P{X = 1} = p, P{X = 0} = 1 − p, (5)

where p is some real number between 0 and 1. Using Eq. (4), it is straightforward to show that

E[X] = p. (6)
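
A quick numerical check of Eq. (6), as a sketch of ours with p = 0.3 chosen arbitrarily:

    import random

    random.seed(3)
    p, n = 0.3, 100_000
    samples = [1 if random.random() < p else 0 for _ in range(n)]
    print(sum(samples) / n)  # empirical mean, close to p = 0.3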

Binomial random variables A binomial random variable is defined as the sum of n independent Bernoulli random variables Y1, ..., Yn with common parameter p:

X = Y1 + ... + Yn. (7)

It is a discrete random variable that takes its values in the set {0, ..., n} ⊂ N. The probability law of a binomial random variable is given by:

∀k = 0, ..., n, pk := P{X = k} = C(n, k) p^k (1 − p)^(n−k), (8)

where C(n, k) = n!/(k!(n − k)!) denotes the binomial coefficient. Using the linearity of the expectation, we find that

E[X] = E[Σ_{k=1}^{n} Yk] = Σ_{k=1}^{n} E[Yk] = np. (9)
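
Eq. (8) can be cross-checked against simulation. The sketch below (ours; n = 5 and p = 0.4 are arbitrary) builds X as a sum of Bernoulli variables and compares the empirical frequencies with C(n, k) p^k (1 − p)^(n−k):

    import math, random
    from collections import Counter

    random.seed(4)
    n_trials, p, n_samples = 5, 0.4, 100_000

    counts = Counter(sum(1 if random.random() < p else 0
                         for _ in range(n_trials))
                     for _ in range(n_samples))

    for k in range(n_trials + 1):
        exact = math.comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
        print(k, counts[k] / n_samples, exact)  # empirical vs Eq. (8)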

Poisson random variables A Poisson random variable is a discrete random variable taking its values in N according to the probability law

pk := P{X = k} = (θ^k / k!) exp(−θ), (10)

where θ > 0 is a strictly positive parameter characterizing the distribution.

Problem 3.1 Show that the expectation of a Poisson random variable X with Poisson parameter θ is

E[X] = θ.
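
A numerical companion to Problem 3.1 (our sketch; θ = 2.5 is arbitrary), which evaluates the series Σ_k k pk from Eq. (10) by truncation, accumulating the terms iteratively to avoid huge factorials:

    import math

    theta = 2.5
    term = math.exp(-theta)   # p_0
    expectation = 0.0
    for k in range(1, 60):
        term *= theta / k     # p_k obtained from p_{k-1}
        expectation += k * term
    print(expectation)        # close to theta = 2.5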

3.2 Continuous random variables


Continuous random variables are random variables that take their values in a continuous set. In this paragraph, we restrict our study to the case of probabilities defined by a density function p.

Let us consider the measurable space (R^d, B(R^d)). Let p : R^d → R be a non-negative function such that

∫_{R^d} p(r) dr = 1,

dr being the Lebesgue measure on R^d. For each Borel set A of R^d, let us consider the quantity

µ(A) = ∫_A p(r) dr.

It is clear that the function µ : B(R^d) → R+ is a probability measure. The function p is called the density associated to the measure µ.
Let X be a random variable on a probability space (Ω, A, P) taking its values in R. We assume that the probability law of X is described by a density function p:

∀A ∈ B(R), P{X ∈ A} = ∫_A p(x) dx.

In this case, the expected value of X is

E[X] = ∫_R x p(x) dx.

Similarly, its variance is

var[X] = ∫_R (x − E[X])² p(x) dx.

Continuous random variables are often characterized by their cumulative distribution function F, defined by:

F(x) = P{X < x} = ∫_{−∞}^{x} p(t) dt.

Obviously, F(x) → 0 when x → −∞ and F(x) → 1 when x → +∞.


An example of probability density is provided by the uniform law on some interval [a, b] of R. For the uniform law, the probability density is

p(x) = 1/(b − a) if a ≤ x ≤ b, 0 otherwise.

Hence, we find, for all x such that a ≤ x ≤ b,

P{X > x} = ∫_x^b dt/(b − a) = (b − x)/(b − a).
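
This formula can be validated by simulation; in the sketch below (ours, with [a, b] = [1, 4] and x = 2 as arbitrary choices) the empirical frequency of {X > x} is compared with (b − x)/(b − a):

    import random

    random.seed(5)
    a, b, x, n = 1.0, 4.0, 2.0, 100_000
    samples = [random.uniform(a, b) for _ in range(n)]
    print(sum(1 for s in samples if s > x) / n)  # close to (b - x)/(b - a) = 2/3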
Another fundamental example of probability density is provided by the gamma distribution. The gamma distribution is characterized by two parameters, namely the shape parameter k and the scale parameter λ. Its density is given, for x > 0, by

p(x) = x^(k−1) exp(−x/λ) / (Γ(k) λ^k).

Problem 3.2 Calculate the expectation and the variance of the gamma law.
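
To check your answer to Problem 3.2 numerically, one can sample from the gamma law. The sketch below is our own addition: it relies on Python's random.gammavariate, whose two arguments match the shape k and the scale λ of the density above, and prints the empirical mean and variance for comparison with your analytic result.

    import random

    random.seed(6)
    k, lam, n = 3.0, 2.0, 200_000
    samples = [random.gammavariate(k, lam) for _ in range(n)]

    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    print(mean, var)  # compare with your analytic expectation and variance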

4 Notes
The lecture notes of Le Gall [1] provide a very good introduction to measure,
integration and probability theory.

References
[1] J.-F. Le Gall. Intégration, probabilités et processus aléatoires.
