6.431x Probability – The Science of Uncertainty and Data

This is a cheat sheet for probability based on the online course given by Prof. John Tsitsiklis and Prof. Patrick Jaillet. Compiled by Janus B. Advincula. Last Updated December 8, 2020.

Probability Models and Axioms

Sample Space
We begin by listing the possible outcomes, Ω, of an experiment, i.e., the sample space. The list must be:
- mutually exclusive,
- collectively exhaustive, and
- at the right granularity.

Probability Axioms
An event, A, is a subset of the sample space. Probability is assigned to events according to the following axioms:
- Nonnegativity: P(A) ≥ 0
- Normalization: P(Ω) = 1
- (Finite) Additivity: If A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).

Consequences of the Axioms
- P(A) ≤ 1
- P(∅) = 0
- P(A) + P(A^c) = 1
- If A ⊂ B, then P(A) ≤ P(B)
- Union bound: P(A ∪ B) ≤ P(A) + P(B)

Countable Additivity Axiom
If A1, A2, A3, ... is an infinite sequence of disjoint events, then
    P(A1 ∪ A2 ∪ ...) = P(A1) + P(A2) + ...,   i.e.,   P(∪_i Ai) = Σ_i P(Ai).

Discrete Uniform Law
Assume that Ω is finite and consists of n equally likely elements, and that A ⊂ Ω consists of k elements. Then
    P(A) = k/n.

Mathematical Background

Sets
A set is a collection of distinct elements.
    x ∈ S ∪ T  ⇔  x ∈ S or x ∈ T
    x ∈ S ∩ T  ⇔  x ∈ S and x ∈ T

De Morgan's Laws
    (∪_n Sn)^c = ∩_n Sn^c        (∩_n Sn)^c = ∪_n Sn^c

Geometric Series
    S = Σ_{i=0}^∞ α^i = 1 + α + α² + ··· = 1/(1 − α),   for |α| < 1.

Bonferroni's Inequality
    P(A1 ∩ ··· ∩ An) ≥ P(A1) + ··· + P(An) − (n − 1)

Conditioning and Independence

Conditioning and Bayes' Rule

Conditional Probability
We denote the probability of A, given that B occurred, by P(A|B), defined as
    P(A|B) = P(A ∩ B)/P(B),   P(B) > 0.
Conditional probabilities share the properties of ordinary probabilities.

The Multiplication Rule
From the definition of conditional probability, we have
    P(A ∩ B) = P(B) P(A|B) = P(A) P(B|A).
In general,
    P(A1 ∩ A2 ∩ ··· ∩ An) = P(A1) Π_{i=2}^n P(Ai | A1 ∩ ··· ∩ A_{i−1}).

Total Probability Theorem
We partition the sample space into A1, A2, A3, .... Then
    P(B) = P(B ∩ A1) + P(B ∩ A2) + P(B ∩ A3) + ···
         = P(A1)P(B|A1) + P(A2)P(B|A2) + P(A3)P(B|A3) + ···
         = Σ_i P(Ai)P(B|Ai).

Bayes' Rule
    P(Ai|B) = P(Ai)P(B|Ai) / Σ_j P(Aj)P(B|Aj)
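A minimal R sketch of the total probability theorem and Bayes' rule, using a made-up two-hypothesis example (the prior and likelihood numbers below are arbitrary):

    # Hypothetical example: a condition with 1% prevalence and a test that is
    # positive with probability 0.95 (condition) or 0.10 (no condition).
    prior      <- c(condition = 0.01, no_condition = 0.99)   # P(A_i)
    likelihood <- c(condition = 0.95, no_condition = 0.10)   # P(B | A_i), B = "positive test"

    p_B <- sum(prior * likelihood)            # total probability theorem: P(B)
    posterior <- prior * likelihood / p_B     # Bayes' rule: P(A_i | B)
    posterior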
Independence

Independence of Two Events
A and B are independent if knowing whether A occurred gives no information about whether B occurred. More formally, A and B (which have nonzero probability) are independent if and only if one of the following equivalent statements holds:
    P(A ∩ B) = P(A)P(B)
    P(A|B) = P(A)
    P(B|A) = P(B)

Independence of Event Complements
If A and B are independent, then A and B^c are independent:
    P(A ∩ B^c) = P(A)P(B^c).

Conditional Independence
A and B are conditionally independent given C if
    P(A ∩ B|C) = P(A|C)P(B|C).
Conditional independence does not imply independence, and independence does not imply conditional independence.

Counting

Basic Counting Principle
For a selection that can be done in r stages, with ni choices at each stage i, the number of possible selections is n1 n2 ··· nr.

Permutations
The number of ways of ordering n distinct elements is n! = 1 · 2 · 3 ··· n.

Combinations
Given a set of n elements, the number of ways of choosing k distinct elements without regard to order (i.e., the number of k-element subsets) is
    (n choose k) = n!/(k!(n − k)!).

Subsets
The number of subsets of {1, ..., n} is
    Σ_{k=0}^n (n choose k) = (n choose 0) + ··· + (n choose n) = 2^n.

Partitions
Given an n-element set and nonnegative integers n1, n2, ..., nr whose sum is equal to n, the number of ways of partitioning the set into r disjoint subsets, with the ith subset containing exactly ni elements, is equal to
    n!/(n1! n2! ··· nr!).
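The counting formulas map directly onto base R's factorial and choose; a quick sketch with arbitrary numbers:

    n <- 5; k <- 2
    factorial(n)                    # permutations of n distinct elements: n!
    choose(n, k)                    # n! / (k! (n - k)!)
    sum(choose(n, 0:n)) == 2^n      # the 2^n subsets of an n-element set
    # partitioning 6 elements into subsets of sizes 3, 2, 1:
    factorial(6) / (factorial(3) * factorial(2) * factorial(1))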
Discrete Random Variables

Random Variables
A random variable associates a value to every possible outcome. It can take discrete or continuous values.

Probability Mass Function (PMF)
Gives the probability that a discrete random variable takes on the value x:
    pX(x) = P(X = x) = P({ω ∈ Ω : X(ω) = x}).
The PMF satisfies
    pX(x) ≥ 0   and   Σ_x pX(x) = 1.

Bernoulli random variable
A Bernoulli random variable X with parameter 0 ≤ p ≤ 1, X ~ Ber(p), takes the following values:
    X = 1 w.p. p,   X = 0 w.p. 1 − p.

Discrete uniform random variable
A discrete uniform random variable X between a and b, X ~ Uni[a, b], takes any of the values {a, a + 1, ..., b}, each with probability 1/(b − a + 1).

Binomial random variable
A binomial random variable X with parameters n and p ∈ [0, 1], X ~ Bin(n, p), takes values in the set {0, 1, ..., n} and has PMF
    pX(k) = (n choose k) p^k (1 − p)^(n−k),   for k = 0, 1, ..., n.

Geometric random variable
A geometric random variable X with parameter p ∈ [0, 1], X ~ Geo(p), counts the number of trials up to and including the first success; it takes values in the set {1, 2, ...} with PMF
    pX(k) = (1 − p)^(k−1) p.

Expectation
The expectation (expected value) of a discrete random variable is defined as
    E[X] = Σ_x x pX(x).

Expectation of a Bernoulli random variable
The expected value of a Bernoulli r.v. with parameter p is E[X] = p.

Expectation of a Discrete Uniform random variable
The expected value of a discrete uniform r.v. on {a, ..., b} is E[X] = (b + a)/2.
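A quick R check of the Binomial PMF (that it sums to 1 and that Σ_k k pX(k) = np), with arbitrary n and p:

    n <- 10; p <- 0.3
    k <- 0:n
    pmf <- dbinom(k, n, p)     # p_X(k) = (n choose k) p^k (1 - p)^(n - k)
    sum(pmf)                   # normalization: the PMF sums to 1
    sum(k * pmf)               # E[X] = sum_k k p_X(k), which equals n * p = 3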
Properties of Expectations
- If X ≥ 0, then E[X] ≥ 0.
- If a ≤ X ≤ b, then a ≤ E[X] ≤ b.
- If c is a constant, E[c] = c.
- Let X be a r.v. and let Y = g(X). Then
      E[Y] = E[g(X)] = Σ_x g(x) pX(x).

Linearity of Expectation
    E[aX + b] = aE[X] + b

Variance and Standard Deviation

Definition of Variance
Variance is a measure of the spread of a PMF. For a random variable with mean µ = E[X], it is defined as
    Var(X) = E[(X − µ)²] = E[X²] − (E[X])².

Properties
    Var(aX + b) = a² Var(X)

Standard Deviation
    σX = √Var(X)

Variance of a Bernoulli random variable
    Var(X) = p(1 − p)

Variance of a Discrete Uniform random variable
For X ~ Uni[a, b],
    Var(X) = (1/12)(b − a)(b − a + 2).
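A short R check of Var(X) = E[X²] − (E[X])² and of the closed-form variance above, for an arbitrary Uni[a, b]:

    a <- 3; b <- 9
    x <- a:b
    pmf <- rep(1 / (b - a + 1), length(x))
    EX  <- sum(x * pmf)                # E[X] = (a + b) / 2 = 6
    EX2 <- sum(x^2 * pmf)
    EX2 - EX^2                         # Var(X) from the definition
    (b - a) * (b - a + 2) / 12         # closed-form variance, also 4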
Conditional PMF and Expectation

Total Expectation Theorem
Given a random variable X and events A1, ..., An that partition the sample space, we have
    E[X] = P(A1)E[X|A1] + ··· + P(An)E[X|An].

Continuous Random Variables

Continuous Random Variables (CRVs)
Definition: A random variable is continuous if it can be described by a PDF, fX(x), such that
    P(a ≤ X ≤ a + δ) ≈ fX(a) · δ.

Expectation
Assuming that ∫_{−∞}^∞ |x| fX(x) dx < ∞, the expected value of a random variable X is
    E[X] = ∫_{−∞}^∞ x fX(x) dx.

Properties
- If X ≥ 0, then E[X] ≥ 0.
- If a ≤ X ≤ b, then a ≤ E[X] ≤ b.
- E[g(X)] = ∫_{−∞}^∞ g(x) fX(x) dx
- E[aX + b] = aE[X] + b

What is the Probability Density Function (PDF)?
The PDF f is the derivative of the CDF F:
    F'(x) = f(x).
A PDF is nonnegative and integrates to 1. By the fundamental theorem of calculus, to get from PDF back to CDF we can integrate:
    F(x) = ∫_{−∞}^x f(t) dt.

[Figure: a PDF (left) and the corresponding CDF (right).]

To find the probability that a CRV takes on a value in an interval, integrate the PDF over that interval:
    F(b) − F(a) = ∫_a^b f(x) dx.

Further Topics on Random Variables

Covariance and Correlation
Covariance is the analog of variance for two random variables.
    Cov(X, Y) = E[(X − E(X))(Y − E(Y))] = E(XY) − E(X)E(Y)
Note that
    Cov(X, X) = E(X²) − (E(X))² = Var(X).
Correlation is a standardized version of covariance that is always between −1 and 1:
    Corr(X, Y) = Cov(X, Y) / √(Var(X)Var(Y))

Covariance and Independence
If two random variables are independent, then they are uncorrelated. The converse is not necessarily true (e.g., consider X ~ N(0, 1) and Y = X²).
    X ⊥⊥ Y  ⟹  Cov(X, Y) = 0  ⟹  E(XY) = E(X)E(Y)

Covariance and Variance
The variance of a sum can be found by
    Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y)
    Var(X1 + X2 + ··· + Xn) = Σ_{i=1}^n Var(Xi) + 2 Σ_{i<j} Cov(Xi, Xj)
If X and Y are independent then they have covariance 0, so
    X ⊥⊥ Y  ⟹  Var(X + Y) = Var(X) + Var(Y).
If X1, X2, ..., Xn are identically distributed and have the same covariance relationships (often by symmetry), then
    Var(X1 + X2 + ··· + Xn) = n Var(X1) + 2 (n choose 2) Cov(X1, X2).

Covariance Properties
For random variables W, X, Y, Z and constants a, b:
    Cov(X, Y) = Cov(Y, X)
    Cov(X + a, Y + b) = Cov(X, Y)
    Cov(aX, bY) = ab Cov(X, Y)
    Cov(W + X, Y + Z) = Cov(W, Y) + Cov(W, Z) + Cov(X, Y) + Cov(X, Z)

Correlation is location-invariant and scale-invariant
For any constants a, b, c, d with ac > 0,
    Corr(aX + b, cY + d) = Corr(X, Y)
(if ac < 0, the correlation flips sign).
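A small R simulation illustrating these identities; the joint distribution of (X, Y) below is made up purely to produce correlated samples:

    set.seed(1)
    x <- rnorm(1e5)
    y <- 0.5 * x + rnorm(1e5)                      # built to be correlated with x
    var(x + y)                                     # left-hand side
    var(x) + var(y) + 2 * cov(x, y)                # right-hand side: identical
    cor(x, y); cov(x, y) / sqrt(var(x) * var(y))   # two ways to get the correlation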
A Deeper View of Conditioning

Conditional Expectation as a Random Variable
    E[X|Y] = g(Y)

Law of Iterated Expectations
    E[E[X|Y]] = E[X]

Law of Total Variance
    Var(X) = E[Var(X|Y)] + Var(E[X|Y])

Sum of a Random Number of Independent RVs
Let Y = X1 + ··· + XN, where N is a random variable independent of the i.i.d. Xi. Then
    Var(Y) = E[Var(Y|N)] + Var(E[Y|N]) = E[N] Var(X) + (E[X])² Var(N).

Bayesian Inference
Consider an unknown random variable Θ with a prior distribution pΘ(θ) or fΘ(θ). We make an observation X modeled as a random variable with distribution pX|Θ(x|θ) or fX|Θ(x|θ). Using the appropriate version of Bayes' rule, we can construct the posterior distribution pΘ|X(θ|x) or fΘ|X(θ|x).

Point Estimates

Maximum a posteriori probability (MAP)
The MAP estimate, θ*, is the value at which the posterior distribution is maximum:
    pΘ|X(θ*|x) = max_θ pΘ|X(θ|x)   or   fΘ|X(θ*|x) = max_θ fΘ|X(θ|x).

Least Mean Squares (LMS)
The LMS estimate is the conditional expectation (mean) of the posterior distribution:
    θ̂ = E[Θ|X = x].

Discrete Θ, Discrete X
    pΘ|X(θ|x) = pΘ(θ) pX|Θ(x|θ) / pX(x),   pX(x) = Σ_{θ'} pΘ(θ') pX|Θ(x|θ')

Discrete Θ, Continuous X
    pΘ|X(θ|x) = pΘ(θ) fX|Θ(x|θ) / fX(x),   fX(x) = Σ_{θ'} pΘ(θ') fX|Θ(x|θ')

Continuous Θ, Continuous X
    fΘ|X(θ|x) = fΘ(θ) fX|Θ(x|θ) / fX(x),   fX(x) = ∫ fΘ(θ') fX|Θ(x|θ') dθ'

Continuous Θ, Discrete X
    fΘ|X(θ|x) = fΘ(θ) pX|Θ(x|θ) / pX(x),   pX(x) = ∫ fΘ(θ') pX|Θ(x|θ') dθ'
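A minimal R sketch of the discrete-Θ, discrete-X case: a coin whose bias Θ takes one of three candidate values (all numbers below are arbitrary), with the MAP and LMS estimates computed from the posterior:

    theta <- c(0.25, 0.50, 0.75)             # candidate values of Theta (a coin's bias)
    prior <- c(0.25, 0.50, 0.25)             # p_Theta(theta)
    n <- 10; x <- 7                          # observe 7 heads in 10 tosses
    lik  <- dbinom(x, n, theta)              # p_{X|Theta}(x | theta)
    post <- prior * lik / sum(prior * lik)   # posterior PMF
    theta[which.max(post)]                   # MAP estimate
    sum(theta * post)                        # LMS estimate E[Theta | X = x]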
Linear Least Mean Squares (LLMS) Estimation
In some cases, the conditional expectation E[Θ|X] may be hard to compute or implement. In that case, we can restrict our attention to estimators of the form Θ̂ = aX + b. Then,
    Θ̂_LLMS = E[Θ] + (Cov(Θ, X)/Var(X)) (X − E[X])
            = E[Θ] + ρ (σΘ/σX) (X − E[X]).

Limit Theorems and Classical Statistics

Markov Inequality
If X ≥ 0 and a > 0, then
    P(X ≥ a) ≤ E[X]/a.

Chebyshev Inequality
If the variance is small, then X is unlikely to be too far from the mean:
    P(|X − µ| ≥ c) ≤ σ²/c².

The Weak Law of Large Numbers (WLLN)
Let X1, X2, X3, ... be i.i.d. with mean µ and variance σ². The sample mean is
    Mn = (X1 + ··· + Xn)/n.
The Weak Law of Large Numbers states that, for every ε > 0,
    P(|Mn − µ| ≥ ε) → 0   as n → ∞.
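A small R simulation of the WLLN, with Chebyshev's inequality applied to Mn (so the bound is σ²/(nε²), since Var(Mn) = σ²/n); the distribution and constants are arbitrary:

    set.seed(2)
    mu <- 2; sigma2 <- 4; eps <- 0.5
    for (n in c(10, 100, 1000)) {
      Mn <- replicate(1e4, mean(rnorm(n, mean = mu, sd = sqrt(sigma2))))
      cat("n =", n,
          " estimated P(|Mn - mu| >= eps):", mean(abs(Mn - mu) >= eps),
          " Chebyshev bound:", sigma2 / (n * eps^2), "\n")
    }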
Transformations

One Variable Transformations
Let's say that we have a random variable X with PDF fX(x), but we are also interested in some function of X. We call this function Y = g(X). Also let y = g(x). If g is differentiable and strictly increasing (or strictly decreasing), then the PDF of Y is
    fY(y) = fX(x) |dx/dy| = fX(g⁻¹(y)) |d/dy g⁻¹(y)|.
The derivative of the inverse transformation is called the Jacobian.

Two Variable Transformations
Similarly, let's say we know the joint PDF of U and V but are also interested in the random vector (X, Y) defined by (X, Y) = g(U, V). Let
    ∂(u, v)/∂(x, y) = [ ∂u/∂x  ∂u/∂y ; ∂v/∂x  ∂v/∂y ]
be the Jacobian matrix. If the entries in this matrix exist and are continuous, and the determinant of the matrix is never 0, then
    fX,Y(x, y) = fU,V(u, v) |∂(u, v)/∂(x, y)|.
The inner bars tell us to take the matrix's determinant, and the outer bars tell us to take the absolute value. For a 2 × 2 matrix,
    | a  b ; c  d | = |ad − bc|.

Convolutions

Convolution Integral
If you want to find the PDF of the sum of two independent CRVs X and Y, you can do the following integral:
    fX+Y(t) = ∫_{−∞}^∞ fX(x) fY(t − x) dx.

Example
Let X, Y ~ N(0, 1) be i.i.d. Then for each fixed t,
    fX+Y(t) = ∫_{−∞}^∞ (1/√(2π)) e^{−x²/2} (1/√(2π)) e^{−(t−x)²/2} dx.
By completing the square and using the fact that a Normal PDF integrates to 1, this works out to fX+Y(t) being the N(0, 2) PDF.

Bernoulli and Poisson Processes

Definition
We have a Poisson process of rate λ arrivals per unit time if the following conditions hold:
1. The number of arrivals in a time interval of length t is Pois(λt).
2. Numbers of arrivals in disjoint time intervals are independent.
For example, the numbers of arrivals in the time intervals [0, 5], (5, 12), and [13, 23) are independent with Pois(5λ), Pois(7λ), Pois(10λ) distributions, respectively.

[Figure: a sample path of a Poisson process, with arrival times T1, T2, T3, T4, T5 marked on the time axis.]

Count-Time Duality
Consider a Poisson process of emails arriving in an inbox at rate λ emails per hour. Let Tn be the time of arrival of the nth email (relative to some starting time 0) and Nt be the number of emails that arrive in [0, t]. Let's find the distribution of T1. The event T1 > t, the event that you have to wait more than t hours to get the first email, is the same as the event Nt = 0, which is the event that there are no emails in the first t hours. So
    P(T1 > t) = P(Nt = 0) = e^{−λt}  ⟹  P(T1 ≤ t) = 1 − e^{−λt}.
Thus we have T1 ~ Expo(λ). By the memoryless property and similar reasoning, the interarrival times between emails are i.i.d. Expo(λ), i.e., the differences Tn − Tn−1 are i.i.d. Expo(λ).
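A short R simulation of count-time duality: generate i.i.d. Expo(λ) interarrival times and check that the count in [0, t] behaves like Pois(λt) (the rate and horizon below are arbitrary):

    set.seed(3)
    lambda <- 2; t <- 5
    counts <- replicate(1e4, {
      arrivals <- cumsum(rexp(100, rate = lambda))   # arrival times T1, T2, ...
      sum(arrivals <= t)                             # N_t, the number of arrivals in [0, t]
    })
    mean(counts); var(counts)                # both close to lambda * t = 10, as for Pois(10)
    mean(counts <= 8); ppois(8, lambda * t)  # compare CDFs at one point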
Order Statistics

Definition
Let's say you have n i.i.d. r.v.s X1, X2, ..., Xn. If you arrange them from smallest to largest, the ith element in that list is the ith order statistic, denoted X(i). So X(1) is the smallest in the list and X(n) is the largest in the list. Note that the order statistics are dependent, e.g., learning X(4) = 42 gives us the information that X(1), X(2), X(3) are ≤ 42 and X(5), X(6), ..., X(n) are ≥ 42.

Distribution
Taking n i.i.d. random variables X1, X2, ..., Xn with CDF F(x) and PDF f(x), the CDF and PDF of X(i) are:
    F_{X(i)}(x) = P(X(i) ≤ x) = Σ_{k=i}^n (n choose k) F(x)^k (1 − F(x))^{n−k}
    f_{X(i)}(x) = n (n−1 choose i−1) F(x)^{i−1} (1 − F(x))^{n−i} f(x)

Uniform Order Statistics
The jth order statistic of i.i.d. U1, ..., Un ~ Unif(0, 1) is U(j) ~ Beta(j, n − j + 1).

MVN, LLN, CLT

Central Limit Theorem (CLT)

Approximation using CLT
We use ∼̇ to denote "is approximately distributed". We can use the Central Limit Theorem to approximate the distribution of a random variable Y = X1 + X2 + ··· + Xn that is a sum of n i.i.d. random variables Xi. Let E(Y) = µY and Var(Y) = σY². The CLT says
    Y ∼̇ N(µY, σY²).
If the Xi are i.i.d. with mean µX and variance σX², then µY = nµX and σY² = nσX². For the sample mean X̄n, the CLT says
    X̄n = (1/n)(X1 + X2 + ··· + Xn) ∼̇ N(µX, σX²/n).

Asymptotic Distributions using CLT
We use →_D to denote "converges in distribution to" as n → ∞. The CLT says that if we standardize the sum X1 + ··· + Xn, then the distribution of the sum converges to N(0, 1) as n → ∞:
    (1/(σX √n)) (X1 + ··· + Xn − nµX) →_D N(0, 1).
In other words, the CDF of the left-hand side goes to the standard Normal CDF, Φ. In terms of the sample mean, the CLT says
    √n (X̄n − µX)/σX →_D N(0, 1).
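A minimal R illustration of the CLT: standardized sums of i.i.d. Expo(1) r.v.s (chosen arbitrarily) have a CDF close to Φ:

    set.seed(4)
    n <- 200; mu <- 1; sigma <- 1        # Expo(1) has mean 1 and standard deviation 1
    z <- replicate(1e4, (sum(rexp(n, rate = 1)) - n * mu) / (sigma * sqrt(n)))
    mean(z <= 1); pnorm(1)               # empirical CDF vs. standard Normal CDF
    mean(z <= -1); pnorm(-1)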
Markov Chains

Definition

[Figure: a Markov chain on states 1 through 5 with its transition probabilities.]

A Markov chain is a random walk in a state space, which we will assume is finite, say {1, 2, ..., M}. We let Xt denote which element of the state space the walk is visiting at time t. The Markov chain is the sequence of random variables tracking where the walk is at all points in time, X0, X1, X2, .... By definition, a Markov chain must satisfy the Markov property: if you want to predict where the chain will be at a future time and you know the present state, then the entire past history is irrelevant. Given the present, the past and future are conditionally independent. In symbols,
    P(Xn+1 = j | X0 = i0, X1 = i1, ..., Xn = i) = P(Xn+1 = j | Xn = i).

State Properties
A state is either recurrent or transient.
- If you start at a recurrent state, then you will always return back to that state at some point in the future. ("You can check-out any time you like, but you can never leave.")
- Otherwise you are at a transient state. There is some positive probability that once you leave you will never return. ("You don't have to go home, but you can't stay here.")
A state is either periodic or aperiodic.
- If you start at a periodic state of period k, then the GCD of the possible numbers of steps it would take to return back is k > 1.
- Otherwise you are at an aperiodic state. The GCD of the possible numbers of steps it would take to return back is 1.
Transition Matrix
Let the state space be {1, 2, ..., M}. The transition matrix Q is the M × M matrix where element qij is the probability that the chain goes from state i to state j in one step:
    qij = P(Xn+1 = j | Xn = i).
To find the probability that the chain goes from state i to state j in exactly m steps, take the (i, j) element of Q^m:
    qij^(m) = P(Xn+m = j | Xn = i).
If X0 is distributed according to the row vector PMF p⃗, i.e., pj = P(X0 = j), then the PMF of Xn is p⃗Q^n.

Chain Properties
A chain is irreducible if you can get from anywhere to anywhere. If a chain (on a finite state space) is irreducible, then all of its states are recurrent. A chain is periodic if any of its states are periodic, and is aperiodic if none of its states are periodic. In an irreducible chain, all states have the same period.

A chain is reversible with respect to s⃗ if si qij = sj qji for all i, j. Examples of reversible chains include any chain with qij = qji, with s⃗ = (1/M, 1/M, ..., 1/M), and random walk on an undirected network.

Stationary Distribution
Let us say that the vector s⃗ = (s1, s2, ..., sM) is a PMF (written as a row vector). We will call s⃗ the stationary distribution for the chain if s⃗Q = s⃗. As a consequence, if Xt has the stationary distribution, then all future Xt+1, Xt+2, ... also have the stationary distribution.

For irreducible, aperiodic chains, the stationary distribution exists, is unique, and si is the long-run probability of the chain being at state i. The expected number of steps to return to i starting from i is 1/si.

To find the stationary distribution, you can solve the matrix equation (Qᵀ − I) sᵀ = 0, where ᵀ denotes transpose. The stationary distribution is uniform if the columns of Q sum to 1.

Reversibility Condition Implies Stationarity
If you have a PMF s⃗ and a Markov chain with transition matrix Q, then si qij = sj qji for all states i, j implies that s⃗ is stationary.

Random Walk on an Undirected Network

[Figure: an undirected network on nodes 1 through 5; the degree sequence is (3, 3, 2, 4, 2).]

If you have a collection of nodes, pairs of which can be connected by undirected edges, and a Markov chain is run by going from the current node to a uniformly random node that is connected to it by an edge, then this is a random walk on an undirected network. The stationary distribution of this chain is proportional to the degree sequence (this is the sequence of degrees, where the degree of a node is how many edges are attached to it). For example, the stationary distribution of the random walk on the network shown above is proportional to (3, 3, 2, 4, 2), so it is (3/14, 3/14, 2/14, 4/14, 2/14).
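A small R sketch: for an arbitrary 3-state transition matrix Q, the stationary distribution can be found as the left eigenvector of Q with eigenvalue 1, normalized to sum to 1:

    Q <- matrix(c(0.5, 0.5, 0.0,
                  0.2, 0.3, 0.5,
                  0.0, 0.4, 0.6), nrow = 3, byrow = TRUE)
    e <- eigen(t(Q))                                  # left eigenvectors of Q
    s <- Re(e$vectors[, which.min(abs(e$values - 1))])
    s <- s / sum(s)                                   # normalize to a PMF
    s                                                 # stationary distribution
    s %*% Q                                           # equals s, so s Q = s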

Continuous Distributions

Uniform Distribution
Let us say that U is distributed Unif(a, b). We know the following:

Properties of the Uniform
For a Uniform distribution, the probability of a draw from any interval within the support is proportional to the length of the interval. See Universality of Uniform and Order Statistics for other properties.

Example
William throws darts really badly, so his darts are uniform over the whole room because they're equally likely to appear anywhere. William's darts have a Uniform distribution on the surface of the room. The Uniform is the only distribution where the probability of hitting in any specific region is proportional to the length/area/volume of that region, and where the density of occurrence in any one specific spot is constant throughout the whole support.

Normal Distribution
Let us say that X is distributed N(µ, σ²). We know the following:

Central Limit Theorem
The Normal distribution is ubiquitous because of the Central Limit Theorem, which states that the sample mean of i.i.d. r.v.s will approach a Normal distribution as the sample size grows, regardless of the initial distribution.

Location-Scale Transformation
Every time we shift a Normal r.v. (by adding a constant) or rescale a Normal (by multiplying by a constant), we change it to another Normal r.v. For any Normal X ~ N(µ, σ²), we can transform it to the standard N(0, 1) by the following transformation:
    Z = (X − µ)/σ ~ N(0, 1).

Standard Normal
The Standard Normal, Z ~ N(0, 1), has mean 0 and variance 1. Its CDF is denoted by Φ.

Exponential Distribution
Let us say that X is distributed Expo(λ). We know the following:

Story
You're sitting on an open meadow right before the break of dawn, wishing that airplanes in the night sky were shooting stars, because you could really use a wish right now. You know that shooting stars come on average every 15 minutes, but a shooting star is not "due" to come just because you've waited so long. Your waiting time is memoryless; the additional time until the next shooting star comes does not depend on how long you've waited already.

Example
The waiting time until the next shooting star is distributed Expo(4) hours. Here λ = 4 is the rate parameter, since shooting stars arrive at a rate of 1 per 1/4 hour on average. The expected time until the next shooting star is 1/λ = 1/4 hour.

Expos as a rescaled Expo(1)
    Y ~ Expo(λ)  ⟹  X = λY ~ Expo(1)

Memorylessness
The Exponential Distribution is the only continuous memoryless distribution. The memoryless property says that for X ~ Expo(λ) and any positive numbers s and t,
    P(X > s + t | X > s) = P(X > t).
Equivalently,
    X − a | (X > a) ~ Expo(λ).
For example, a product with an Expo(λ) lifetime is always "as good as new" (it doesn't experience wear and tear). Given that the product has survived a years, the additional time that it will last is still Expo(λ).

Min of Expos
If we have independent Xi ~ Expo(λi), then min(X1, ..., Xk) ~ Expo(λ1 + λ2 + ··· + λk).

Max of Expos
If we have i.i.d. Xi ~ Expo(λ), then max(X1, ..., Xk) has the same distribution as Y1 + Y2 + ··· + Yk, where Yj ~ Expo(jλ) and the Yj are independent.
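A quick numerical check of the memoryless property using pexp (the values of λ, s, and t are arbitrary):

    lambda <- 4; s <- 0.3; t <- 0.2
    (1 - pexp(s + t, rate = lambda)) / (1 - pexp(s, rate = lambda))   # P(X > s + t | X > s)
    1 - pexp(t, rate = lambda)                    # P(X > t); both equal exp(-lambda * t)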

Gamma Distribution

[Figure: Gamma PDFs for Gamma(3, 1), Gamma(3, 0.5), Gamma(10, 1), and Gamma(5, 0.5).]

Let us say that X is distributed Gamma(a, λ). We know the following:

Story
You sit waiting for shooting stars, where the waiting time for a star is distributed Expo(λ). You want to see n shooting stars before you go home. The total waiting time for the nth shooting star is Gamma(n, λ).

Example
You are at a bank, and there are 3 people ahead of you. The serving time for each person is Exponential with mean 2 minutes. Only one person at a time can be served. The distribution of your waiting time until it's your turn to be served is Gamma(3, 1/2).

Beta Distribution

[Figure: Beta PDFs for Beta(0.5, 0.5), Beta(2, 1), Beta(2, 8), and Beta(5, 5).]

Conjugate Prior of the Binomial
In the Bayesian approach to statistics, parameters are viewed as random variables, to reflect our uncertainty. The prior for a parameter is its distribution before observing data. The posterior is the distribution for the parameter after observing data. Beta is the conjugate prior of the Binomial because if you have a Beta-distributed prior on p in a Binomial, then the posterior distribution on p given the Binomial data is also Beta-distributed. Consider the following two-level model:
    X | p ~ Bin(n, p)
    p ~ Beta(a, b)
Then after observing X = x, we get the posterior distribution
    p | (X = x) ~ Beta(a + x, b + n − x).

Order statistics of the Uniform
See Order Statistics.
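A minimal R check of the conjugacy claim: the prior-times-likelihood curve, normalized numerically on a grid, matches the Beta(a + x, b + n − x) density (the prior parameters and data below are arbitrary):

    a <- 2; b <- 8; n <- 20; x <- 13            # arbitrary prior parameters and data
    p <- seq(0.001, 0.999, by = 0.001)
    unnorm <- dbeta(p, a, b) * dbinom(x, n, p)  # prior times likelihood on a grid
    post <- unnorm / sum(unnorm * 0.001)        # numerically normalized posterior density
    max(abs(post - dbeta(p, a + x, b + n - x))) # small (grid error only)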
Beta-Gamma relationship
If X ~ Gamma(a, λ) and Y ~ Gamma(b, λ), with X ⊥⊥ Y, then
- X/(X + Y) ~ Beta(a, b)
- X + Y ⊥⊥ X/(X + Y)
This is known as the bank–post office result.

χ² (Chi-Square) Distribution
Let us say that X is distributed χ²_n. We know the following:

Story
A Chi-Square(n) is the sum of the squares of n independent standard Normal r.v.s.

Properties and Representations
    X is distributed as Z1² + Z2² + ··· + Zn² for i.i.d. Zi ~ N(0, 1)
    X ~ Gamma(n/2, 1/2)

Discrete Distributions

Distributions for four sampling schemes:

                           Replace           No Replace
    Fixed # trials (n)     Binomial          HGeom
                           (Bern if n = 1)
    Draw until r success   NBin              NHGeom
                           (Geom if r = 1)

Bernoulli Distribution
The Bernoulli distribution is the simplest case of the Binomial distribution, where we only have one trial (n = 1). Let us say that X is distributed Bern(p). We know the following:

Story
A trial is performed with probability p of "success", and X is the indicator of success: 1 means success, 0 means failure.

Example
Let X be the indicator of Heads for a fair coin toss. Then X ~ Bern(1/2). Also, 1 − X ~ Bern(1/2) is the indicator of Tails.

Binomial Distribution

[Figure: PMF of Bin(10, 1/2).]

Let us say that X is distributed Bin(n, p). We know the following:

Story
X is the number of "successes" that we will achieve in n independent trials, where each trial is either a success or a failure, each with the same probability p of success. We can also write X as a sum of multiple independent Bern(p) random variables. Let X ~ Bin(n, p) and Xj ~ Bern(p), where all of the Bernoullis are independent. Then
    X = X1 + X2 + X3 + ··· + Xn.

Example
If Jeremy Lin makes 10 free throws and each one independently has a 3/4 chance of getting in, then the number of free throws he makes is distributed Bin(10, 3/4).

Properties
Let X ~ Bin(n, p), Y ~ Bin(m, p) with X ⊥⊥ Y.
- Redefine success: n − X ~ Bin(n, 1 − p)
- Sum: X + Y ~ Bin(n + m, p)
- Conditional: X | (X + Y = r) ~ HGeom(n, m, r)
- Binomial-Poisson Relationship: Bin(n, p) is approximately Pois(np) if n is large and p is small.
- Binomial-Normal Relationship: Bin(n, p) is approximately N(np, np(1 − p)) if n is large and p is not near 0 or 1.
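A quick R comparison of the Binomial-Poisson relationship above, for an arbitrary large n and small p:

    n <- 1000; p <- 0.003
    k <- 0:10
    round(cbind(binomial = dbinom(k, n, p), poisson = dpois(k, n * p)), 5)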
Geometric Distribution
Let us say that X is distributed Geom(p). We know the following:

Story
X is the number of "failures" that we will achieve before we achieve our first success. Our successes have probability p. (Note that this convention counts failures, so the support is {0, 1, 2, ...}; it differs from the Geo(p) convention on {1, 2, ...} used earlier in this sheet.)

Example
If each pokeball we throw has probability 1/10 to catch Mew, the number of failed pokeballs will be distributed Geom(1/10).

First Success Distribution
Equivalent to the Geometric distribution, except that it includes the first success in the count. This is 1 more than the number of failures. If X ~ FS(p) then E(X) = 1/p.

Negative Binomial Distribution
Let us say that X is distributed NBin(r, p). We know the following:

Story
X is the number of "failures" that we will have before we achieve our rth success. Our successes have probability p.

Example
Thundershock has 60% accuracy and can faint a wild Raticate in 3 hits. The number of misses before Pikachu faints Raticate with Thundershock is distributed NBin(3, 0.6).

Hypergeometric Distribution
Let us say that X is distributed HGeom(w, b, n). We know the following:

Story
In a population of w desired objects and b undesired objects, X is the number of "successes" we will have in a draw of n objects, without replacement. The draw of n objects is assumed to be a simple random sample (all sets of n objects are equally likely).

Examples
Here are some HGeom examples.
- Let's say that we have only b Weedles (failure) and w Pikachus (success) in Viridian Forest. We encounter n Pokemon in the forest, and X is the number of Pikachus in our encounters.
- The number of Aces in a 5 card hand.
- You have w white balls and b black balls, and you draw n balls. You will draw X white balls.
- You have w white balls and b black balls, and you draw n balls without replacement. The number of white balls in your sample is HGeom(w, b, n); the number of black balls is HGeom(b, w, n).
- Capture-recapture: A forest has N elk; you capture n of them, tag them, and release them. Then you recapture a new sample of size m. The number of tagged elk in the new sample is HGeom(n, N − n, m).

Poisson Distribution
Let us say that X is distributed Pois(λ). We know the following:

Story
There are rare events (low probability events) that occur many different ways (high possibilities of occurrences) at an average rate of λ occurrences per unit space or time. The number of events that occur in that unit of space or time is X.

Example
A certain busy intersection has an average of 2 accidents per month. Since an accident is a low probability event that can happen many different ways, it is reasonable to model the number of accidents in a month at that intersection as Pois(2). Then the number of accidents that happen in two months at that intersection is distributed Pois(4).

Properties
Let X ~ Pois(λ1) and Y ~ Pois(λ2), with X ⊥⊥ Y.
1. Sum: X + Y ~ Pois(λ1 + λ2)
2. Conditional: X | (X + Y = n) ~ Bin(n, λ1/(λ1 + λ2))
3. Chicken-egg: If there are Z ~ Pois(λ) items and we randomly and independently "accept" each item with probability p, then the number of accepted items Z1 ~ Pois(λp), the number of rejected items Z2 ~ Pois(λ(1 − p)), and Z1 ⊥⊥ Z2.

Multivariate Distributions

Multinomial Distribution
Let us say that the vector X⃗ = (X1, X2, X3, ..., Xk) ~ Mult_k(n, p⃗), where p⃗ = (p1, p2, ..., pk).

Story
We have n items, which can fall into any one of k buckets independently with the probabilities p⃗ = (p1, p2, ..., pk).

Example
Let us assume that every year, 100 students in the Harry Potter Universe are randomly and independently sorted into one of four houses with equal probability. The number of people in each of the houses is distributed Mult_4(100, p⃗), where p⃗ = (0.25, 0.25, 0.25, 0.25). Note that X1 + X2 + ··· + X4 = 100, and the counts are dependent.

Joint PMF
For n = n1 + n2 + ··· + nk,
    P(X⃗ = n⃗) = n!/(n1! n2! ··· nk!) p1^{n1} p2^{n2} ··· pk^{nk}.

Marginal PMF, Lumping, and Conditionals
Marginally, Xi ~ Bin(n, pi), since we can define "success" to mean category i. If you lump together multiple categories in a Multinomial, then it is still Multinomial. For example, Xi + Xj ~ Bin(n, pi + pj) for i ≠ j, since we can define "success" to mean being in category i or j. Similarly, if k = 6 and we lump categories 1–2 and lump categories 3–5, then
    (X1 + X2, X3 + X4 + X5, X6) ~ Mult_3(n, (p1 + p2, p3 + p4 + p5, p6)).
Conditioning on some Xk also still gives a Multinomial:
    X1, ..., Xk−1 | Xk = nk ~ Mult_{k−1}(n − nk, (p1/(1 − pk), ..., pk−1/(1 − pk))).

Variances and Covariances
We have Xi ~ Bin(n, pi) marginally, so Var(Xi) = npi(1 − pi). Also, Cov(Xi, Xj) = −npipj for i ≠ j.

Multivariate Uniform Distribution
See the univariate Uniform for stories and examples. For the 2D Uniform on some region, probability is proportional to area. Every point in the support has equal density, of value 1/(area of region). For the 3D Uniform, probability is proportional to volume.
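A short R simulation of the Multinomial marginal and covariance facts using rmultinom (the n and p⃗ below are arbitrary):

    set.seed(5)
    n <- 100; p <- c(0.2, 0.3, 0.5)
    draws <- rmultinom(1e4, size = n, prob = p)     # each column is one Mult_3(n, p) draw
    rowMeans(draws); n * p                          # marginal means n p_i
    var(draws[1, ]); n * p[1] * (1 - p[1])          # Var(X_1) = n p_1 (1 - p_1)
    cov(draws[1, ], draws[2, ]); -n * p[1] * p[2]   # Cov(X_1, X_2) = -n p_1 p_2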
Multivariate Normal (MVN) Distribution
A vector X⃗ = (X1, X2, ..., Xk) is Multivariate Normal if every linear combination is Normally distributed, i.e., t1X1 + t2X2 + ··· + tkXk is Normal for any constants t1, t2, ..., tk. The parameters of the Multivariate Normal are the mean vector µ⃗ = (µ1, µ2, ..., µk) and the covariance matrix whose (i, j) entry is Cov(Xi, Xj).

Properties
The Multivariate Normal has the following properties.
- Any subvector is also MVN.
- If any two elements within an MVN are uncorrelated, then they are independent.
- The joint PDF of a Bivariate Normal (X, Y) with N(0, 1) marginal distributions and correlation ρ ∈ (−1, 1) is
      fX,Y(x, y) = (1/(2πτ)) exp(−(x² + y² − 2ρxy)/(2τ²)),
  with τ = √(1 − ρ²).

Distribution Properties

Important CDFs
    Standard Normal:   Φ
    Exponential(λ):    F(x) = 1 − e^{−λx}, for x ∈ (0, ∞)
    Uniform(0, 1):     F(x) = x, for x ∈ (0, 1)

Convolutions of Random Variables
A convolution of n random variables is simply their sum. For the following results, let X and Y be independent.
1. X ~ Pois(λ1), Y ~ Pois(λ2) ⟹ X + Y ~ Pois(λ1 + λ2)
2. X ~ Bin(n1, p), Y ~ Bin(n2, p) ⟹ X + Y ~ Bin(n1 + n2, p). Bin(n, p) can be thought of as a sum of i.i.d. Bern(p) r.v.s.
3. X ~ Gamma(a1, λ), Y ~ Gamma(a2, λ) ⟹ X + Y ~ Gamma(a1 + a2, λ). Gamma(n, λ) with n an integer can be thought of as a sum of i.i.d. Expo(λ) r.v.s.
4. X ~ NBin(r1, p), Y ~ NBin(r2, p) ⟹ X + Y ~ NBin(r1 + r2, p). NBin(r, p) can be thought of as a sum of i.i.d. Geom(p) r.v.s.
5. X ~ N(µ1, σ1²), Y ~ N(µ2, σ2²) ⟹ X + Y ~ N(µ1 + µ2, σ1² + σ2²)

Special Cases of Distributions
1. Bin(1, p) ~ Bern(p)
2. Beta(1, 1) ~ Unif(0, 1)
3. Gamma(1, λ) ~ Expo(λ)
4. χ²_n ~ Gamma(n/2, 1/2)
5. NBin(1, p) ~ Geom(p)

Inequalities
1. Cauchy-Schwarz: |E(XY)| ≤ √(E(X²)E(Y²))
2. Markov: P(X ≥ a) ≤ E|X|/a for a > 0
3. Chebyshev: P(|X − µ| ≥ a) ≤ σ²/a² for E(X) = µ, Var(X) = σ²
4. Jensen: E(g(X)) ≥ g(E(X)) for g convex; reverse if g is concave

Formulas

Geometric Series
    1 + r + r² + ··· + r^{n−1} = Σ_{k=0}^{n−1} r^k = (1 − r^n)/(1 − r)
    1 + r + r² + ··· = 1/(1 − r)   if |r| < 1

Exponential Function (e^x)
    e^x = Σ_{n=0}^∞ x^n/n! = 1 + x + x²/2! + x³/3! + ··· = lim_{n→∞} (1 + x/n)^n

Gamma and Beta Integrals
You can sometimes solve complicated-looking integrals by pattern-matching to a gamma or beta integral:
    ∫_0^∞ x^{t−1} e^{−x} dx = Γ(t)        ∫_0^1 x^{a−1} (1 − x)^{b−1} dx = Γ(a)Γ(b)/Γ(a + b)
Also, Γ(a + 1) = aΓ(a), and Γ(n) = (n − 1)! if n is a positive integer.

Euler's Approximation for Harmonic Sums
    1 + 1/2 + 1/3 + ··· + 1/n ≈ log n + 0.577...

Stirling's Approximation for Factorials
    n! ≈ √(2πn) (n/e)^n

Miscellaneous Definitions

Medians and Quantiles
Let X have CDF F. Then X has median m if F(m) ≥ 0.5 and P(X ≥ m) ≥ 0.5. For X continuous, m satisfies F(m) = 1/2. In general, the ath quantile of X is min{x : F(x) ≥ a}; the median is the case a = 1/2.

log
Statisticians generally use log to refer to natural log (i.e., base e).

i.i.d. r.v.s
Independent, identically distributed random variables.
Distributions in R

    Command              What it does
    help(distributions)  shows documentation on distributions
    dbinom(k,n,p)        PMF P(X = k) for X ~ Bin(n, p)
    pbinom(x,n,p)        CDF P(X ≤ x) for X ~ Bin(n, p)
    qbinom(a,n,p)        ath quantile for X ~ Bin(n, p)
    rbinom(r,n,p)        vector of r i.i.d. Bin(n, p) r.v.s
    dgeom(k,p)           PMF P(X = k) for X ~ Geom(p)
    dhyper(k,w,b,n)      PMF P(X = k) for X ~ HGeom(w, b, n)
    dnbinom(k,r,p)       PMF P(X = k) for X ~ NBin(r, p)
    dpois(k,r)           PMF P(X = k) for X ~ Pois(r)
    dbeta(x,a,b)         PDF f(x) for X ~ Beta(a, b)
    dchisq(x,n)          PDF f(x) for X ~ χ²_n
    dexp(x,b)            PDF f(x) for X ~ Expo(b)
    dgamma(x,a,r)        PDF f(x) for X ~ Gamma(a, r)
    dlnorm(x,m,s)        PDF f(x) for X ~ LN(m, s²)
    dnorm(x,m,s)         PDF f(x) for X ~ N(m, s²)
    dt(x,n)              PDF f(x) for X ~ t_n
    dunif(x,a,b)         PDF f(x) for X ~ Unif(a, b)

The table above gives R commands for working with various named distributions. Commands analogous to pbinom, qbinom, and rbinom work for the other distributions in the table. For example, pnorm, qnorm, and rnorm can be used to get the CDF, quantiles, and random generation for the Normal. For the Multinomial, dmultinom can be used for calculating the joint PMF and rmultinom can be used for generating random vectors. For the Multivariate Normal, after installing and loading the mvtnorm package, dmvnorm can be used for calculating the joint PDF and rmvnorm can be used for generating random vectors.
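For example, the d/p/q/r pattern for the Normal:

    dnorm(1.96)        # PDF f(x) at x = 1.96 for N(0, 1)
    pnorm(1.96)        # CDF P(X <= 1.96), about 0.975
    qnorm(0.975)       # quantile: about 1.96
    rnorm(3)           # three random draws from N(0, 1)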
Recommended Resources
- Introduction to Probability Book (http://bit.ly/introprobability)
- Stat 110 Online (http://stat110.net)
- Stat 110 Quora Blog (https://stat110.quora.com/)
- Quora Probability FAQ (http://bit.ly/probabilityfaq)
- R Studio (https://www.rstudio.com)
- LaTeX File (github.com/mynameisjanus/6431xProbability)

Please share this cheatsheet with friends!
Table of Distributions

Bernoulli, Bern(p)
    PMF: P(X = 1) = p, P(X = 0) = q = 1 − p
    Expected value: p        Variance: pq        MGF: q + pe^t

Binomial, Bin(n, p)
    PMF: P(X = k) = (n choose k) p^k q^{n−k},  k ∈ {0, 1, 2, ..., n}
    Expected value: np       Variance: npq       MGF: (q + pe^t)^n

Geometric, Geom(p)
    PMF: P(X = k) = q^k p,  k ∈ {0, 1, 2, ...}
    Expected value: q/p      Variance: q/p²      MGF: p/(1 − qe^t), for qe^t < 1

Negative Binomial, NBin(r, p)
    PMF: P(X = n) = (r + n − 1 choose r − 1) p^r q^n,  n ∈ {0, 1, 2, ...}
    Expected value: rq/p     Variance: rq/p²     MGF: (p/(1 − qe^t))^r, for qe^t < 1

Hypergeometric, HGeom(w, b, n)
    PMF: P(X = k) = (w choose k)(b choose n − k) / (w + b choose n),  k ∈ {0, 1, 2, ..., n}
    Expected value: µ = nw/(b + w)
    Variance: ((w + b − n)/(w + b − 1)) n (µ/n)(1 − µ/n)
    MGF: messy

Poisson, Pois(λ)
    PMF: P(X = k) = e^{−λ} λ^k / k!,  k ∈ {0, 1, 2, ...}
    Expected value: λ        Variance: λ         MGF: e^{λ(e^t − 1)}

Uniform, Unif(a, b)
    PDF: f(x) = 1/(b − a),  x ∈ (a, b)
    Expected value: (a + b)/2    Variance: (b − a)²/12    MGF: (e^{tb} − e^{ta})/(t(b − a))

Normal, N(µ, σ²)
    PDF: f(x) = (1/(σ√(2π))) e^{−(x − µ)²/(2σ²)},  x ∈ (−∞, ∞)
    Expected value: µ        Variance: σ²        MGF: e^{tµ + σ²t²/2}

Exponential, Expo(λ)
    PDF: f(x) = λe^{−λx},  x ∈ (0, ∞)
    Expected value: 1/λ      Variance: 1/λ²      MGF: λ/(λ − t), for t < λ

Gamma, Gamma(a, λ)
    PDF: f(x) = (1/Γ(a)) (λx)^a e^{−λx} (1/x),  x ∈ (0, ∞)
    Expected value: a/λ      Variance: a/λ²      MGF: (λ/(λ − t))^a, for t < λ

Beta, Beta(a, b)
    PDF: f(x) = (Γ(a + b)/(Γ(a)Γ(b))) x^{a−1} (1 − x)^{b−1},  x ∈ (0, 1)
    Expected value: µ = a/(a + b)    Variance: µ(1 − µ)/(a + b + 1)    MGF: messy

Log-Normal, LN(µ, σ²)
    PDF: f(x) = (1/(xσ√(2π))) e^{−(log x − µ)²/(2σ²)},  x ∈ (0, ∞)
    Expected value: θ = e^{µ + σ²/2}    Variance: θ²(e^{σ²} − 1)    MGF: doesn't exist

Chi-Square, χ²_n
    PDF: f(x) = (1/(2^{n/2} Γ(n/2))) x^{n/2−1} e^{−x/2},  x ∈ (0, ∞)
    Expected value: n        Variance: 2n        MGF: (1 − 2t)^{−n/2}, for t < 1/2

Student-t, t_n
    PDF: f(x) = (Γ((n + 1)/2)/(√(nπ) Γ(n/2))) (1 + x²/n)^{−(n+1)/2},  x ∈ (−∞, ∞)
    Expected value: 0 if n > 1    Variance: n/(n − 2) if n > 2    MGF: doesn't exist