
Introduction to Stochastic Population Models

Thomas E. Wehrly
Department of Statistics
Texas A&M University

June 13, 2005

Mathematics 669

Contents
1 Probability and Random Variables 1
1.1 The Basic Ideas of Probability . . . . . . . . . . . . . . . 1
1.1.1 Sample Spaces and Events . . . . . . . . . . . . . 1
1.1.2 Conditional Probability . . . . . . . . . . . . . . . 4
1.1.3 Independence . . . . . . . . . . . . . . . . . . . 6
1.2 Random Variables . . . . . . . . . . . . . . . . . . . . . 6
1.3 Probability Distributions of a Discrete R.V. . . . . . . . . . 8
1.3.1 Parameters of Probability Distributions . . . . . . . . 9
1.3.2 Expected Values of Discrete RV . . . . . . . . . . . 10
1.4 Continuous Random Variables . . . . . . . . . . . . . . . 12
1.4.1 Percentiles . . . . . . . . . . . . . . . . . . . . . 14
1.4.2 Expected Values, Mean and Variance . . . . . . . . 14
1.5 Joint Probability Distributions . . . . . . . . . . . . . . . . 15
1.5.1 Conditional Distributions . . . . . . . . . . . . . . 16
1.5.2 Conditional Distributions for Bivariate Continuous RVs 17
1.6 Some Special Cases . . . . . . . . . . . . . . . . . . . . 19
1.6.1 Poisson Distribution . . . . . . . . . . . . . . . . . 19
1.6.2 The Poisson Process . . . . . . . . . . . . . . . . 20
1.6.3 Normal Distribution . . . . . . . . . . . . . . . . . 24
1.7 Gamma Distribution . . . . . . . . . . . . . . . . . . . . 26
1.7.1 Distribution of Elapsed Time in the Poisson Process . 29

2 An Introduction to Stochastic Population Models 30


2.1 Some Ecological Examples . . . . . . . . . . . . . . . . . 30
2.2 A Quick Contrast Between Deterministic and Stochastic Models 31
2.2.1 Deterministic Model . . . . . . . . . . . . . . . . . 31
2.2.2 Stochastic Model . . . . . . . . . . . . . . . . . . 32

Table of Contents. Copyright © 2005 by Thomas E. Wehrly.

3 Basic Methods for Single Population Models 39


3.1 Moments of X(t) . . . . . . . . . . . . . . . . . . . . . 41
3.2 Simulation of the Stochastic Process . . . . . . . . . . . . 42
3.3 Kolmogorov Differential Equations for Probability Functions . 44
3.4 Generating Functions . . . . . . . . . . . . . . . . . . . 47
3.5 PDEs for Cumulant Generating Functions . . . . . . . . . . 50

4 Some Linear One-Population Models 55


4.1 Linear Immigration-Death Models . . . . . . . . . . . . . . 55
4.1.1 Deterministic Model . . . . . . . . . . . . . . . . . 55
4.1.2 Stochastic Model . . . . . . . . . . . . . . . . . . 57
4.1.3 Application to the AHB Population Dynamics . . . . . 58
4.2 Linear Birth-Immigration-Death Models . . . . . . . . . . . 62
4.2.1 Solution to the Deterministic Model . . . . . . . . . 62
4.2.2 Probability Distributions for the Stochastic Model . . 63
4.2.3 Generating Functions . . . . . . . . . . . . . . . . 64
4.2.4 Application to AHB . . . . . . . . . . . . . . . . . 66
4.2.5 Simulation of the LBID Process . . . . . . . . . . . 67

5 Some Nonlinear One-Population Models 70


5.1 Nonlinear Birth–Death Models . . . . . . . . . . . . . . . 70
5.1.1 Deterministic Model . . . . . . . . . . . . . . . . . 70
5.1.2 Probability Distributions for the Stochastic Model . . 71
5.1.3 Generating Functions and Cumulants . . . . . . . . 72
5.1.4 Application to AHB Population Dynamics . . . . . . 73
5.2 Nonlinear Birth-Immigration-Death Models . . . . . . . . . 75
5.2.1 Deterministic Model . . . . . . . . . . . . . . . . . 75
5.2.2 Simulation of the Stochastic Model . . . . . . . . . 76
5.2.3 Application to AHB Population Dynamics . . . . . . 76
5.2.4 Summary of Single Population Models . . . . . . . . 77


6 Models for Multiple Populations 78


6.1 Compartmental Models . . . . . . . . . . . . . . . . . . . 78
6.1.1 The Deterministic Compartment Model . . . . . . . 79
6.1.2 Stochastic Compartmental Models . . . . . . . . . 80
6.2 Basic Methods for Two-Population Models . . . . . . . . . 82
6.2.1 A Birth-Immigration-Death-Migration Model . . . . . 82
6.3 Simulation of Predator-Prey Model . . . . . . . . . . . . . 84
6.4 Simulation of a Competition Model . . . . . . . . . . . . . 85


1 Probability and Random Variables

The models that you have seen thus far are deterministic models. For
any time t, there is a unique solution X(t). On the other hand,
stochastic models result in a distribution of possible values X(t) at a
time t. To understand the properties of stochastic models, we need to
use the language of probability and random variables.

1.1 The Basic Ideas of Probability

1.1.1 Sample Spaces and Events

Probability: Probability is used to make inferences about populations.

Experiment: Some process whose outcome is not known with certainty.

Sample Space: The collection of all possible outcomes of an experiment or process; denoted S.

Event: Any collection of possible outcomes of an experiment; denoted A, B, etc.
Relative Frequency Interpretation of Probability

A random experiment is carried out a large number (n) of times and the number (n(A)) of times that event A occurs is recorded. Then the proportion of times that A occurs will tend to the probability of A:

n(A)/n −→ P(A)


Illustration of Long-Run Relative Frequency

Suppose a die is tossed repeatedly, and we count the number of times that the toss results in six spots. We then plot the proportion of times that the toss results in a six versus the number of tosses.

n         n(A)      n(A)/n
10        2         0.20000
100       23        0.23000
1000      160       0.16000
10000     1639      0.16390
100000    16618     0.16618

[Figure: relative frequency of tosses of a die resulting in a six, plotted against the number of tosses (up to 100000); the relative frequency settles near 1/6.]
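The long-run behavior in the table above can be reproduced with a short simulation. This is a sketch, not part of the slides: the function name, the seed, and the sample sizes are illustrative choices.

```python
import random

def relative_frequency_of_six(n_tosses, seed=0):
    """Toss a fair die n_tosses times; return the proportion of sixes."""
    rng = random.Random(seed)
    sixes = sum(1 for _ in range(n_tosses) if rng.randint(1, 6) == 6)
    return sixes / n_tosses

# As n grows, n(A)/n settles near P(A) = 1/6.
for n in (10, 100, 1000, 10000, 100000):
    print(n, relative_frequency_of_six(n))
```

Running this with different seeds gives different early proportions, but the value at n = 100000 is always close to 1/6, which is exactly the relative frequency interpretation of probability.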


Axioms:

1. P(A) ≥ 0 for any event A

2. P(S) = 1

3. For any collection A1, A2, . . . of mutually exclusive events (Ai ∩ Aj = ∅ for i ≠ j),

   P(A1 ∪ A2 ∪ · · ·) = ∑_{i=1}^{∞} P(Ai)

Properties:

• 0 ≤ P(A) ≤ 1

• P(∅) = 0

• Probability an event does not occur: P(A′) = 1 − P(A).

• P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

• If A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B)


1.1.2 Conditional Probability

For any two events A and B with P(B) > 0, the conditional probability of A given that B has occurred is:

P(A|B) = P(A ∩ B) / P(B)

The multiplication rule for P(A ∩ B) is:

P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A)

Law of Total Probability

Let A1, . . . , An be mutually exclusive and exhaustive events. Exhaustive means that

A1 ∪ A2 ∪ · · · ∪ An = S.

Assume also that P(Aj) > 0 for each j. Then for any event B,

P(B) = ∑_{j=1}^{n} P(B|Aj)P(Aj)

If P(B) > 0, this law implies Bayes' Theorem:

P(Ak|B) = P(B|Ak)P(Ak) / ∑_{j=1}^{n} P(B|Aj)P(Aj)


Example: Diagnostic Testing.

Define the two events:

A = event that disease is present
B = event that diagnostic test is positive

We usually know the following:

• Prevalence of disease, say P(A) = 0.001
• Sensitivity of test, say P(B|A) = 0.95
• Specificity of test, say P(B′|A′) = 0.90

We want to know P(A|B) or P(A′|B′).

Solution:

P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A′)P(A′)]
       = (0.95)(0.001) / [(0.95)(0.001) + (1 − 0.90)(1 − 0.001)]
       = 0.0094

P(A′|B′) = P(B′|A′)P(A′) / [P(B′|A′)P(A′) + P(B′|A)P(A)]
         = (0.90)(0.999) / [(0.90)(0.999) + (1 − 0.95)(0.001)]
         = 0.9999444
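The diagnostic-testing calculation is easy to mechanize. The two helper names below are illustrative, not from the slides; the formulas are exactly the Bayes' theorem expressions used above.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(A|B): probability of disease given a positive test."""
    p_b = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_b

def negative_predictive_value(prevalence, sensitivity, specificity):
    """P(A'|B'): probability of no disease given a negative test."""
    p_not_b = specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
    return specificity * (1 - prevalence) / p_not_b

ppv = positive_predictive_value(0.001, 0.95, 0.90)   # ≈ 0.0094
npv = negative_predictive_value(0.001, 0.95, 0.90)   # ≈ 0.9999444
```

Note how strongly the low prevalence drives the answer: even with a sensitive test, a positive result still leaves less than a 1% chance of disease.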


1.1.3 Independence

Two events A and B are independent if

P(A ∩ B) = P(A)P(B).

They are dependent otherwise.

When P(A) > 0 and P(B) > 0, this definition is equivalent to

P(A|B) = P(A) and P(B|A) = P(B).

Extension of Independence to Several Events

We say that A1, A2, . . . , An are mutually independent if for every subset {i1, . . . , ik} (k ≥ 2), we have

P(Ai1 ∩ Ai2 ∩ · · · ∩ Aik) = P(Ai1)P(Ai2) · · · P(Aik)

We say that A1, A2, . . . , An are pairwise independent if

P(Ai ∩ Aj) = P(Ai)P(Aj)

for every pair (i, j), i ≠ j.

Pairwise independence does not imply mutual independence.


1.2 Random Variables

Random variables help us to make a link between probability and numbers that we observe as data.

A random variable (rv) is a numerical-valued function defined on a sample space. A random variable X "maps" an outcome in a sample space to a numerical value.

The probability that a rv X takes a value in the set A is given by

P[X ∈ A] = P[X⁻¹(A)].

We use capital letters such as X or Y to denote random variables. Let s be an elementary outcome. A value, X(s), of X is denoted x.

A random variable is discrete if it can take on a finite or countable number of values.

A continuous random variable takes on an uncountable number of values.


1.3 Probability Distributions of a Discrete R.V.

The probability distribution of a discrete r.v. is a list of the distinct values x of X together with the associated probabilities:

p(x) = P(X = x)

By P(X = x), we mean P(Ax) where

Ax = {s ∈ S : X(s) = x}.

We can express p(x) as a function or in a table:

x       x1       x2       x3       ...     xk
p(x)    p(x1)    p(x2)    p(x3)    ...     p(xk)

A function p(x) or px is a probability mass function (pmf) of some random variable X if

• p(x) ≥ 0 for all x
• ∑_{all xi} p(xi) = 1


An alternative way to represent a probability distribution is by using the cumulative distribution function (cdf):

F(x) = P(X ≤ x) = ∑_{y : y ≤ x} p(y),   −∞ < x < ∞

For a discrete random variable taking values x1 < x2 < · · · < xk,

p(xj) = F(xj) − F(xj−1),   j = 2, . . . , k.

1.3.1 Parameters of Probability Distributions

Suppose that for each value of α, p(x; α) is a probability distribution for a random variable X. Then α is said to be a parameter of the distribution. The collection of distributions

{p(x; α) : α ∈ A}

is called a parametric family of distributions.


1.3.2 Expected Values of Discrete RV

• Mean of a discrete RV

  The mean of a rv X is

  E[X] = µ = ∑_{x ∈ D} x · p(x)

  where D is the set of possible values of X.

• Expected value of a function of X

  The expected value of a function h(X) is:

  E[h(X)] = µ_{h(X)} = ∑_{x ∈ D} h(x) · p(x)

  If h(X) is a linear function of the form aX + b:

  E(aX + b) = aE(X) + b


• Variance of a discrete R.V.

  The variance of a discrete R.V. is

  V(X) = σ² = σ²_X = E[(X − µ)²]

• The standard deviation of X is

  σ = σ_X = √V(X) = √σ² = SD(X)

• The variance of a linear function aX + b is

  V(aX + b) = a²V(X) = a²σ²

• Implications:

  – V(aX) = a²V(X)
  – SD(aX) = |a| SD(X)
  – V(X + b) = V(X)
  – SD(X + b) = SD(X)


1.4 Continuous Random Variables

A continuous random variable can assume any value in an interval on the real line. The distribution of a continuous random variable is determined by the probability density function (pdf). The pdf of X is a function f(x) such that for any numbers a and b where a < b,

P(a ≤ X ≤ b) = ∫_a^b f(x) dx

The graph of f(x) is often called a density curve.

[Figure: example of a pdf, a density curve plotted for 0 ≤ x ≤ 2.]

For f(x) to be a pdf it must satisfy:

1. f(x) ≥ 0 for all x

2. ∫_{−∞}^{∞} f(x) dx = 1 (area under curve is 1).

An alternative method of expressing the distribution of a continuous random variable is using the cumulative distribution function (cdf). The cdf of a continuous RV is defined as:

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(y) dy

[Figure: side-by-side example of a pdf and the corresponding cdf, plotted for 0 ≤ x ≤ 2.5.]

Useful Properties:

• P(a ≤ X ≤ b) = F(b) − F(a)

• If X is a continuous RV with pdf f(x) and cdf F(x), then at every x at which F′(x) exists:

  F′(x) = f(x)


1.4.1 Percentiles

For 0 ≤ p ≤ 1, the (100p)th percentile of the distribution of a continuous RV X is a value xp such that p = F(xp).

1.4.2 Expected Values, Mean and Variance

The expected value of a function h(X) for a continuous rv is:

E[h(X)] = ∫_{−∞}^{∞} h(x) · f(x) dx

Some special cases:

Mean: E[X] = µ = ∫_{−∞}^{∞} x · f(x) dx

Variance: E[(X − µ)²] = σ² = ∫_{−∞}^{∞} (x − µ)² · f(x) dx

Remember: E[(X − µ)²] = E[X²] − (E[X])² = σ²

Note: The properties of expectation and variance of linear functions also hold in the continuous case.


1.5 Joint Probability Distributions

The joint cdf of two random variables X and Y is defined by

F(x, y) = P[X ≤ x, Y ≤ y].

We say that (X, Y) is discrete if

P[(X, Y) ∈ A] = ∑_{(x,y) ∈ A} p(x, y)

where p(x, y) = P[X = x, Y = y] is the joint pmf of (X, Y).

We say that (X, Y) are jointly continuous rvs if there exists a function called the joint pdf such that

P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy

The expectation of a function h(X, Y) of (X, Y) is

E[h(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(x, y) f(x, y) dx dy   if X, Y continuous

E[h(X, Y)] = ∑_x ∑_y h(x, y) p(x, y)   if X, Y discrete


The random variables X and Y are independent if

P[X ∈ A, Y ∈ B] = P[X ∈ A] × P[Y ∈ B]

for any events A and B. This is equivalent to

F(x, y) = FX(x)FY(y), for all x, y   (any rvs)
f(x, y) = fX(x)fY(y), for all x, y   (continuous rvs)
p(x, y) = pX(x)pY(y), for all x, y   (discrete rvs)

1.5.1 Conditional Distributions

Conditional distributions are a basic tool in the description of stochastic processes.

The conditional probability mass function of Y given that X = x0 is

fY|x0(y) = fXY(x0, y) / fX(x0),   for fX(x0) > 0.

Similarly, the conditional probability mass function of X given that Y = y0 is

fX|y0(x) = fXY(x, y0) / fY(y0),   for fY(y0) > 0.


Properties of the Conditional PMF

• fY|x0(y) ≥ 0
• ∑_y fY|x0(y) = 1
• P(Y = y|X = x0) = fY|x0(y)
• We can find expectations using conditional pmfs.

Conditional Mean and Variance:

• E(Y|x) = µY|x = ∑_y y fY|x(y)
• V(Y|x) = σ²Y|x = ∑_y (y − µY|x)² fY|x(y)

1.5.2 Conditional Distributions for Bivariate Continuous RVs

Given that (X, Y) are continuous rvs with pdf fXY(x, y), the conditional pdf of Y given that X = x0 is

fY|x0(y) = fXY(x0, y) / fX(x0)   for fX(x0) > 0.

Properties:

• fY|x0(y) ≥ 0
• ∫_{−∞}^{∞} fY|x0(y) dy = 1
• P(Y ∈ B|X = x0) = ∫_B fY|x0(y) dy


Conditional Mean and Variance:

• E(Y|X = x0) = µY|x0 = ∫_{−∞}^{∞} y fY|x0(y) dy

• V(Y|X = x0) = σ²Y|x0 = ∫_{−∞}^{∞} (y − µY|x0)² fY|x0(y) dy

Remark: The conditional mean E(Y|X = x0) is known as the regression function of Y on x.


1.6 Some Special Cases

1.6.1 Poisson Distribution

Consider these random variables:

• Number of telephone calls received per hour.
• Number of days school is closed due to snow.
• Number of trees in an area of forest.
• Number of bacteria in a culture.

A random variable X, the number of events occurring during a given time interval or in a specified region, is called a Poisson random variable. The corresponding distribution:

X ∼ Poisson(λ)

where λ is the rate per unit time or rate per unit area.

p(x; λ) = P(X = x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, . . . ,   λ > 0

The mean and variance of a Poisson random variable are

E[X] = µ = λ
V[X] = σ² = λ
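A quick numerical check of the pmf and its moments, sketched with truncated sums. The choice λ = 3 and the truncation point x = 60 are illustrative; for λ = 3 the terms beyond x = 60 are far below floating-point precision.

```python
import math

def poisson_pmf(x, lam):
    """p(x; lambda) = e^(-lambda) * lambda**x / x!"""
    return math.exp(-lam) * lam ** x / math.factorial(x)

lam = 3.0
total = sum(poisson_pmf(x, lam) for x in range(60))
mean = sum(x * poisson_pmf(x, lam) for x in range(60))
variance = sum((x - mean) ** 2 * poisson_pmf(x, lam) for x in range(60))
# total ≈ 1, mean ≈ lam, variance ≈ lam, as stated above.
```

The equality of mean and variance is the hallmark of the Poisson distribution and is used later when the Poisson process appears.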


1.6.2 The Poisson Process

We will be examining various stochastic processes that correspond to some of the deterministic population models studied so far.

A stochastic process {X(t), t ∈ T} is an indexed collection of random variables. We are interested in several properties of a stochastic process:

• The distribution of X(t) for a fixed time t.
• The joint distribution of (X(t1), X(t2), . . . , X(tk)) for any times t1, . . . , tk.
• The appearance of a sample path or realization of the stochastic process: {X(t; s) : t ∈ T}.

We will show how the Poisson distribution arises in a stochastic process for which we make a few reasonable assumptions. We first define a counting process.

A stochastic process {X(t), t ≥ 0} is said to be a counting process if

1. X(t) ≥ 0
2. X(t) is integer valued
3. If s < t, then X(s) ≤ X(t).
4. For s < t, X(t) − X(s) equals the number of events that have occurred in the interval (s, t].


Let {X(t), t ≥ 0} be a counting process that satisfies

1. X(0) = 0
2. X(s) is independent of X(t + s) − X(s) for any s, t > 0 (independent increments).
3. The distribution of X(t + s) − X(s) depends only on t for any s, t > 0 (stationary increments).
4. P(X(t + h) − X(t) = 1) = λh + o(h)
5. P(X(t + h) − X(t) ≥ 2) = o(h)

Then we can show that

Px(t) = P(X(t) = x) = e^{−λt} (λt)^x / x!,   x = 0, 1, 2, . . .

The process {X(t), t ≥ 0} is called a Poisson process.

Outline of Proof

Consider

P0(t + h) = P0(t)P0(h) = P0(t)(1 − λh) + o(h).

Then

[P0(t + h) − P0(t)] / h = −λP0(t) + o(h)/h.

Let h → 0 and obtain P0′(t) = −λP0(t). This implies that P0(t) = e^{−λt}.


For x ≥ 1,

Px(t + h) = P[X(t) = x, X(t + h) − X(t) = 0]
          + P[X(t) = x − 1, X(t + h) − X(t) = 1]
          + P[X(t + h) = x, X(t + h) − X(t) ≥ 2]
          = Px(t)P0(h) + Px−1(t)P1(h) + o(h)
          = (1 − λh)Px(t) + λhPx−1(t) + o(h)

Divide both sides by h and let h → 0:

Px′(t) = −λPx(t) + λPx−1(t),   x = 1, 2, . . . .

The solution to this system of differential equations is

Px(t) = e^{−λt} (λt)^x / x!,   x = 1, 2, . . . .

Figures: The first figure on the next page illustrates the variation in a Poisson process with λ = 1 for various times. The red bars represent the pmf of the Poisson process at t = 1, 2, . . . , 10. The second figure depicts 10 realizations of a Poisson process with λ = 1.
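Realizations like those in the figures can be generated directly. The sketch below uses the fact, developed in Section 1.7.1, that the times between events of a rate-λ Poisson process are iid exponential(λ); the function name and the seed-based replication are illustrative choices.

```python
import random

def poisson_process_events(lam, t_max, seed=None):
    """Event times of a rate-lam Poisson process on [0, t_max], built from
    iid exponential(lam) waiting times between events."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > t_max:
            return events
        events.append(t)

# X(t_max) = number of events by t_max; here E[X(100)] = lam * 100 = 100.
counts = [len(poisson_process_events(1.0, 100.0, seed=s)) for s in range(50)]
```

Averaging the counts over the 50 replications gives a value near 100, matching E[X(t)] = λt.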


[Figure: pmfs of a Poisson process with λ = 1 at times t = 1, 2, . . . , 10.]

[Figure: ten realizations of a Poisson process with λ = 1, counts plotted over 0 ≤ time ≤ 100.]


1.6.3 Normal Distribution

The normal or Gaussian distribution has the pdf:

f(x; µ, σ) = (1 / (√(2π) σ)) e^{−(x−µ)²/(2σ²)},   −∞ < x < ∞

The mean and variance are

E(X) = µ
V(X) = σ²

The shorthand for this family of distributions is:

X ∼ N(µ, σ²)

Some Normal Distributions

[Figure: normal densities with different means (µ = −2, 0, 2) and with different standard deviations (σ = 0.5, 1, 2), plotted on −4 ≤ x ≤ 4.]


1.7 Gamma Distribution

The gamma distribution is a family of distributions that yields a wide variety of skewed distributions. It is often used to model the lifetime of manufactured items.

Central to the gamma distribution is the gamma function:

Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx,   α > 0

Some properties of the gamma function:

1. For α > 1, Γ(α) = (α − 1)Γ(α − 1).
2. If n is a positive integer: Γ(n) = (n − 1)!
3. Γ(1/2) = √π.


Using the properties of the gamma function, we obtain the pdf of the gamma(α, β) distribution:

f(x; α, β) = (1 / (β^α Γ(α))) x^{α−1} e^{−x/β},   x ≥ 0, α > 0, β > 0

The mean and variance of the gamma distribution are:

E(X) = αβ
V(X) = αβ²

• α is the shape parameter and β is the scale parameter.

• If β = 1, we call this the standard gamma distribution.

• If α = 1, the distribution is the exponential distribution. Letting λ = 1/β, the pdf of the exponential distribution is given by

  f(x; λ) = λe^{−λx},   x ≥ 0, λ > 0
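The pdf and the moment formulas above can be checked by numerical integration. This is a sketch: the parameter values α = 3, β = 2, the integration limits, and the step size are arbitrary illustrative choices, with the tail beyond x = 100 negligible for these parameters.

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma density with shape alpha and scale beta."""
    return x ** (alpha - 1) * math.exp(-x / beta) / (beta ** alpha * math.gamma(alpha))

alpha, beta, dx = 3.0, 2.0, 1e-3
xs = [k * dx for k in range(1, 100000)]          # grid on (0, 100)
pdf_vals = [gamma_pdf(x, alpha, beta) for x in xs]
area = sum(pdf_vals) * dx                        # should be ≈ 1
mean = sum(x * v for x, v in zip(xs, pdf_vals)) * dx  # should be ≈ alpha * beta
```

The computed area is close to 1 and the computed mean close to αβ = 6, consistent with the displayed formulas.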


[Figure: standard gamma density curves for α = 0.5, 1, 2, 3, 4, 5, and gamma density curves with α = 2 and scales β = 0.5, 1, 2, 4, 8, plotted on 0 ≤ x ≤ 10.]


1.7.1 Distribution of Elapsed Time in the Poisson Process

Recall the Poisson distribution and how it is used to calculate probabilities for certain events in time or space.

Let T1 denote the time of the first event and Tn, n = 2, 3, . . . , be the time between the (n − 1)st and nth events. Then (T1, T2, . . .) are independent and identically distributed (iid) exponential(λ) random variables.

Let Sn = T1 + · · · + Tn. Then Sn has a gamma distribution with shape α = n and scale β = 1/λ. This can be seen from the fact that

Sn ≤ t ⇔ X(t) ≥ n.

Hence,

P[Sn ≤ t] = P[X(t) ≥ n] = ∑_{j=n}^{∞} e^{−λt} (λt)^j / j!

We differentiate this to get the pdf of Sn:

f(t) = (λ^n / (n − 1)!) t^{n−1} e^{−λt},   t > 0.
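The identity P[Sn ≤ t] = P[X(t) ≥ n] can be verified numerically by integrating the gamma pdf and comparing with the truncated Poisson tail sum. The test point n = 3, λ = 2, t = 1.5, the step size, and the truncation at 60 terms are illustrative choices (the omitted terms are negligible here).

```python
import math

def s_n_pdf(t, n, lam):
    """pdf of S_n: lam**n * t**(n-1) * exp(-lam*t) / (n-1)!"""
    return lam ** n * t ** (n - 1) * math.exp(-lam * t) / math.factorial(n - 1)

def poisson_tail(n, lam, t, terms=60):
    """P[X(t) >= n], truncating the infinite series."""
    return sum(math.exp(-lam * t) * (lam * t) ** j / math.factorial(j)
               for j in range(n, n + terms))

# Left-endpoint Riemann sum for P[S_n <= t]:
n, lam, t, dt = 3, 2.0, 1.5, 1e-4
cdf = sum(s_n_pdf(k * dt, n, lam) * dt for k in range(1, int(t / dt) + 1))
```

The two quantities agree to several decimal places, which is exactly the event equivalence Sn ≤ t ⇔ X(t) ≥ n.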


2 An Introduction to Stochastic Population Models

References

[1] J. H. Matis and T. R. Kiffe. Stochastic Population Models, a Compartmental Perspective. Springer-Verlag, New York, 2000.

[2] E. Renshaw. Modelling Biological Populations in Space and Time. Cambridge University Press, Cambridge, 1991.

2.1 Some Ecological Examples

1. Spread of muskrats in the Netherlands
2. Invasion by Africanized honey bee
3. Infestations of honey bees by varroa mites


2.2 A Quick Contrast Between Deterministic and Stochastic Models

We consider the linear birth-death model where each individual gives birth at rate a1 and dies at rate a2. We let X(t) be the number of individuals in the population at time t.

2.2.1 Deterministic Model

The differential equation for the deterministic model is

dX(t)/dt = (a1 − a2)X(t),

with solution

X(t) = X(0)e^{(a1−a2)t}.

The deterministic model results in either exponential growth or exponential decay of the population.


2.2.2 Stochastic Model

For the stochastic model, we make assumptions concerning events in a small time interval (t, t + ∆t) of length ∆t. We suppose that each individual gives birth with probability a1∆t and dies with probability a2∆t. This leads to the assumptions:

P(X(t + ∆t) = x + 1 | X(t) = x) = a1 x∆t + o(∆t)
P(X(t + ∆t) = x − 1 | X(t) = x) = a2 x∆t + o(∆t)
P(X(t + ∆t) = x | X(t) = x) = 1 − (a1 + a2)x∆t + o(∆t)

We let px(t) = P[X(t) = x]. The above assumptions imply that

px(t + ∆t) = px(t)[1 − (a1 + a2)x∆t] + px−1(t)(x − 1)a1∆t + px+1(t)(x + 1)a2∆t

since X(t + ∆t) = x can be reached from X(t) = x − 1, x, or x + 1 in a small time interval. Letting ∆t → 0, we obtain the system of differential equations, called the Kolmogorov forward equations, for px(t):

ṗx(t) = a1(x − 1)px−1(t) − (a1 + a2)x px(t) + a2(x + 1)px+1(t)

for x > 0, and

ṗ0(t) = a2 p1(t)


The solution to these equations can be found using standard differential equation techniques. We now focus on the stochastic aspects.

Since the individuals behave independently, we can view the population as comprising X0 separate populations, each of size 1. When X0 = 1, the population size X(t) follows a geometric distribution with pmf

p0(t) = α(t)
px(t) = [1 − α(t)][1 − β(t)][β(t)]^{x−1},   x = 1, 2, . . .

where

α(t) = a2(e^{(a1−a2)t} − 1) / (a1 e^{(a1−a2)t} − a2)
β(t) = a1(e^{(a1−a2)t} − 1) / (a1 e^{(a1−a2)t} − a2)

Standard results for the geometric distribution yield the mean and variance functions for the population size:

E[X(t)] = X0 e^{(a1−a2)t}

V[X(t)] = X0 [(a1 + a2)/(a1 − a2)] e^{(a1−a2)t} (e^{(a1−a2)t} − 1)

• The mean function agrees with the deterministic model; however, the variance function depends on the magnitudes of the birth and death rates as well as on their difference.
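The agreement between the stochastic mean and the deterministic solution can be checked by integrating the Kolmogorov forward equations numerically. This is a sketch under stated assumptions: the Euler scheme, the state-space truncation at x_max, the step size, and the test parameters a1 = 2, a2 = 1, X0 = 5 are all choices made here, not part of the slides.

```python
import math

def mean_from_forward_equations(a1, a2, x0, t_end, x_max=200, dt=1e-3):
    """Euler-integrate the Kolmogorov forward equations for the linear
    birth-death process on the truncated state space {0, ..., x_max}
    and return E[X(t_end)]."""
    p = [0.0] * (x_max + 1)
    p[x0] = 1.0
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (x_max + 1)
        for x in range(x_max + 1):
            dp[x] -= (a1 + a2) * x * p[x]        # flow out of state x
            if x > 0:
                dp[x] += a1 * (x - 1) * p[x - 1]  # birth from x - 1
            if x < x_max:
                dp[x] += a2 * (x + 1) * p[x + 1]  # death from x + 1
        p = [pi + dt * di for pi, di in zip(p, dp)]
    return sum(x * px for x, px in enumerate(p))

# With a1 = 2, a2 = 1, X0 = 5 the mean should track 5 * exp((a1 - a2) * t).
```

At t = 1 the integrated mean is close to 5e ≈ 13.59, matching E[X(t)] = X0 e^{(a1−a2)t} up to discretization error.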


• For the deterministic model, there is exponential growth if the birth rate exceeds the death rate. However, for the stochastic model, there is a probability of extinction even in this case:

  p0(t) = α(t)^{X0}

• We now look at the probability of ultimate extinction:

  – If a1 < a2, p0(∞) = 1.
  – If a1 > a2, p0(∞) = (a2/a1)^{X0}.
  – If a1 = a2, p0(t) = [a2 t/(1 + a2 t)]^{X0} → 1 as t → ∞.

• It is easy to simulate the stochastic model. By examining sample paths, one can see how single realizations of a process can give misleading results.

  – One can show that the time between events is exponentially distributed with parameter R(X) = (a1 + a2)X(t).
  – A birth occurs at this time with probability a1/(a1 + a2).
  – Otherwise, a death occurs.
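The event-by-event recipe above can be sketched directly in code. The function name, the seed, and the chosen test parameters (a1 = 1, a2 = 5, X0 = 10, a subcritical case where ultimate extinction has probability 1) are illustrative choices.

```python
import random

def simulate_birth_death(a1, a2, x0, t_max, seed=None):
    """One sample path of the linear birth-death process: the waiting time
    to the next event is exponential with rate (a1 + a2) * X(t), and the
    event is a birth with probability a1 / (a1 + a2), else a death."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while x > 0:                       # state 0 is absorbing (extinction)
        t += rng.expovariate((a1 + a2) * x)
        if t > t_max:
            break
        x += 1 if rng.random() < a1 / (a1 + a2) else -1
        path.append((t, x))
    return path
```

Repeating this for the four parameter sets on the following slides reproduces the qualitative pictures there: certain extinction when a1 < a2, and a mix of growth and extinction when a1 > a2.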


• Simulation with a1 = 5, a2 = 1, X0 = 10

[Figure: several simulated sample paths; with births dominating deaths, the paths grow rapidly from X0 = 10.]


• Simulation with a1 = 1, a2 = 5, X0 = 10

[Figure: several simulated sample paths; with deaths dominating births, the paths decline from X0 = 10 to extinction within about two time units.]


• Simulation with a1 = 5, a2 = 5, X0 = 10

[Figure: several simulated sample paths; with equal birth and death rates, the paths fluctuate around the initial size X0 = 10.]


• Simulation with a1 = 5, a2 = 4, X0 = 5

[Figure: several simulated sample paths; with the birth rate only slightly above the death rate, some paths grow while others go extinct.]


3 Basic Methods for Single Population Models

Basic notation:

• X(t) = the random population size at time t

• px(t) = Prob[X(t) = x], the probability that the random population size equals x at time t

• p(t) = [p0(t), p1(t), . . . , px(t), . . .], the probability distribution of X(t)

Goal: Solve for p(t) for any t > 0 based on simple assumptions concerning X(t).

Once we know p(t), we (in theory) know all the properties of X(t).


Example: Immigration-Death Model

X(t) = the number of insects in a field at a given time. Assume X(0) = 0.

1. P{X will increase by 1 unit due to immigration} = I∆t
2. P{X will decrease by 1 unit due to death} = µX ∆t, where the death rate is linear, µX = aX.

We will show later that this results in X(t) being a Poisson random variable with parameter

λ(t) = (1 − e^{−at})I/a.

The Corresponding Deterministic Model:

Letting the derivative of X(t) be Ẋ(t), the model is

Ẋ(t) = I − aX(t).

This has solution

X(t) = (1 − e^{−at})I/a
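That the stated formula solves the deterministic model can be spot-checked with a finite difference. This is a small sketch; the test point t = 5 and the difference step h are arbitrary illustrative choices.

```python
import math

def x_det(t, I, a):
    """Deterministic solution X(t) = (1 - exp(-a t)) I / a, with X(0) = 0."""
    return (1 - math.exp(-a * t)) * I / a

# Check Xdot = I - a X(t) with a central finite difference:
I, a, t, h = 10.0, 0.1, 5.0, 1e-5
deriv = (x_det(t + h, I, a) - x_det(t - h, I, a)) / (2 * h)
rhs = I - a * x_det(t, I, a)
```

The two sides agree to high precision, and X(0) = 0 holds exactly, confirming both the differential equation and the initial condition.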


3.1 Moments of X(t)

Moments are means of powers of X(t).

The mean or first moment of X(t) is

µ1(t) = E[X(t)] = ∑_{x=0}^{∞} x px(t)

The ith moment of X(t) is

µi(t) = E[(X(t))^i] = ∑_{x=0}^{∞} x^i px(t)

Special Case: A Poisson random variable X has probability mass function

p(x; λ) = P(X = x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, . . . ,   λ > 0

E[X] = ∑_{x=0}^{∞} x e^{−λ} λ^x / x! = ∑_{x=1}^{∞} x e^{−λ} λ^x / x!
     = λ ∑_{x=1}^{∞} e^{−λ} λ^{x−1} / (x − 1)!
     = λ ∑_{y=0}^{∞} e^{−λ} λ^y / y! = λ


3.2 Simulation of the Stochastic Process

• It is easy to simulate these basic stochastic processes by using a


random number generator to obtain random variables representing
the times between arrivals and the times between deaths.

• The times between arrivals will have an exponential distribution with


parameter I (mean 1/I ).

• The times between deaths will have an exponential distribution with


parameter µX = aX(t).

• An algorithm for simulation of the process can be summarized as


follows:

1. Set X(0) = 0.
2. Generate t1 from an exp(I) distribution. Set X(t1 ) = 1.
3. If X(ti−1 ) = 0, generate tI from an exp(I) distribution. Set
ti = ti−1 + tI and X(ti ) = 1.
4. Otherwise, generate tI from an exp(I) distribution and tD from an
exp(aX(ti−1 )) distribution.
– If tI < tD , set ti = ti−1 + tI and X(ti ) = X(ti−1 ) + 1.
– If tI > tD , set ti = ti−1 + tD and X(ti ) = X(ti−1 ) − 1.

5. Return to Step 3.


A simpler algorithm is the following:

1. Generate t∗ from an exp(I + aX(ti−1 )) distribution. Set
ti = ti−1 + t∗ .
2. Generate U from a Uniform(0, 1) distribution.
– If U < I/(I + aX(ti−1 )), set X(ti ) = X(ti−1 ) + 1.
– Otherwise, set X(ti ) = X(ti−1 ) − 1.
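A minimal Python sketch of this two-step algorithm; function and variable names are our own, and the parameter values are illustrative:

```python
import random

def simulate_id(I, a, t_max, seed=1):
    """Simulate the immigration-death process by the two-step algorithm:
    wait an exp(I + aX) time, then immigration w.p. I/(I + aX), else a death."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    times, sizes = [0.0], [0]
    while True:
        rate = I + a * x                   # total event rate I + aX
        t += rng.expovariate(rate)         # t* ~ exp(I + aX)
        if t > t_max:
            break
        if rng.random() < I / rate:        # immigration
            x += 1
        else:                              # death (impossible when x = 0)
            x -= 1
        times.append(t)
        sizes.append(x)
    return times, sizes

times, sizes = simulate_id(I=10.0, a=0.1, t_max=50.0)
```

Because the death probability is zero when X = 0, the sample path can never go negative.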


Example: Let X(t) denote the number of corn earworms in a field at


time t. The immigration rate is I = 10 insects per day and the
departure (death) rate is µX = 0.1X per day. The process was
generated four times using these parameters. The solid line is the mean
function (or deterministic curve).
[Figure: four simulated realizations of the corn earworm process (sp versus time, 0 ≤ t ≤ 50) with the deterministic mean curve.]


3.3 Kolmogorov Differential Equations for Probability


Functions

A standard approach to solving for p(t) is using the Kolmogorov


differential equations. This approach makes use of assumptions
concerning the probabilities of various events occurring in a small interval
of length ∆t.

Suppose that X(t + ∆t) = x. There are the following possibilities for
the way this could occur starting at time t:

1. X(t) = x with no change from t to t + ∆t


2. X(t) = x − 1 with only a single immigration in ∆t
3. X(t) = x + 1 with only a single death in ∆t
4. Other possibilities involving two or more independent changes in ∆t

These assumptions yield the expression for the probability that


X(t + ∆t) = x:

px (t + ∆t) = P [X(t + ∆t) = x | X(t) = x] P [X(t) = x]
            + P [X(t + ∆t) = x | X(t) = x + 1] P [X(t) = x + 1]
            + P [X(t + ∆t) = x | X(t) = x − 1] P [X(t) = x − 1]
            + P [X(t + ∆t) = x | X(t) ≠ x, x − 1, x + 1] P [X(t) ≠ x, x − 1, x + 1]
          = px (t)[1 − I∆t − ax∆t] + px+1 (t)[a(x + 1)∆t] + px−1 (t)[I∆t] + o(∆t)


We subtract px (t), divide by ∆t, and then take the limit as ∆t → 0:

ṗx (t) = Ipx−1 (t) − (I + ax)px (t) + a(x + 1)px+1 (t) for x > 0

and
ṗ0 (t) = −Ip0 (t) + ap1 (t)

The solution to this set of differential equations is the Poisson distribution


with mean λ(t) = (1 − e−at )I/a.

In matrix form, the Kolmogorov equations can be written in the form

ṗ(t) = p(t)R,
where R is a tridiagonal matrix. For our immigration-death model, the R
matrix is infinite with elements, for i, j ≥ 0:

ri,i+1 = I
ri,i−1 = ai
ri,i = −(ai + I)
ri,j = 0 for |i − j| > 1.
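Truncating the state space at a level N beyond which the probabilities are negligible, the system ṗ(t) = p(t)R can be integrated numerically and compared with the Poisson solution; a rough Euler sketch (the truncation level and step size are ad hoc choices):

```python
import math

def forward_solution_error(I=1.4, a=0.08, t_end=10.0, N=60, dt=1e-3):
    """Euler-integrate the truncated forward equations
    ṗx = I px−1 − (I + ax)px + a(x+1)px+1 starting from X(0) = 0, and
    return the max deviation from the Poisson(λ(t)) pmf."""
    p = [0.0] * (N + 1)
    p[0] = 1.0                                    # X(0) = 0
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (N + 1)
        for x in range(N + 1):
            dp[x] = -(I + a * x) * p[x]
            if x > 0:
                dp[x] += I * p[x - 1]
            if x < N:
                dp[x] += a * (x + 1) * p[x + 1]
        p = [px + dt * dx for px, dx in zip(p, dp)]
    lam = (1 - math.exp(-a * t_end)) * I / a      # λ(t) = (1 − e^{−at})I/a
    pois = [math.exp(-lam) * lam**x / math.factorial(x) for x in range(N + 1)]
    return max(abs(u - v) for u, v in zip(p, pois))

err = forward_solution_error()
```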


3.4 Generating Functions

Generating functions are useful tools for finding the population size
distribution and moments of this distribution.

Suppose that X(t) is a discrete random variable with probability mass


function px (t), x = 0, 1, 2, . . . .

• The probability generating function (pgf) is defined as


P (s, t) = Σ_{x=0}^∞ s^x px (t).

Probabilities can be obtained by differentiating P (s, t).

• The moment generating function (mgf) of X(t) is defined as


M (θ, t) = Σ_{x=0}^∞ e^{θx} px (t).

One can show that



M (θ, t) = Σ_{i=0}^∞ µi (t) θ^i / i!

Thus, one can find the ith moment of X(t) by differentiating the
mgf with respect to θ .


• The cumulant generating function (cgf) is

K(θ, t) = log(M (θ, t))

with power series expansion



K(θ, t) = Σ_{i=1}^∞ κi (t) θ^i / i!

The quantity κi (t) is called the ith cumulant of X(t). Cumulants


can be obtained by differentiating the cgf.

The cumulants are related to the moments:

κ1 (t) = E[X(t)] = µ1 (t)
κ2 (t) = E[(X(t) − µ1 (t))²] = V (X(t)) = µ2 (t) − [µ1 (t)]²
κ3 (t) = E[(X(t) − µ1 (t))³] = µ3 (t) − 3µ2 (t)µ1 (t) + 2[µ1 (t)]³
and so on.


Example: The population size random variable has the Poisson


distribution with parameter λ(t).

The pmf is

px (t) = e^{−λ(t)} λ(t)^x / x!,   x = 0, 1, 2, . . .

• The pgf is

P (s, t) = Σ_{x=0}^∞ s^x e^{−λ(t)} λ(t)^x / x! = e^{−λ(t)} Σ_{x=0}^∞ [sλ(t)]^x / x! = e^{(s−1)λ(t)}

• The mgf is
M (θ, t) = P (e^θ , t) = e^{(e^θ − 1)λ(t)}

• The cgf is

K(θ, t) = log(M (θ, t)) = (eθ − 1)λ(t)

• The cumulants can be found by differentiation to be

κi (t) = λ(t), for all i


3.5 PDEs for Cumulant Generating Functions

For many models it is more practical to work with a PDE for the generating
function than with the system of differential equations for the probabilities.
We multiply the expression for ṗx (t) by s^x and sum over x:

Σ_x s^x ṗx = I Σ_x s^x px−1 − Σ_x (I + ax) s^x px + a Σ_x (x + 1) s^x px+1

The left hand side is ∂P (s, t)/∂t. Using the additional result that

∂P (s, t)/∂s = Σ_x x s^{x−1} px (t)
we obtain

∂P (s, t)/∂t = I(s − 1)P (s, t) + a(1 − s) ∂P (s, t)/∂s

The initial condition corresponding to X(0) = 0 is P (s, 0) = 1. The


solution to this linear PDE is

P (s, t) = exp{(s − 1)(1 − e−at )I/a}


The “random variable technique” in Bailey’s classic book on stochastic


processes enables one to directly write down the PDEs for the
generating functions for birth-death-migration models.

Let the possible changes in population size X(t) from t to t + ∆t be


denoted as

P [X(t) changes by j units] = fj (X)∆t + o(∆t)

For our immigration death model, the possible changes (or intensity
functions) are

f1 = I and f−1 = ax

For intensity functions of the form


f (x) = Σ_k ak x^k

we define the operator notation:


f (s ∂/∂s) P = Σ_k ak s^k ∂^k P/∂s^k   and
f (∂/∂θ) M = Σ_k ak ∂^k M/∂θ^k

Bailey provides the following operator equations for the pgf and mgf:

∂P/∂t = Σ_{j≠0} (s^j − 1) fj (s ∂/∂s) P (s, t)
∂M/∂t = Σ_{j≠0} (e^{jθ} − 1) fj (∂/∂θ) M (θ, t)


Example: Immigration-Death Process

For our immigration death model, the possible changes (or intensity
functions) are

f1 = I and f−1 = ax

Thus,

f1 (s ∂/∂s) P (s, t) = I s^0 ∂^0 P/∂s^0 = I P (s, t)
f−1 (s ∂/∂s) P (s, t) = a s ∂P/∂s
Hence,

∂P (s, t)/∂t = I(s − 1)P (s, t) + (s^{−1} − 1) a s ∂P (s, t)/∂s

Also,
f1 (∂/∂θ) M (θ, t) = I ∂^0 M/∂θ^0 = I M (θ, t)
f−1 (∂/∂θ) M (θ, t) = a ∂M/∂θ

We end up with
∂M/∂t = I(e^θ − 1)M + a(e^{−θ} − 1) ∂M/∂θ.
With the boundary condition M (θ, 0) = 1, the solution is

M (θ, t) = exp{(eθ − 1)(1 − e−at )I/a}


We can often find simpler PDEs for the cgf and use this to find ODEs for
the cumulants:

K(θ, t) = log M (θ, t).

Thus, for this model

∂K/∂t = I(e^θ − 1) + a(e^{−θ} − 1) ∂K/∂θ.

Using the series expansion of K and equating powers of θ , we obtain

κ̇1 (t) = I − aκ1 (t)


κ̇2 (t) = I + aκ1 (t) − 2aκ2 (t)
κ̇3 (t) = I − aκ1 (t) + 3aκ2 (t) − 3aκ3 (t)

With the initial conditions κ1 (0) = κ2 (0) = κ3 (0) = 0, the solution is

κ1 (t) = κ2 (t) = κ3 (t) = (1 − e−at )I/a
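These ODEs integrate easily by any numerical scheme; a crude Euler sketch confirming the common closed form (step size and parameter values are ad hoc):

```python
import math

I, a, t_end, dt = 1.4, 0.08, 30.0, 1e-3
k1 = k2 = k3 = 0.0                           # initial conditions κi(0) = 0
for _ in range(int(t_end / dt)):
    d1 = I - a * k1                          # κ̇1 = I − aκ1
    d2 = I + a * k1 - 2 * a * k2             # κ̇2 = I + aκ1 − 2aκ2
    d3 = I - a * k1 + 3 * a * k2 - 3 * a * k3
    k1, k2, k3 = k1 + dt * d1, k2 + dt * d2, k3 + dt * d3

exact = (1 - math.exp(-a * t_end)) * I / a   # (1 − e^{−at}) I/a
```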


This approach will prove useful for more complex models. We


summarize it as follows:

1. Use model assumptions to formulate intensity functions fj .

2. Use the operator equations to obtain the PDEs for the moment
generating function.

3. Transform these to PDEs for the cumulant generating function.

4. Use a series expansion to obtain differential equations for the


cumulants.

5. Solve the differential equations for the cumulants.


4 Some Linear One-Population Models

4.1 Linear Immigration-Death Models

We consider models for a population of size X(t) with linear death rate

µX = aX

and immigration rate I . We will relax the assumption on initial population


size.

4.1.1 Deterministic Model

The deterministic model is

Ẋ(t) = −aX + I.

The solution with initial value X(0) = X0 is

X(t) = X0 e−at + (1 − e−at )I/a.


Example: Let X(t) = the number of Africanized honey bee (AHB)


colonies at time t in a given region. Suppose the following assumed
parameters:

I = 1.4 colonies/time
a = 0.08 (time−1 )
X(0) = 2 colonies
The deterministic solution is

X(t) = 17.5 − 15.5e−0.08t .


[Figure: deterministic solution, number of colonies versus time, 0 ≤ t ≤ 60.]


4.1.2 Stochastic Model

Earlier we found the PDE for the pgf

∂P (s, t)/∂t = I(s − 1)P (s, t) + a(1 − s) ∂P (s, t)/∂s

The solution corresponding to X(0) = X0 is

P (s, t) = [1 + (s − 1)e^{−at}]^{X0} exp{(s − 1)(1 − e^{−at})I/a}

The pgf (or mgf) can be used to determine various properties of the
probability distribution of X(t).

• Consider the limiting distribution of the equilibrium population size


X ∗ as t → ∞. Since a > 0, the pgf of X ∗ is

P (s, ∞) = exp{(s − 1)I/a}.

This is the pgf of the Poisson distribution with parameter λ = I/a.


The limiting distribution is independent of the initial population size,
X0 .


• Note that the pgf is the product of two factors.


– The first factor is the pgf of a binomial distribution with n = X0
and p = e−at .
– The second factor is the pgf of a Poisson distribution with
parameter λ = (1 − e−at )I/a.
• This implies that we can write

X(t) = X1 (t) + X2 (t)

where X1 (t) and X2 (t) are independent random variables with the
above binomial and Poisson distributions, respectively.

• The moment generating function is

M (θ, t) = P (eθ , t)

The cumulant generating function then is

K(θ, t) = (eθ − 1)(1 − e−at )I/a + X0 log[1 + (eθ − 1)e−at ]

The first three cumulants are

µ(t) = X0 e−at + (1 − e−at )I/a


σ 2 (t) = µ(t) − X0 e−2at
κ3 (t) = σ 2 (t) − 2X0 e−2at (1 − e−at )
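The binomial-plus-Poisson decomposition gives a direct way to sample X(t) and check the cumulant formulas by Monte Carlo; a sketch (sample size and parameter values illustrative; Poisson draws use Knuth's multiplication method):

```python
import math, random

def sample_X(X0, I, a, t, rng):
    """One draw of X(t) = X1 + X2 with X1 ~ Binomial(X0, e^{−at})
    and X2 ~ Poisson((1 − e^{−at})I/a), per the pgf factorization."""
    p = math.exp(-a * t)
    lam = (1 - p) * I / a
    x1 = sum(rng.random() < p for _ in range(X0))
    x2, prod, thresh = 0, rng.random(), math.exp(-lam)   # Knuth's Poisson sampler
    while prod >= thresh:
        prod *= rng.random()
        x2 += 1
    return x1 + x2

rng = random.Random(0)
X0, I, a, t, n = 2, 1.4, 0.08, 10.0, 20000
draws = [sample_X(X0, I, a, t, rng) for _ in range(n)]
m = sum(draws) / n
v = sum((d - m) ** 2 for d in draws) / n
mu = X0 * math.exp(-a * t) + (1 - math.exp(-a * t)) * I / a   # µ(t)
sig2 = mu - X0 * math.exp(-2 * a * t)                          # σ²(t)
```

The sample mean and variance land close to µ(t) and σ²(t) above.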


4.1.3 Application to the AHB Population Dynamics

We return to the AHB population dynamics example with

I = 1.4 colonies/time
a = 0.08 (time−1 )
X(0) = 2 colonies
The deterministic solution is

X(t) = 17.5 − 15.5e−0.08t .

• The equilibrium solution is X ∗ = 17.5. For the stochastic model,


the equilibrium solution X ∗ is now a Poisson random variable with
parameter I/a = 17.5. The probability distribution and its
saddlepoint approximation appear in the figure:
[Figure: equilibrium Poisson(17.5) probability distribution and its saddlepoint approximation, x = 5, . . . , 30.]


• The transient probability distributions are also of interest. They could


be obtained directly from the pgf or mgf. To illustrate the variation in
the AHB model, we will simulate the process several times using the
assumed parameters. Notice the large amount of variation about the
mean function.

Simulations of AHB Population Dynamics


[Figure: simulated AHB sample paths, colonies versus time, 0 ≤ t ≤ 60.]


• Several more sample paths with the same parameter values:

[Figure: eight additional simulated sample paths (sp versus time).]


4.2 Linear Birth-Immigration-Death Models

We now consider a process that has a linear birth rate in addition to the
linear death rate:

λX = a1 X and µX = a2 X

The immigration rate is assumed to equal I .

4.2.1 Solution to the Deterministic Model

The deterministic model can be written as

Ẋ(t) = aX(t) + I where a = a1 − a2 .

The solution is

X(t) = X0 eat + (eat − 1)I/a.

If a < 0, the equilibrium value is −I/a.


4.2.2 Probability Distributions for the Stochastic Model

The Kolmogorov forward equations are

ṗx (t) = [I+a1 (x−1)]px−1 (t)−[I+(a1 +a2 )x]px (t)+a2 (x+1)px+1 (t)

for x > 0 and


ṗ0 (t) = −Ip0 (t) + a2 p1 (t)

The R matrix is tridiagonal with elements, for i, j ≥ 0:

ri,i+1 = I + ia1
ri,i−1 = ia2
ri,i = −I − i(a1 + a2 )
ri,j = 0 for |i − j| > 1.

The equilibrium distribution can be derived from the Kolmogorov


equations by setting ṗ(t) = 0. Letting πi = pi (∞), we get

π1 = π0 (I/a2 )
π2 = π0 I(I + a1 )/(2a2²)
...
πi = π0 (a1 /a2 )^i C(i − 1 + I/a1 , i)

where C(·, ·) denotes a binomial coefficient.


If a1 < a2 , we can solve for π0 by summing and setting the sum equal
to 1:

π0 = (−a/a2 )^{I/a1}
We find that the distribution of X ∗ is the negative binomial distribution
with pmf

πi = C(k − 1 + i, i) p^k (1 − p)^i

where k = I/a1 and p = −a/a2 .

4.2.3 Generating Functions

The intensity functions are

f1 = I + a1 x and f−1 = a2 x.

The resulting PDE is

∂P/∂t = I(s − 1)P (s, t) + [a1 s(s − 1) + a2 (1 − s)] ∂P (s, t)/∂s.
The analytical solution is

P (s, t) = a^{I/a1} {a2 (e^{at} − 1) − (a2 e^{at} − a1 )s}^{X0} / {(a1 e^{at} − a2 ) − a1 s(e^{at} − 1)}^{X0 + I/a1}


It would be difficult to solve for the transition probabilities by successive


differentiation. However,

p0 (t) = P (0, t) = a^{I/a1} (a2 e^{at} − a2 )^{X0} (a1 e^{at} − a2 )^{−X0 − I/a1} .

The cumulant generating function is given by

∂K/∂t = I(e^θ − 1) + {a1 (e^θ − 1) + a2 (e^{−θ} − 1)} ∂K/∂θ.

Using a series expansion, we obtain ODEs for the first three cumulants:

κ̇1 (t) = I + aκ1
κ̇2 (t) = I + cκ1 + 2aκ2
κ̇3 (t) = I + aκ1 + 3cκ2 + 3aκ3

where c = a1 + a2 .

These can be solved recursively when X(0) = X0 :

κ1 (t) = X0 e^{at} + (e^{at} − 1)I/a
κ2 (t) = X0 c e^{at}(e^{at} − 1)/a + I(e^{at} − 1)(a1 e^{at} − a2 )/a²
κ3 (t) = X0 e^{at}[3c²(e^{at} − 1)² + a²(e^{2at} − 1)]/(2a²)
       + I[−2c a2 + (3c² − a²)e^{at} − 6a1 c e^{2at} + 4a1² e^{3at}]/(2a³)


4.2.4 Application to AHB

Consider the linear birth-immigration-death model with parameters:

I = 1.4 a1 = 0.08
X(0) = 2 a2 = 0.16
Since the negative net growth rate is a = −0.08, the solution to the
deterministic model is the same as the linear death-immigration process
with the same death rate.

However, there is a large difference between the stochastic models in


the two situations. We saw earlier that the equilibrium distribution of X ∗
was Poisson with mean 17.5 for the LID process. For the LBID model,
the equilibrium distribution is negative binomial with k = 17.5 and
p = 0.5. The LBID has much greater variance.
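These equilibrium claims are easy to check numerically; the sketch below evaluates the negative binomial pmf through log-gamma (so the noninteger k = 17.5 poses no problem) and confirms the mean of 17.5 and a variance of k(1 − p)/p² = 35, double the Poisson variance:

```python
import math

def nbinom_pmf(i, k, p):
    """πi = C(k−1+i, i) p^k (1−p)^i, computed via log-gamma so that a
    noninteger k (here k = 17.5) is handled."""
    logc = math.lgamma(k + i) - math.lgamma(i + 1) - math.lgamma(k)
    return math.exp(logc + k * math.log(p) + i * math.log(1 - p))

k, p = 17.5, 0.5                       # k = I/a1, p = −a/a2 for the AHB values
mean = sum(i * nbinom_pmf(i, k, p) for i in range(500))
var = sum((i - mean) ** 2 * nbinom_pmf(i, k, p) for i in range(500))
# mean k(1−p)/p = 17.5 matches the LID model; variance k(1−p)/p² = 35
```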
[Figure: negative binomial equilibrium distribution (k = 17.5, p = 0.5), x = 0, . . . , 40.]


Variance Functions for LID and LBID Models

[Figure: variance functions of the LID and LBID models versus time, 0 ≤ t ≤ 60.]

4.2.5 Simulation of the LBID Process

• The times between arrivals due to immigration will have an


exponential distribution with parameter I (mean 1/I ).

• The times until the next death or birth will have an exponential
distribution with parameter cX(t), where c = a1 + a2 .

• The next event will be a birth with probability a1 /c and a death with
probability a2 /c.


• The algorithm for simulation the process can be summarized as


follows:
1. Set X(0) = X0 .
2. If X(ti−1 ) = 0, generate tI from an exp(I) distribution. Set
ti = ti−1 + tI and X(ti ) = 1.
3. Otherwise, generate tI from an exp(I) distribution and tD from an
exp(cX(ti−1 )) distribution, where c = a1 + a2 .
(a) If tI < tD , set ti = ti−1 + tI and X(ti ) = X(ti−1 ) + 1.
(b) If tI > tD , generate U , a Uniform(0, 1) variable.
i. If U < a1 /c, set ti = ti−1 + tD and X(ti ) = X(ti−1 ) + 1.
ii. If U > a1 /c, set ti = ti−1 + tD and X(ti ) = X(ti−1 ) − 1.
4. Return to Step 2.
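Equivalently, the competing exponential clocks can be collapsed into a single exp(I + (a1 + a2)X) waiting time followed by one uniform draw over the event types; a Python sketch with the AHB parameter values:

```python
import random

def simulate_lbid(X0, I, a1, a2, t_max, seed=2):
    """Simulate the linear birth-immigration-death process: total event rate
    I + (a1 + a2)X; up-jump w.p. (I + a1 X)/rate, down-jump otherwise."""
    rng = random.Random(seed)
    t, x = 0.0, X0
    path = [(0.0, X0)]
    while True:
        rate = I + (a1 + a2) * x
        t += rng.expovariate(rate)
        if t > t_max:
            break
        if rng.random() * rate < I + a1 * x:   # immigration or birth
            x += 1
        else:                                  # death (has rate 0 at x = 0)
            x -= 1
        path.append((t, x))
    return path

path = simulate_lbid(X0=2, I=1.4, a1=0.08, a2=0.16, t_max=25.0)
```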
Several Realizations of LBID Process
[Figure: several realizations of the LBID process, x(t) versus time, 0 ≤ t ≤ 25.]


• Several More Realizations with the Same Parameter Values:


[Figure: eight additional realizations of the LBID process (sp versus time).]


5 Some Nonlinear One-Population Models

5.1 Nonlinear Birth–Death Models

We now look at population models with nonlinear death rates. Consider


the model with population rates:

λX = a1 X − b1 X^{s+1}   for X < (a1 /b1 )^{1/s},   and λX = 0 otherwise

µX = a2 X + b2 X^{s+1}

We call a1 , a2 the intrinsic rates, and b1 , b2 are the crowding


coefficients that add density dependence to the model. We will look at
the special case with s = 1 which leads to the logistic model.

5.1.1 Deterministic Model

We can write the deterministic model as

Ẋ(t) = aX − bX^{s+1}

where a = a1 − a2 and b = b1 + b2 . This has solution

X(t) = K / [1 + m exp(−ast)]^{1/s}

with K = (a/b)^{1/s} and m = (K/K0 )^s − 1


5.1.2 Probability Distributions for the Stochastic Model

Assume that u = (a1 /b1 )1/s is an integer. We can obtain the system
of u + 1 Kolmogorov differential equations for the probabilities:

ṗ0 (t) = µ1 p1 (t)


ṗ1 (t) = −(λ1 + µ1 )p1 (t) + µ2 p2 (t)
ṗx (t) = λx−1 px−1 (t) − (λx + µx )px (t) + µx+1 px+1 (t),
for x = 2, . . . , u − 1
ṗu (t) = λu−1 pu−1 (t) − µu pu (t)

• Since there are only a finite number of equations, one can obtain
numerical solutions.

• Since u is finite, a population size of 0 is an absorbing state and


ultimate extinction is certain, i.e., p0 (∞) = 1.

• A process is said to be ecologically stable if extinction does not


occur within a realizable time interval. A quantity of interest is the
expected time until extinction, Ex .

• The quasi-equilibrium distribution is based on the idea that the


population in equilibrium would not drift. The probabilities would
satisfy the relationship

µx px (t) = λx−1 px−1 (t) for x = 2, . . . , u


5.1.3 Generating Functions and Cumulants

The intensity functions are

f1 = a1 x − b1 xs+1
f−1 = a2 x + b2 xs+1

For s = 1, the PDE for the pgf has the form

∂P/∂t = (s − 1)(a1 s − a2 ) ∂P (s, t)/∂s + s(1 − s)(b1 s + b2 ) ∂²P (s, t)/∂s² .

This equation is analytically intractable. By substituting e^θ for s we get
the PDE for the mgf, M (θ, t). Letting K = log M , we obtain the equation
for the cgf:

∂K/∂t = [(e^θ − 1)a1 + (e^{−θ} − 1)a2 ] ∂K/∂θ
      + [(e^θ − 1)(−b1 ) + (e^{−θ} − 1)b2 ] [∂²K/∂θ² + (∂K/∂θ)²]

Again, we can obtain differential equations for the cumulants. For s = 1,


the first cumulant is

κ̇1 (t) = (a − bκ1 )κ1 − bκ2


• The differential equation for the j th cumulant depends on cumulants
up to order j + s. This rules out finding exact solutions.


• One proposed approach is to set all cumulants above a certain order


equal to zero and then solve the resulting finite system.

5.1.4 Application to AHB Population Dynamics

The nonlinear birth-death model with similar mean properties to the


earlier models for AHB population dynamics has parameter values:

a1 = 0.30 a2 = 0.02
b1 = 0.015 b2 = 0.001.

The solution to the deterministic model is

X(t) = 17.5 / (1 + 7.75 e^{−0.28t}).
Deterministic Solution of NLBD Model
[Figure: deterministic solution of the NLBD model, X(t) versus time, 0 ≤ t ≤ 100.]


• Simulation of the NLBD Model


1. Compute the birth and death rates: b(x) = a1 x − b1 x² for
x < a1 /b1 (and b(x) = 0 otherwise), and d(x) = a2 x + b2 x² .
2. Compute the time to the next event as an exponential(b(x) + d(x))
random variable.

3. Generate a uniform(0,1) random variable U . If


U < b(x)/(b(x) + d(x)), then the next event is a birth.
Otherwise, it is a death.
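A Python sketch of this procedure for the logistic case s = 1, using the parameter values above; the birth rate is floored at zero, so the population can never pass a1/b1 = 20:

```python
import random

def simulate_nlbd(X0, a1, b1, a2, b2, t_max, seed=3):
    """Simulate the s = 1 (logistic) nonlinear birth-death process;
    0 is absorbing, and births stop above x = a1/b1."""
    rng = random.Random(seed)
    t, x = 0.0, X0
    path = [(0.0, X0)]
    while x > 0:
        b = max(a1 * x - b1 * x * x, 0.0)   # λx, zero beyond the threshold
        d = a2 * x + b2 * x * x             # µx
        t += rng.expovariate(b + d)
        if t > t_max:
            break
        if rng.random() * (b + d) < b:      # next event is a birth
            x += 1
        else:                               # otherwise a death
            x -= 1
        path.append((t, x))
    return path

path = simulate_nlbd(X0=2, a1=0.30, b1=0.015, a2=0.02, b2=0.001, t_max=100.0)
```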

• Some realizations of the NLBD model:


Some Realizations of the NLBD Model
[Figure: some realizations of the NLBD model, X(t) versus time, 0 ≤ t ≤ 100.]


5.2 Nonlinear Birth-Immigration-Death Models

In addition to the assumptions of nonlinear birth and death rates, we


assume that there is a constant immigration rate I .

5.2.1 Deterministic Model

The deterministic model is

Ẋ(t) = I + aX − bX^{s+1}

where a = a1 − a2 and b = b1 + b2 . For s = 1 this has solution

X(t) = { a + β [(1 − δe^{−βt}) / (1 + δe^{−βt})] } / (2b)

where

β = (a² + 4bI)^{1/2}
γ = (2bX0 − a)/β
δ = (1 − γ)/(1 + γ)

The carrying capacity is

K = (a + β)/(2b)


5.2.2 Simulation of the Stochastic Model

• The analysis of the stochastic model can be carried out by


numerically solving the differential equations for the cumulants.

• However, the simulation of this model is still quite simple.


• Replace the birth rate in the simulation procedure for the NLBD
model with b(x) = I + a1 x − b1 x2 . Steps 2 and 3 are the same
as before. The one point requiring care is to check whether the value of
X(t) has risen above the carrying capacity.

5.2.3 Application to AHB Population Dynamics

The parameter values for the NLBID model keeping the same carrying
capacity as before are:

a1 = 0.30 a2 = 0.02
b1 = 0.012 b2 = 0.004816.
I = 0.25.

The solution to the deterministic model with X(0) = 2 is

X(t) = 8.3254 + 9.1750 [(1 − δe^{−βt}) / (1 + δe^{−βt})]

where δ = 5.4364 and β = 0.308571.


5.2.4 Summary of Single Population Models

Properties               LBID                       NBD                            NBID

Deterministic solution   easy                       easy                           difficult

Stochastic model         exact distribution,        exact numerical solutions,     approx. numerical solutions,
                         exact cumulants            easy cumulant approximations,  easy cumulant approx.
                                                    true equilibrium dist.         accurate for low I,
                                                    does not exist                 true equilibrium dist. exists

Advantages               widely used,               also widely used,              includes subtle but
                         mechanistic basis          density-dependent growth       important immigration effect

Limitation               for initial period only    no immigration                 challenging estimation
                                                                                   for immigration


6 Models for Multiple Populations

6.1 Compartmental Models

Compartmental models are widely used in the modeling of drug flow. We


will start by describing a deterministic model for the flow between various
compartments. Define the following quantities:

• Xi (t) = amount of substance in compartment i at time t


• fij (t) = flow rate of substance from j to i at time t. Compartment
0 refers to the system exterior.

• kij (t) = fij (t)/Xj (t) = proportional turnover rate from j to i at


time t.

• Ii (t) = fi0 (t) = flow rate to i from the exterior


• µj (t) = f0j (t)/Xj (t) = turnover rate from j to the exterior

[Figure 1: A general compartmental model. Two compartments i and j, with exterior inflows Ii and Ij , transfer rates kji and kij between them, and exterior turnover rates µi and µj .]


6.1.1 The Deterministic Compartment Model

We will assume for now that all the flow rates are constants. Then the
deterministic model follows the system of differential equations

Ẋ1 (t) = −(µ1 + k21 + · · · + kn1 )X1 + k12 X2 + · · · + k1n Xn + I1


..
.

Ẋn (t) = kn1 X1 + · · · + kn,n−1 Xn−1


− (µn + k1n + · · · + kn−1,n )Xn + In

Define the following matrices:

Ẋ(t) = (Ẋ1 (t), . . . , Ẋn (t))′,   X(t) = (X1 (t), . . . , Xn (t))′,
K = [kij ], the n × n matrix of rate coefficients, and I = (I1 , . . . , In )′.


The deterministic model can be written as

Ẋ(t) = KX(t) + I

The formal solution is

X(t) = exp(Kt)X(0) + ∫_0^t exp[K(t − s)] I ds

6.1.2 Stochastic Compartmental Models

Let

1. Pij (t), i, j = 1, . . . , n; denote the probability that a random


animal starting in i at some arbitrary time, say t = 0, will be in j
after elapsed time t,

2. Xij (t) be the random number of animals starting in i at t = 0 that


are in j at time t,

3. P (t) = [Pij (t)] and X(t) = [Xij (t)] be matrices of


probabilities and counts, respectively,

4. E[X(t)] be the matrix of expected values of X .


5. kij , for i = 1, . . . , n, j = 0, . . . , n, i ≠ j, be a probability
intensity coefficient defined by

Prob{a given animal in i transfers to j in (t, t+∆t)|X(t)} = kij ∆t+o(∆t)


where 0 represents the system exterior,


6. kii = −Σ_{j≠i} kij be the total outflow coefficient,
7. K = (kij ) be the n × n coefficient matrix, and
8. λ1 , . . . , λn be the eigenvalues of K .
Results I.

1. P (t) = exp(Kt)
2. If the λ’s are distinct and real, the Pij (t) elements have the form

Pij (t) = Σℓ Aijℓ exp(λℓ t) for i, j = 1, . . . , n

where the Aijℓ are constants.

3. E[X(t)] = X(0)P (t)


where X(0) is a diagonal matrix of initial counts.

Result II.

If the λ’s are distinct and complex, the Pij (t) have damped oscillations
and may be written as

Pij (t) = Σℓ Aijℓ exp(λℓ t) + Σℓ [Bijℓ sin(θijℓ t) + Dijℓ cos(θijℓ t)] exp(λℓ t)
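For a 2 × 2 coefficient matrix with distinct real eigenvalues, exp(Kt) has a closed spectral form that exhibits exactly the Σℓ Aijℓ exp(λℓ t) structure of Result I; a sketch with an illustrative two-compartment K (all rate values are made up for the example):

```python
import math

def expKt_2x2(K, t):
    """exp(Kt) for a 2×2 K with distinct real eigenvalues λ1, λ2, from the
    spectral form [(K − λ2·Id)e^{λ1 t} − (K − λ1·Id)e^{λ2 t}] / (λ1 − λ2),
    i.e. the Σℓ Aijℓ exp(λℓ t) representation of Result I."""
    (a, b), (c, d) = K
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)     # assumes distinct real eigenvalues
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    e1, e2 = math.exp(l1 * t), math.exp(l2 * t)

    def entry(kij, diag):
        return ((kij - l2 * diag) * e1 - (kij - l1 * diag) * e2) / (l1 - l2)

    return [[entry(a, 1), entry(b, 0)],
            [entry(c, 0), entry(d, 1)]]

# Illustrative two-compartment system (rows index the starting compartment):
# transfer rates k12 = 0.5, k21 = 0.2, exterior turnover µ1 = 0.1, µ2 = 0.3.
K = [[-0.6, 0.5],
     [0.2, -0.5]]
P1 = expKt_2x2(K, 1.0)
```

The result satisfies the semigroup property P(t)P(t) = P(2t), and the row sums stay below 1 because of the exterior losses.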


6.2 Basic Methods for Two-Population Models

Let Xi (t), i = 1, 2 be the random size of population i at time t. Our


goal is to make certain simple assumptions about the population and
then find the joint distribution of

X(t) = [X1 (t), X2 (t)]′

We want to obtain the joint pmf of X as

px1 ,x2 (t) = P [X1 (t) = x1 , X2 (t) = x2 ].

6.2.1 A Birth-Immigration-Death-Migration Model

We will assume the populations can change according to the following


probabilities:

1. P [Xi will increase by 1 due to immigration] = Ii ∆t,


2. P [Xi will increase by 1 due to birth] = λi Xi ∆t,
3. P [Xi will decrease by 1 due to death] = µi Xi ∆t,
4. P [Xi will increase by 1 and Xj will decrease by 1
due to migration] = kij Xj ∆t for i ≠ j,


How do we solve for the distribution of (X1 (t), X2 (t))?

That is, we wish to find the pmf of (X1 (t), X2 (t)):

px1 ,x2 (t) = P [X1 (t) = x1 , X2 (t) = x2 ]

Our approach will be similar to that for single-population models.

• Form the Kolmogorov equations for px1 ,x2 (t).

• Obtain the PDEs for the bivariate pgf:

P (s1 , s2 , t) = Σ_{x1 ,x2} s1^{x1} s2^{x2} px1 ,x2 (t).

• Obtain differential equations for the joint cumulant functions.

• Obtain exact or numerical solutions for the cumulant functions.


6.3 Simulation of Predator-Prey Model

We earlier studied a deterministic predator-prey model where

• X1 (t) = the number of prey at time t


• X2 (t) = the number of predators at time t

Their relationship is driven by the system of differential equations:

Ẋ1 = X1 (r1 − b1 X2 )
Ẋ2 = X2 (−r2 + b2 X1 )

We make the following assumptions to obtain the analogous stochastic


model:

P [X1 (t + ∆t) = x1 + 1|X1 (t) = x1 , X2 (t) = x2 ] = r1 x1 ∆t
P [X1 (t + ∆t) = x1 − 1|X1 (t) = x1 , X2 (t) = x2 ] = b1 x1 x2 ∆t
P [X2 (t + ∆t) = x2 + 1|X1 (t) = x1 , X2 (t) = x2 ] = b2 x1 x2 ∆t
P [X2 (t + ∆t) = x2 − 1|X1 (t) = x1 , X2 (t) = x2 ] = r2 x2 ∆t

This results in a Markov process with birth and death rates for the two
populations given by the terms that are multiplied by ∆t.


We use the following algorithm to generate a realization of the


predator-prey process:

• Compute the birth and death rates:

B1 (x1 , x2 ) = r1 x1
D1 (x1 , x2 ) = b1 x1 x2
B2 (x1 , x2 ) = b2 x1 x2
D2 (x1 , x2 ) = r2 x2

• Compute the intensity until the next event:


R = B1 + D1 + B2 + D2

• Generate the time until the next event:

T ∗ = −ln(U1 )/R

• Decide which event occurs by generating U2


– If U2 < B1 /R, then X1 = x1 + 1
– If B1 /R < U2 < (B1 + D1 )/R, then X1 = x1 − 1
– If (B1 + D1 )/R < U2 < (B1 + D1 + B2 )/R, then
X2 = x2 + 1
– Otherwise, X2 = x2 − 1.
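A Python sketch of this algorithm; the rate constants are illustrative, chosen so the deterministic equilibrium is x1 = r2/b2 = 50 prey and x2 = r1/b1 = 20 predators, and an event cap guards against the unbounded prey growth that follows predator extinction:

```python
import random

def simulate_pred_prey(x1, x2, r1, b1, r2, b2, t_max, max_events=5000, seed=4):
    """Simulate the stochastic predator-prey model by the event algorithm."""
    rng = random.Random(seed)
    t = 0.0
    path = [(0.0, x1, x2)]
    while x1 > 0 and x2 > 0 and len(path) < max_events:
        B1, D1 = r1 * x1, b1 * x1 * x2          # prey birth / prey death
        B2, D2 = b2 * x1 * x2, r2 * x2          # predator birth / predator death
        R = B1 + D1 + B2 + D2
        t += rng.expovariate(R)                 # T* = −ln(U1)/R
        if t > t_max:
            break
        u = rng.random() * R                    # U2, rescaled by R
        if u < B1:
            x1 += 1
        elif u < B1 + D1:
            x1 -= 1
        elif u < B1 + D1 + B2:
            x2 += 1
        else:
            x2 -= 1
        path.append((t, x1, x2))
    return path

path = simulate_pred_prey(x1=50, x2=20, r1=1.0, b1=0.05,
                          r2=0.5, b2=0.01, t_max=10.0)
```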


6.4 Simulation of a Competition Model

We earlier studied a deterministic competition model where

• X1 (t) = the number of individuals of species 1 at time t


• X2 (t) = the number of individuals of species 2 at time t

Their relationship is driven by the system of differential equations:

Ẋ1 = X1 (r1 − s11 X1 − s12 X2 )


Ẋ2 = X2 (r2 − s21 X1 − s22 X2 )

We make the following assumptions to obtain the analogous stochastic


model:

P [X1 (t + ∆t) = x1 + 1|X1 (t) = x1 , X2 (t) = x2 ] = r1 x1 ∆t
P [X1 (t + ∆t) = x1 − 1|X1 (t) = x1 , X2 (t) = x2 ] = x1 (s11 x1 + s12 x2 )∆t
P [X2 (t + ∆t) = x2 + 1|X1 (t) = x1 , X2 (t) = x2 ] = r2 x2 ∆t
P [X2 (t + ∆t) = x2 − 1|X1 (t) = x1 , X2 (t) = x2 ] = x2 (s21 x1 + s22 x2 )∆t

This results in a Markov process with birth and death rates for the two
populations given by the terms that are multiplied by ∆t.


We use the following algorithm to generate a realization of the


two-species competition process:

• Compute the birth and death rates:

B1 (x1 , x2 ) = r1 x1
D1 (x1 , x2 ) = x1 (s11 x1 + s12 x2 )
B2 (x1 , x2 ) = r2 x2
D2 (x1 , x2 ) = x2 (s21 x1 + s22 x2 )

• Compute the intensity until the next event:


R = B1 + D1 + B2 + D2

• Generate the time until the next event:

T ∗ = −ln(U1 )/R

• Decide which event occurs by generating U2


– If U2 < B1 /R, then X1 = x1 + 1
– If B1 /R < U2 < (B1 + D1 )/R, then X1 = x1 − 1
– If (B1 + D1 )/R < U2 < (B1 + D1 + B2 )/R, then
X2 = x2 + 1
– Otherwise, X2 = x2 − 1.
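The same event-driven scheme works for the competition model; a sketch with illustrative rate constants (the quadratic death rates keep both populations stochastically bounded):

```python
import random

def simulate_competition(x1, x2, r1, r2, s11, s12, s21, s22, t_max,
                         max_events=20000, seed=5):
    """Simulate the two-species stochastic competition model."""
    rng = random.Random(seed)
    t = 0.0
    path = [(0.0, x1, x2)]
    while x1 + x2 > 0 and len(path) < max_events:
        B1, D1 = r1 * x1, x1 * (s11 * x1 + s12 * x2)
        B2, D2 = r2 * x2, x2 * (s21 * x1 + s22 * x2)
        R = B1 + D1 + B2 + D2
        t += rng.expovariate(R)
        if t > t_max:
            break
        u = rng.random() * R
        if u < B1:
            x1 += 1
        elif u < B1 + D1:
            x1 -= 1
        elif u < B1 + D1 + B2:
            x2 += 1
        else:
            x2 -= 1
        path.append((t, x1, x2))
    return path

path = simulate_competition(x1=10, x2=10, r1=0.5, r2=0.4, s11=0.01,
                            s12=0.005, s21=0.006, s22=0.01, t_max=20.0)
```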
