Stochastic Population Models
Thomas E. Wehrly
Department of Statistics
Texas A&M University
Mathematics 669
Contents

1 Probability and Random Variables
  1.1 The Basic Ideas of Probability
    1.1.1 Sample Spaces and Events
    1.1.2 Conditional Probability
    1.1.3 Independence
  1.2 Random Variables
  1.3 Probability Distributions of a Discrete R.V.
    1.3.1 Parameters of Probability Distributions
    1.3.2 Expected Values of a Discrete R.V.
  1.4 Continuous Random Variables
    1.4.1 Percentiles
    1.4.2 Expected Values, Mean and Variance
  1.5 Joint Probability Distributions
    1.5.1 Conditional Distributions
    1.5.2 Conditional Distributions for Bivariate Continuous RVs
  1.6 Some Special Cases
    1.6.1 Poisson Distribution
    1.6.2 The Poisson Process
    1.6.3 Normal Distribution
  1.7 Gamma Distribution
    1.7.1 Distribution of Elapsed Time in the Poisson Process
The models that you have seen thus far are deterministic models. For
any time t, there is a unique solution X(t). On the other hand,
stochastic models result in a distribution of possible values X(t) at a
time t. To understand the properties of stochastic models, we need to
use the language of probability and random variables.
A random experiment is carried out a large number (n) of times and the
number (n(A)) of times that event A occurs is recorded. Then the
proportion of times that A occurs will tend to the probability of A:
n(A)/n −→ P(A)

For example, in a simulated experiment:

n       n(A)    n(A)/n
10      2       0.20000
100     23      0.23000
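This limiting behavior is easy to see in simulation. A minimal sketch, where the event A (a fair die showing a six, so P(A) = 1/6) is an illustrative choice rather than an example from the notes:

```python
import random

random.seed(1)

# Event A: a fair die shows a six, so P(A) = 1/6 (an illustrative choice,
# not an example from the notes).
def relative_frequency(n):
    """Run the experiment n times and return n(A)/n."""
    hits = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    return hits / n

for n in (10, 100, 1000, 10000):
    print(n, relative_frequency(n))   # tends toward 1/6 as n grows
```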
Axioms: for mutually exclusive events A1, A2, . . .,

P(A1 ∪ A2 ∪ · · ·) = Σ_{i=1}^∞ P(Ai)
Properties:
• 0 ≤ P (A) ≤ 1
• P (∅) = 0
• P (A ∪ B) = P (A) + P (B) − P (A ∩ B)
For any two events A and B with P (B) > 0 the conditional probability
of A given that B has occurred:
P(A|B) = P(A ∩ B) / P(B)
P (A ∩ B) = P (A|B)P (B)
P (A ∩ B) = P (B|A)P (A)
A1 ∪ A2 ∪ · · · ∪ An = S.
Assume also that P (Aj ) > 0 for each j . Then for any event B ,
P(B) = Σ_{j=1}^n P(B|Aj)P(Aj)

Bayes' Theorem:

P(Ak|B) = P(B|Ak)P(Ak) / Σ_{j=1}^n P(B|Aj)P(Aj)
Solution:

P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A′)P(A′)]
       = (0.95)(0.001) / [(0.95)(0.001) + (1 − 0.90)(1 − 0.001)]
       ≈ 0.0094

P(A′|B′) = P(B′|A′)P(A′) / [P(B′|A′)P(A′) + P(B′|A)P(A)]
         = (0.90)(0.999) / [(0.90)(0.999) + (1 − 0.95)(0.001)]
         ≈ 0.9999444
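The arithmetic above can be checked directly; a short sketch of the same calculation, with the sensitivity, specificity, and prevalence values taken from the example:

```python
# Numerical check of the diagnostic-test example: sensitivity P(B|A) = 0.95,
# specificity P(B'|A') = 0.90, prevalence P(A) = 0.001.
p_A = 0.001
sens = 0.95   # P(B | A)
spec = 0.90   # P(B' | A')

p_B = sens * p_A + (1 - spec) * (1 - p_A)          # total probability
p_A_given_B = sens * p_A / p_B                     # Bayes' theorem
p_notA_given_notB = spec * (1 - p_A) / (spec * (1 - p_A) + (1 - sens) * p_A)

print(round(p_A_given_B, 4))        # 0.0094
print(round(p_notA_given_notB, 7))  # 0.9999444
```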
1.1.3 Independence
P (A ∩ B) = P (A)P (B).
P [X ∈ A] = P [X −1 (A)].
p(x) = P (X = x)
Ax = {s ∈ S : X(s) = x}.
x       x1      x2      x3      ...     xk
p(x)    p(x1)   p(x2)   p(x3)   ...     p(xk)

• p(x) ≥ 0 for all x
• Σ_{all xi} p(xi) = 1
{p(x; α) : α ∈ A}
• Mean of a discrete RV
The mean of a rv X is
E[X] = µ = Σ_{x∈D} x · p(x)

More generally, for a function h,

E[h(X)] = µ_h(X) = Σ_{x∈D} h(x) · p(x)
E(aX + b) = aE(X) + b
V(X) = σ² = σ²_X = E[(X − µ)²]

σ = σ_X = √V(X) = √σ² = SD(X)

V(aX + b) = a²V(X) = a²σ²
• Implications:
– V (aX) = a2 V (X)
– SD(aX) = |a|SD(X)
– V (X + b) = V (X)
– SD(X + b) = SD(X)
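These implications can be verified numerically. A minimal sketch with a hypothetical three-point pmf (the distribution and the constants a, b are illustrative, not from the notes):

```python
# A hypothetical three-point pmf (values are illustrative, not from the notes).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def mean(p):
    return sum(x * px for x, px in p.items())

def var(p):
    mu = mean(p)
    return sum((x - mu) ** 2 * px for x, px in p.items())

a, b = 3, 7
# Y = aX + b puts the same probabilities on the shifted/scaled support.
pmf_y = {a * x + b: px for x, px in pmf.items()}

print(mean(pmf), var(pmf))
# E(aX + b) = aE(X) + b and V(aX + b) = a^2 V(X):
assert abs(mean(pmf_y) - (a * mean(pmf) + b)) < 1e-12
assert abs(var(pmf_y) - a ** 2 * var(pmf)) < 1e-12
```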
Example of a PDF

[Figure: an example pdf f(x) (left panel) and the corresponding cdf F(x) (right panel), plotted for 0 ≤ x ≤ 2.5.]
Useful Properties:
• P (a ≤ X ≤ b) = F (b) − F (a)
• If X is a continuous RV with pdf f (x) and cdf F (x), then at every
x at which F 0 (x) exists:
F 0 (x) = f (x)
1.4.1 Percentiles
Variance: E[(X − µ)²] = σ² = ∫_{−∞}^{∞} (x − µ)² · f(x) dx
F (x, y) = P [X ≤ x, Y ≤ y].
P[(X, Y) ∈ A] = Σ_{(x,y)∈A} p(x, y)
We say that (X, Y ) are jointly continuous rvs if there exists a function
called the joint pdf such that
P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy
E[h(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(x, y) f(x, y) dx dy   if X, Y are continuous

E[h(X, Y)] = Σ_x Σ_y h(x, y) p(x, y)   if X, Y are discrete
P [X ∈ A, Y ∈ B] = P [X ∈ A] × P [Y ∈ B]
f_{Y|x0}(y) = f_{XY}(x0, y) / f_X(x0),   for f_X(x0) > 0.

f_{X|y0}(x) = f_{XY}(x, y0) / f_Y(y0),   for f_Y(y0) > 0.
• E(Y|x) = µ_{Y|x} = Σ_y y f_{Y|x}(y)
• V(Y|x) = σ²_{Y|x} = Σ_y (y − µ_{Y|x})² f_{Y|x}(y)
Given that (X, Y ) are continuous rvs with pdf fXY (x, y), the
conditional pdf of Y given that X = x0 is
f_{Y|x0}(y) = f_{XY}(x0, y) / f_X(x0)   for f_X(x0) > 0.

Properties:
• f_{Y|x0}(y) ≥ 0
• ∫_{−∞}^{∞} f_{Y|x0}(y) dy = 1
• P(Y ∈ B|X = x0) = ∫_B f_{Y|x0}(y) dy
• E(Y|X = x0) = µ_{Y|x0} = ∫_{−∞}^{∞} y f_{Y|x0}(y) dy
• V(Y|X = x0) = σ²_{Y|x0} = ∫_{−∞}^{∞} (y − µ_{Y|x0})² f_{Y|x0}(y) dy
Remark: The conditional mean E(Y |X = x0 ) is known as the
regression function of Y on x.
X ∼ Poisson(λ)
where λ is the rate per unit time or rate per unit area.
p(x; λ) = P(X = x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, . . . ,   λ > 0

The mean and variance of a Poisson random variable are

E[X] = µ = λ
V[X] = σ² = λ
P_x(t) = P(X(t) = x) = e^{−λt} (λt)^x / x!,   x = 0, 1, 2, . . .
The process {X(t), t ≥ 0} is called a Poisson process.
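One way to see the connection between exponential interarrival times and Poisson counts is by simulation; a sketch (the values of λ and t and the function name are illustrative choices):

```python
import random

random.seed(2)

def poisson_count(lam, t):
    """Count events in [0, t] when interarrival times are exponential(lam)."""
    n, clock = 0, random.expovariate(lam)
    while clock <= t:
        n += 1
        clock += random.expovariate(lam)
    return n

lam, t = 1.0, 5.0
counts = [poisson_count(lam, t) for _ in range(20000)]
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
print(m, v)   # both should be close to lam * t = 5
```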
Outline of Proof
Consider
Then
[P0(t + h) − P0(t)] / h = −λP0(t) + o(h)/h.

Let h → 0 and obtain P0′(t) = −λP0(t). Together with P0(0) = 1, this implies that

P0(t) = e^{−λt}.
For x ≥ 1,

P_x(t) = e^{−λt} (λt)^x / x!,   x = 1, 2, . . . .
Figures: The first figure on the next page illustrates the variation in a
Poisson process with λ = 1 for various times. The red bars represent
the pmf of the Poisson process at t = 1, 2, . . . , 10.
[Figure: Poisson process with λ = 1; pmf bars at t = 1, 2, . . . , 10 (top panel) and a simulated count path X(t) for 0 ≤ t ≤ 100 (bottom panel).]
f(x; µ, σ) = (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)},   −∞ < x < ∞

E(X) = µ
V(X) = σ²

X ∼ N(µ, σ²)
[Figure: normal densities for µ = −2, 0, 2 (top panel) and for σ = 0.5, 1, 2 (bottom panel), plotted on −4 ≤ x ≤ 4.]
Using the properties of the gamma function, we obtain the pdf of the
gamma (α, β) distribution:
f(x; α, β) = (1 / (β^α Γ(α))) x^{α−1} e^{−x/β},   x ≥ 0,   α > 0,   β > 0

E(X) = αβ
V(X) = αβ²
[Figure: gamma densities for α = 0.5, 1, 2, 3, 4, 5 (top panel) and for β = 0.5, 1, 2, 4, 8 (bottom panel), plotted on 0 ≤ x ≤ 10.]
Sn ≤ t ⇔ X(t) ≥ n.
Hence,
P[Sn ≤ t] = P[X(t) ≥ n] = Σ_{j=n}^∞ e^{−λt} (λt)^j / j!.

Differentiating with respect to t yields the density of Sn:

f(t) = (λ^n / (n − 1)!) t^{n−1} e^{−λt},   t > 0.
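The identity P[Sn ≤ t] = P[X(t) ≥ n] can be checked numerically; a sketch that integrates the gamma density by the trapezoidal rule and compares it with the Poisson tail (the values of λ, n, and t are illustrative):

```python
import math

# lam, n, and t are illustrative values.
lam, n, t = 2.0, 3, 1.7

def erlang_pdf(s):
    """f(s) = lam^n / (n-1)! * s^(n-1) * exp(-lam*s), the density of S_n."""
    return lam ** n / math.factorial(n - 1) * s ** (n - 1) * math.exp(-lam * s)

# Trapezoidal integration of the density over [0, t] gives P[S_n <= t].
m = 10000
h = t / m
cdf = h * ((erlang_pdf(0.0) + erlang_pdf(t)) / 2
           + sum(erlang_pdf(i * h) for i in range(1, m)))

# Poisson tail gives P[X(t) >= n].
tail = 1 - sum(math.exp(-lam * t) * (lam * t) ** j / math.factorial(j)
               for j in range(n))
print(cdf, tail)   # the two probabilities agree
```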
dX(t)/dt = (a1 − a2)X(t),
with solution
p0 (t) = α(t)
px (t) = [1 − α(t)][1 − β(t)][β(t)]x−1 , x = 1, 2, . . .
where
α(t) = a2(e^{(a1−a2)t} − 1) / (a1 e^{(a1−a2)t} − a2)

β(t) = a1(e^{(a1−a2)t} − 1) / (a1 e^{(a1−a2)t} − a2)
Standard results for the geometric distribution yield the mean and
variance functions for the population size:
• The mean function agrees with the deterministic model, however the
variance function depends on the magnitudes of the birth and death
rates as well as on their difference.
p0(t) = [α(t)]^{X0}
• Simulation with a1 = 5, a2 = 1, X0 = 10
[Figure: eight simulated sample paths of the linear birth–death process with a1 = 5, a2 = 1, X0 = 10; the population grows rapidly, reaching sizes of about 60–80 by time t ≈ 0.5.]
• Simulation with a1 = 1, a2 = 5, X0 = 10
[Figure: eight simulated sample paths with a1 = 1, a2 = 5, X0 = 10; every path declines to extinction by about t = 2.]
• Simulation with a1 = 5, a2 = 5, X0 = 10
[Figure: eight simulated sample paths with a1 = 5, a2 = 5, X0 = 10; the paths fluctuate around the initial size, with some drifting toward extinction.]
• Simulation with a1 = 5, a2 = 4, X0 = 5
[Figure: eight simulated sample paths with a1 = 5, a2 = 4, X0 = 5; the paths vary widely, with some growing and others going extinct.]
Basic notation:
Goal: Solve for p(t) for any t > 0 based on simple assumptions
concerning X(t).
Once we know p(t), we (in theory) know all the properties of X(t).
We will show later that this results in X(t) being a Poisson random
variable with parameter
Ẋ(t) = I − aX(t).
µ1(t) = E[X(t)] = Σ_{x=0}^∞ x p_x(t)

µi(t) = E[(X(t))^i] = Σ_{x=0}^∞ x^i p_x(t)
p(x; λ) = P(X = x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, . . . ,   λ > 0

E[X] = Σ_{x=0}^∞ x e^{−λ} λ^x / x!
     = Σ_{x=1}^∞ e^{−λ} λ^x / (x − 1)!
     = λ Σ_{x=1}^∞ e^{−λ} λ^{x−1} / (x − 1)!
     = λ Σ_{y=0}^∞ e^{−λ} λ^y / y!
     = λ
1. Set t0 = 0 and X(0) = 0.
2. Generate tI from an exp(I) distribution. Set t1 = tI and X(t1) = 1.
3. If X(ti−1) = 0, generate tI from an exp(I) distribution. Set ti = ti−1 + tI and X(ti) = 1.
4. Otherwise, generate tI from an exp(I) distribution and tD from an exp(aX(ti−1)) distribution.
   – If tI < tD, set ti = ti−1 + tI and X(ti) = X(ti−1) + 1.
   – If tI > tD, set ti = ti−1 + tD and X(ti) = X(ti−1) − 1.
5. Increment i and return to Step 3.
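A sketch of this algorithm in Python (the function name simulate_id and the parameter values are illustrative assumptions):

```python
import random

random.seed(3)

def simulate_id(I, a, t_max):
    """Immigration-death process via the algorithm above: competing
    exponential immigration (rate I) and death (rate a * x) clocks."""
    t, x = 0.0, 0
    path = [(t, x)]
    while t < t_max:
        t_im = random.expovariate(I)
        if x == 0:
            t, x = t + t_im, 1           # only immigration is possible
        else:
            t_d = random.expovariate(a * x)
            if t_im < t_d:
                t, x = t + t_im, x + 1   # immigration wins
            else:
                t, x = t + t_d, x - 1    # death wins
        path.append((t, x))
    return path

path = simulate_id(I=1.4, a=0.08, t_max=50.0)
print(path[-1])
```

In the long run the simulated population fluctuates around the equilibrium level I/a.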
[Figure: a simulated sample path of the immigration–death process for 0 ≤ t ≤ 50.]
Suppose that X(t + ∆t) = x. There are the following possibilities for
the way this could occur starting at time t:
ṗx (t) = Ipx−1 (t) − (I + ax)px (t) + a(x + 1)px+1 (t) for x > 0
and
ṗ0 (t) = −Ip0 (t) + ap1 (t)
ṗ(t) = p(t)R,
where R is a tridiagonal matrix. For our immigration-death model, the R
matrix is infinite with elements for i, j ≥0
ri,i+1 = I
ri,i−1 = ai
ri,i = −(ai + I)
ri,j = 0   for |i − j| > 1.
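A numerical sketch of solving ṗ(t) = p(t)R: truncate the infinite state space at a level N and integrate by Euler steps (the truncation level and step size are illustrative numerical choices, and the Poisson-mean formula used as a check is a standard result assumed here):

```python
import math

# Truncation level N and Euler step dt are illustrative numerical choices.
I, a, N = 1.4, 0.08, 80

# Build the truncated tridiagonal rate matrix R (row sums are zero).
R = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    if i < N:
        R[i][i + 1] = I              # immigration: i -> i + 1
    if i > 0:
        R[i][i - 1] = a * i          # death: i -> i - 1
    R[i][i] = -(I if i < N else 0.0) - a * i

# Euler integration of pdot(t) = p(t) R starting from X(0) = 0.
p = [1.0] + [0.0] * N
dt, T = 0.002, 5.0
for _ in range(int(T / dt)):
    p = [p[j] + dt * sum(p[i] * R[i][j]
                         for i in range(max(0, j - 1), min(N, j + 1) + 1))
         for j in range(N + 1)]

mean = sum(j * pj for j, pj in enumerate(p))
# Standard result (assumed here): X(t) is Poisson with mean (I/a)(1 - e^{-at}).
exact = (I / a) * (1 - math.exp(-a * T))
print(mean, exact)
```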
Generating functions are useful tools for finding the population size
distribution and moments of this distribution.
P(s, t) = Σ_{x=0}^∞ s^x p_x(t).

M(θ, t) = Σ_{x=0}^∞ e^{θx} p_x(t).
Thus, one can find the ith moment of X(t) by differentiating the mgf i times with respect to θ and evaluating the derivative at θ = 0.
The pmf is

p_x(t) = e^{−λ(t)} λ(t)^x / x!,   x = 0, 1, 2, . . .
• The pgf is

P(s, t) = Σ_{x=0}^∞ s^x e^{−λ(t)} λ(t)^x / x! = e^{−λ(t)} Σ_{x=0}^∞ [sλ(t)]^x / x! = e^{(s−1)λ(t)}
• The mgf is

M(θ, t) = P(e^θ, t) = e^{(e^θ − 1)λ(t)}

• The cgf is

K(θ, t) = log M(θ, t) = (e^θ − 1)λ(t)
For many models it is more practical to form a system of PDEs for the
generating functions rather than for the probabilities. We multiply the
expression for ṗx (t) by sx and sum over x:
Σ_x s^x ṗ_x = I Σ_x s^x p_{x−1} − Σ_x (I + ax) s^x p_x + a Σ_x (x + 1) s^x p_{x+1}
The left hand side is ∂P (s, t)/∂t. Using the additional result that
∂P(s, t)/∂s = Σ_x x s^{x−1} p_x(t)
we obtain
∂P(s, t)/∂t = I(s − 1)P(s, t) + a(1 − s) ∂P(s, t)/∂s
For our immigration death model, the possible changes (or intensity
functions) are
f1 = I and f−1 = ax
Bailey provides the following operator equations for the pgf and mgf:

∂P/∂t = Σ_{j≠0} (s^j − 1) f_j(s ∂/∂s) P(s, t)

∂M/∂t = Σ_{j≠0} (e^{jθ} − 1) f_j(∂/∂θ) M(θ, t)
Thus,

f1(s ∂/∂s) P(s, t) = I s^0 ∂^0P/∂s^0 = I P(s, t)

f−1(s ∂/∂s) P(s, t) = a s ∂P/∂s

Hence,

∂P(s, t)/∂t = I(s − 1)P(s, t) + (s^{−1} − 1) a s ∂P(s, t)/∂s
Also,

f1(∂/∂θ) M(θ, t) = I ∂^0M/∂θ^0 = I M(θ, t)

f−1(∂/∂θ) M(θ, t) = a ∂M/∂θ

We end up with

∂M/∂t = I(e^θ − 1)M + a(e^{−θ} − 1) ∂M/∂θ.
With the boundary condition M (θ, 0) = 1, the solution is
We can often find simpler PDEs for the cgf and use this to find ODEs for
the cumulants:
∂K/∂t = I(e^θ − 1) + a(e^{−θ} − 1) ∂K/∂θ.
2. Use the operator equations to obtain the PDEs for the moment
generating function.
We consider models for a population of size X(t) with linear death rate
µX = aX
Ẋ(t) = −aX + I.
I = 1.4 colonies/time
a = 0.08 (time−1 )
X(0) = 2 colonies
The deterministic solution is

X(t) = I/a + (X0 − I/a)e^{−at} = 17.5 − 15.5 e^{−0.08t}

[Figure: the deterministic solution plotted for 0 ≤ t ≤ 60.]
∂P(s, t)/∂t = I(s − 1)P(s, t) + a(1 − s) ∂P(s, t)/∂s
The pgf (or mgf) can be used to determine various properties of the
probability distribution of X(t).
where X1 (t) and X2 (t) are independent random variables with the
above binomial and Poisson distributions, respectively.
M (θ, t) = P (eθ , t)
I = 1.4 colonies/time
a = 0.08 (time−1 )
X(0) = 2 colonies
The deterministic solution is X(t) = 17.5 − 15.5 e^{−0.08t}.

[Figure: realizations of the process together with the deterministic solution, 0 ≤ t ≤ 60.]
[Figure: eight simulated sample paths of the immigration–death process over 0 ≤ t ≤ 80; the paths fluctuate around the equilibrium level I/a = 17.5.]
We now consider a process that has a linear birth rate in addition to the
linear death rate:
λX = a1 X and µX = a2 X
The Kolmogorov forward equations are

ṗ_x(t) = [I + a1(x − 1)] p_{x−1}(t) − [I + (a1 + a2)x] p_x(t) + a2(x + 1) p_{x+1}(t)
ri,i+1 = I + ia1
ri,i−1 = ia2
ri,i = −I − i(a1 + a2)
ri,j = 0   for |i − j| > 1.
π1 = π0 (I/a2)
π2 = π0 I(I + a1)/(2a2²)
⋮
πi = π0 (a1/a2)^i · binom(i − 1 + I/a1, i)
If a1 < a2, we can solve for π0 by summing and setting the sum equal to 1:

π0 = (−a/a2)^{I/a1},   where a = a1 − a2.
We find that the distribution of X* is the negative binomial distribution with pmf

πi = binom(k − 1 + i, i) p^k (1 − p)^i

where k = I/a1 and p = −a/a2.
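This negative binomial form can be verified numerically against the birth–death balance recursion π_{i+1}(i + 1)a2 = π_i(I + i a1); a sketch with hypothetical rates satisfying a1 < a2:

```python
import math

# Hypothetical rates with a1 < a2 so the stationary distribution exists.
I, a1, a2 = 1.0, 0.5, 1.0
k, p = I / a1, 1 - a1 / a2      # k = I/a1 and p = -a/a2 with a = a1 - a2

# Stationary probabilities from the balance recursion
# pi_{i+1} (i+1) a2 = pi_i (I + i a1), followed by normalization.
pi = [1.0]
for i in range(200):
    pi.append(pi[-1] * (I + i * a1) / ((i + 1) * a2))
total = sum(pi)
pi = [v / total for v in pi]

def negbin(i):
    """binom(k-1+i, i) p^k (1-p)^i via the gamma function (k need not be an integer)."""
    c = math.gamma(k + i) / (math.gamma(k) * math.factorial(i))
    return c * p ** k * (1 - p) ** i

print(max(abs(pi[i] - negbin(i)) for i in range(50)))   # essentially zero
```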
f1 = I + a1 x and f−1 = a2 x.
∂P/∂t = I(s − 1)P(s, t) + [a1 s(s − 1) + a2(1 − s)] ∂P(s, t)/∂s.
The analytical solution is

p0(t) = P(0, t) = a^{I/a1} (a2 e^{at} − a2)^{X0} (a1 e^{at} − a2)^{−X0 − I/a1}.
∂K/∂t = I(e^θ − 1) + {a1(e^θ − 1) + a2(e^{−θ} − 1)} ∂K/∂θ.
Using a series expansion, we obtain ODEs for the first three cumulants:
I = 1.4 a1 = 0.08
X(0) = 2 a2 = 0.16
Since the net growth rate is a = a1 − a2 = −0.08, the solution to the deterministic model is the same as for the linear death–immigration process with death rate 0.08.
[Figure: variance functions of the LID model and the LBID model over 0 ≤ t ≤ 60; the LBID variance is substantially larger even though the mean functions agree.]
• The times until the next event (birth or death) have an exponential distribution with parameter (a1 + a2)X(t).
• The next event will be a birth with probability a1/(a1 + a2) and a death with probability a2/(a1 + a2).
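A sketch of this competing-clocks scheme (function and variable names are illustrative; averaging the state at time t over many runs should track the deterministic mean X0 e^{(a1−a2)t}):

```python
import random

random.seed(4)

def simulate_lbd(a1, a2, x0, t_max):
    """Linear birth-death: wait an exp((a1 + a2) * x) time, then record a
    birth with probability a1/(a1 + a2), otherwise a death."""
    a = a1 + a2
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max and x > 0:           # x = 0 is absorbing
        t += random.expovariate(a * x)
        x += 1 if random.random() < a1 / a else -1
        path.append((t, x))
    return path

def value_at(path, t):
    """State of the recorded path at time t."""
    x = path[0][1]
    for ti, xi in path:
        if ti > t:
            break
        x = xi
    return x

path = simulate_lbd(a1=5.0, a2=1.0, x0=10, t_max=0.5)
print(value_at(path, 0.5))
```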
[Figure: simulated sample paths of the linear birth–death process, shown in several panels over 0 ≤ t ≤ 30.]
λX = a1 X − b1 X^{s+1}   for X < (a1/b1)^{1/s},   and λX = 0 otherwise

µX = a2 X + b2 X^{s+1}
Ẋ(t) = aX − bX^{s+1}

where a = a1 − a2 and b = b1 + b2. This has solution

X(t) = K / [1 + m exp(−ast)]^{1/s}

with

K = (a/b)^{1/s}   and   m = (K/K0)^s − 1,   where K0 = X(0).
Assume that u = (a1 /b1 )1/s is an integer. We can obtain the system
of u + 1 Kolmogorov differential equations for the probabilities:
• Since there are only a finite number of equations, one can obtain
numerical solutions.
f1 = a1 x − b1 xs+1
f−1 = a2 x + b2 xs+1
∂P/∂t = (s − 1)(a1 s − a2) ∂P(s, t)/∂s + s(s − 1)(b1 s + b2) ∂²P(s, t)/∂s².
∂K/∂t = [(e^θ − 1)a1 + (e^{−θ} − 1)a2] ∂K/∂θ
      + [(e^θ − 1)(−b1) + (e^{−θ} − 1)b2] [∂²K/∂θ² + (∂K/∂θ)²]
a1 = 0.30 a2 = 0.02
b1 = 0.015 b2 = 0.001.
X(t) = 17.5 / (1 + 7.75 e^{−0.28t}).
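A quick numeric check that this logistic form solves Ẋ = aX − bX² with a = a1 − a2 = 0.28, b = b1 + b2 = 0.016, and X(0) = 2 (the Euler step size is an illustrative choice):

```python
import math

# a = a1 - a2 = 0.28, b = b1 + b2 = 0.016, X(0) = 2 (values from the notes);
# the Euler step size is an illustrative numerical choice.
a, b, x0 = 0.28, 0.016, 2.0

def closed_form(t):
    return 17.5 / (1 + 7.75 * math.exp(-0.28 * t))

x, dt = x0, 0.0005
for _ in range(int(30 / dt)):
    x += dt * (a * x - b * x * x)     # Euler step of Xdot = aX - bX^2

print(x, closed_form(30.0))           # both approach K = a/b = 17.5
```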
Deterministic Solution of NLBD Model

[Figure: X(t) plotted over 0 ≤ t ≤ 100, rising from X(0) = 2 toward the carrying capacity 17.5.]
Ẋ(t) = I + aX − bX^{s+1}

where a = a1 − a2 and b = b1 + b2. This has solution for s = 1:
X(t) = { a + β [ (1 − δe^{−βt}) / (1 + δe^{−βt}) ] } / (2b)

where

β = (a² + 4bI)^{1/2}
γ = (2bX0 − a)/β
δ = (1 − γ)/(1 + γ)
The carrying capacity is

K = (a + β)/(2b)
The parameter values for the NLBID model keeping the same carrying
capacity as before are:
a1 = 0.30 a2 = 0.02
b1 = 0.012 b2 = 0.004816.
I = 0.25.
X(t) = 8.3254 + 9.1749 [ (1 − δe^{−βt}) / (1 + δe^{−βt}) ]

where δ = 5.4364 and β = 0.308571.
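A numeric check of this solution against direct integration of Ẋ = I + aX − bX²; the constants a/(2b) ≈ 8.3254 and β/(2b) ≈ 9.1749 are recomputed from the parameter values rather than taken on faith:

```python
import math

# a = a1 - a2 = 0.28, b = b1 + b2 = 0.016816, I = 0.25, X(0) = 2
# (parameter values from the notes).
a, b, I, x0 = 0.28, 0.016816, 0.25, 2.0
beta = math.sqrt(a * a + 4 * b * I)
gamma = (2 * b * x0 - a) / beta
delta = (1 - gamma) / (1 + gamma)

def closed_form(t):
    r = (1 - delta * math.exp(-beta * t)) / (1 + delta * math.exp(-beta * t))
    return (a + beta * r) / (2 * b)

x, dt = x0, 0.0005                    # Euler integration of Xdot = I + aX - bX^2
for _ in range(int(40 / dt)):
    x += dt * (I + a * x - b * x * x)

print(a / (2 * b), beta / (2 * b))    # approximately 8.3254 and 9.1749
print(x, closed_form(40.0))
```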
[Diagram: two compartments i and j, with external inflows Ii and Ij, exchange rates kij and kji between the compartments, and outflows µi and µj.]
We will assume for now that all the flow rates are constants. Then the deterministic model follows the system of differential equations

Ẋ(t) = (Ẋ1(t), . . . , Ẋn(t))ᵀ,   X(t) = (X1(t), . . . , Xn(t))ᵀ,

K = (kij), an n × n matrix,   I = (I1, . . . , In)ᵀ,

Ẋ(t) = KX(t) + I
X(t) = exp(Kt)X(0) + ∫₀ᵗ exp[K(t − s)] I ds
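A numerical sketch of this solution formula for a hypothetical two-compartment system (the rate values are illustrative; the matrix exponential is computed by a Taylor series, adequate for this small, stable K, and the result is checked against direct Euler integration):

```python
# A numeric sketch (hypothetical 2-compartment rates, not from the notes) of
# X(t) = exp(Kt) X(0) + int_0^t exp[K(t-s)] I ds, checked against a direct
# Euler integration of Xdot = K X + I.
K = [[-0.5, 0.2],
     [0.3, -0.4]]
I_vec = [1.0, 0.5]
X0 = [2.0, 0.0]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def expm(A, t, terms=40):
    """Taylor series for exp(At); adequate here since ||At|| is small."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    for n in range(1, terms):
        term = mat_mul(term, [[At[i][j] / n for j in range(2)] for i in range(2)])
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return E

t_end = 5.0
Xh = mat_vec(expm(K, t_end), X0)                  # homogeneous part
m, h = 500, t_end / 500                           # trapezoidal quadrature
Xp = [0.0, 0.0]
for j in range(m + 1):
    w = h if 0 < j < m else h / 2
    v = mat_vec(expm(K, t_end - j * h), I_vec)
    Xp = [Xp[i] + w * v[i] for i in range(2)]
X_formula = [Xh[i] + Xp[i] for i in range(2)]

x = X0[:]                                         # Euler check
dt = 0.001
for _ in range(int(t_end / dt)):
    kx = mat_vec(K, x)
    x = [x[i] + dt * (kx[i] + I_vec[i]) for i in range(2)]
print(X_formula, x)
```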
Let

1. P(t) = exp(Kt)

2. If the λ's (the eigenvalues of K) are distinct and real, the Pij(t) elements have the form

Pij(t) = Σ_ℓ Aijℓ exp(λℓ t)   for i, j = 1, . . . , n
Result II.
If the λ's are distinct and complex, the Pij(t) have damped oscillations and may be written as

Pij(t) = Σ_ℓ Aijℓ exp(λℓ t) + Σ_ℓ [Bijℓ sin(θijℓ t) + Dijℓ cos(θijℓ t)] exp(λℓ t)
P(s1, s2, t) = Σ_{x1,x2} s1^{x1} s2^{x2} p_{x1,x2}(t).
Ẋ1 = X1 (r1 − b1 X2 )
Ẋ2 = X2 (−r2 + b2 X1 )
This results in a Markov process with birth and death rates for the two
populations given by the terms that are multiplied by ∆t.
B1(x1, x2) = r1 x1
D1(x1, x2) = b1 x1 x2
B2(x1, x2) = b2 x1 x2
D2(x1, x2) = r2 x2
T ∗ = −ln(U1 )/R
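A sketch of the resulting simulation scheme for the stochastic Lotka–Volterra model: the next event time is exponential with the total rate R (equivalent to T* = −ln(U1)/R), and the event type is chosen with probability proportional to its rate. The function name and parameter values are illustrative assumptions:

```python
import random

random.seed(5)

def simulate_lv(r1, b1, r2, b2, x1, x2, t_max):
    """Stochastic Lotka-Volterra: wait an exponential time with the total
    rate R, then pick the event type proportionally to its rate."""
    t, path = 0.0, [(0.0, x1, x2)]
    while t < t_max:
        rates = [r1 * x1,       # prey birth
                 b1 * x1 * x2,  # prey death (predation)
                 b2 * x1 * x2,  # predator birth
                 r2 * x2]       # predator death
        R = sum(rates)
        if R == 0:              # both populations extinct
            break
        t += random.expovariate(R)
        u = random.random() * R
        if u < rates[0]:
            x1 += 1
        elif u < rates[0] + rates[1]:
            x1 -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            x2 += 1
        else:
            x2 -= 1
        path.append((t, x1, x2))
    return path

path = simulate_lv(r1=1.0, b1=0.01, r2=1.0, b2=0.01, x1=100, x2=100, t_max=5.0)
print(len(path), path[-1])
```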
This results in a Markov process with birth and death rates for the two
populations given by the terms that are multiplied by ∆t.
B1(x1, x2) = r1 x1
D1(x1, x2) = x1(s11 x1 + s12 x2)
B2(x1, x2) = r2 x2
D2(x1, x2) = x2(s21 x1 + s22 x2)
T ∗ = −ln(U1 )/R