
MT4552

Population dynamics models in mathematical biology

Lecturers: J. Kursawe, M.A.J. Chaplain

2021

These lecture notes are based on earlier versions by M.A.J. Chaplain, C. Venkataraman and T. Lorenzi.
Contents

1 Single species, continuous time models
  1.1 Steady states and local stability
  1.2 Nondimensionalisation

2 Interacting species, continuous time models
  2.1 Phase plane analysis
  2.2 Linearisation
  2.3 Lotka-Volterra, Predator-Prey equations
  2.4 Competition models
  2.5 Hopf bifurcations

3 Delay differential equations
  3.1 No oscillations without delay
  3.3 Linear stability analysis

4 Biochemical kinetics
  4.1 Law of mass action
  4.2 Brusselator
  4.3 Enzyme reaction (Michaelis-Menten)
    4.3.1 Equilibrium approximation
    4.3.2 Quasi-steady-state approximation
    4.3.3 Comparison of the two approaches
    4.3.4 Fast and slow timescales for the quasi-steady-state approximation

5 Single species, discrete time models
  5.1 Cobweb maps
  5.2 Steady states and local stability
  5.3 Bifurcations

6 Interacting species, discrete time models
  6.1 Systems of linear difference equations
  6.2 Systems of nonlinear difference equations
  6.3 Models for interacting species

7 Delay difference equations
  7.1 Linear stability analysis

A Background theory

Bibliography
Chapter 1

Single species, continuous time models

Ordinary differential equations (ODEs) describe how the rate of change of a variable depends on its current state, i.e., a (1st order) ODE takes the form

dN(t)/dt = f(N(t)),

where the left hand side is the rate of change of N at time t and the right hand side is a function of N at time t.

1.0.1 Example (Malthusian equation). The Malthusian equation is a single species model of the form

dN(t)/dt = ρ N(t),    (1.0.1)

where ρ represents the net per capita growth rate. Equation (1.0.1) is a linear 1st order ODE. Given an initial population size N0 > 0 (N(0) = N0), the solution of (1.0.1) is given by

N(t) = N0 e^{ρt}.

Equation (1.0.1) is called the Malthusian equation in continuous time, and the population whose size is given by N is said to exhibit Malthusian growth.
Figure 1.1 shows the growth of an initial population of size N0 under the Malthusian ODE (1.0.1). For ρ > 0 we observe exponential growth of the population; this may be a reasonable model for the growth of a population with unlimited resources, such as bacteria in a nutrient-rich environment with plenty of space. In scenarios where there is a finite amount of resources for the population to exploit, a model that allows unbounded exponential growth is unrealistic. In such a setting it is worthwhile modifying the model to account for "saturation" or intra-species competition. This can be achieved by adding density dependent effects, i.e., considering an ODE of the form

dN(t)/dt = f(N(t)) = N(t) F(N(t)),

where f(·) is known as the net growth rate and F(·) is known as the net per capita growth rate. When density dependent effects are included, the net per capita growth rate F is not constant, but instead depends on the size of the population N.

[Figure 1.1: three panels of N(t) against t, each starting from N0: exponential growth for ρ > 0, a constant population for ρ = 0, and exponential decay for ρ < 0.]

Figure 1.1: Evolution of the population size under the Malthusian ODE (1.0.1) for different net growth rates.
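As a quick numerical check of the exponential solution above, the following sketch integrates (1.0.1) with a forward Euler scheme and compares the result with N0 e^{ρt}. The parameter values are illustrative choices, not taken from the notes.

```python
# Forward-Euler integration of the Malthusian ODE dN/dt = rho*N, compared
# with the exact solution N(t) = N0*exp(rho*t). Parameter values are
# illustrative.
import math

def euler_malthus(N0, rho, dt, steps):
    """Integrate dN/dt = rho*N with the forward Euler scheme."""
    N = N0
    for _ in range(steps):
        N += dt * rho * N
    return N

N0, rho, T = 100.0, 0.5, 2.0
steps = 100_000
approx = euler_malthus(N0, rho, T / steps, steps)
exact = N0 * math.exp(rho * T)
print(abs(approx - exact) / exact)  # small relative error
```

For a fine enough time step the Euler approximation agrees with the closed-form exponential to several decimal places.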

1.0.2 Example (Logistic equation). Consider the logistic equation, an ODE of the form

dN(t)/dt = N(t) F(N(t)) = ρ N(t) (1 − N(t)/K),  with ρ, K > 0,    (1.0.2)

where the parameters ρ and K are the intrinsic growth rate and the carrying capacity, respectively. Here the per capita growth rate F(N) defined by the equality in (1.0.2) is a decreasing function of the population size. For population sizes larger than the carrying capacity the growth rate is negative, whilst for population sizes smaller than the carrying capacity the growth rate is positive. A population whose growth satisfies (1.0.2) is said to exhibit logistic growth.

To solve (1.0.2) we employ the technique of separation of variables as follows. Let N(t) satisfy the logistic equation and assume N(t) > 0, N(t) ≠ K for all t (this is true as long as the initial population size satisfies these assumptions); then from (1.0.2) we have
∫ dN / (N (1 − N/K)) = ∫ ρ dt.    (1.0.3)

Note the left hand side of (1.0.3) may be written as (for brevity we omit the t dependence of N in the remainder)

∫ dN / (N (1 − N/K)) = ∫ [ (1 − N/K) / (N (1 − N/K)) + (N/K) / (N (1 − N/K)) ] dN
                     = ∫ [ 1/N + 1/(K (1 − N/K)) ] dN.

Thus, (1.0.3) implies

ln(N) − ln(1 − N/K) = ρt + c,

for some c ∈ R, which gives

N / (1 − N/K) = A e^{ρt},

for some A ∈ R⁺. The general solution of (1.0.2) is therefore given by

N = A K e^{ρt} / (K + A e^{ρt}).

At t = 0,

N(0) = N0 = A K / (K + A)  ⟹  A = N0 K / (K − N0).

Thus the solution of (1.0.2) with initial population size N0 is given by

N(t) = K N0 e^{ρt} / (K − N0 + N0 e^{ρt}).    (1.0.4)

From (1.0.4) we see that as t → ∞, the population size N(t) → K, i.e., the population size saturates at the level K.
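The closed-form solution (1.0.4) can be sanity-checked numerically: it should satisfy the logistic ODE itself (here checked with a central finite difference) and saturate at K. Parameter values are illustrative.

```python
# Check that the closed-form logistic solution (1.0.4)
#   N(t) = K*N0*exp(rho*t) / (K - N0 + N0*exp(rho*t))
# satisfies dN/dt = rho*N*(1 - N/K) and tends to K for large t.
import math

def N(t, N0=10.0, K=100.0, rho=1.0):
    e = math.exp(rho * t)
    return K * N0 * e / (K - N0 + N0 * e)

def rhs(n, K=100.0, rho=1.0):
    return rho * n * (1.0 - n / K)

t, h = 1.5, 1e-6
dNdt = (N(t + h) - N(t - h)) / (2 * h)  # central finite difference
print(abs(dNdt - rhs(N(t))))            # approx 0
print(N(50.0))                          # approx K = 100
```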

[Figure 1.2: solution curves N(t) against t for three initial sizes N0: one above K, one between K/2 and K, and one below K/2 (sigmoid curve); the levels K/2 and K are marked.]

Figure 1.2: Evolution of the population size under the logistic ODE (1.0.2) for different initial population sizes N0.

Note from (1.0.2) that if 0 < N0 < K (which implies N(t) < K) then dN/dt > 0, so the population grows monotonically, asymptotically approaching the carrying capacity K. Alternatively, if N0 > K then dN/dt < 0, so the population decreases monotonically, asymptotically approaching the carrying capacity K. In the case 0 < N0 < K we identify two qualitatively different cases: if N0 < K/2 the solution curve has a sigmoid shape, i.e., slow growth initially, rapid growth in a middle period, followed by slow growth as the population size approaches the carrying capacity K; if, on the other hand, N0 > K/2, the growth simply slows down as the population size approaches the carrying capacity K. Figure 1.2 illustrates typical solution profiles for different initial population sizes N0.

1.1 Steady states and local stability
In the previously considered logistic model (1.0.2), there were two steady states, N = 0 and N = K. Our analysis of Example 1.0.2 suggests that for any nonzero (positive) initial population size the population converges asymptotically to the carrying capacity K. To deduce this we solved the ODE analytically; however, we now show that we do not need to solve the ODE to investigate the stability of its steady states.

1.1.1 Definition (Steady states for ODE models). Consider the ODE

dN(t)/dt = f(N(t)),    (1.1.1)

where N : [0, ∞) → R and f : R → R. A steady state (or equilibrium, or fixed point) for the ODE model (1.1.1) is a constant N∗ such that

f(N∗) = 0.

1.1.2 Definition (Stability of steady states for ODE models). A steady state N∗ for an ODE model of the form (1.1.1) is called

• Stable: if for all ε > 0 there exists a δ > 0 such that |N0 − N∗| < δ implies |N(t) − N∗| < ε for all t.

• Asymptotically stable: if it is stable and, in addition, |N(t) − N∗| → 0 as t → ∞.

If a steady state is not stable it is called unstable.

We will investigate the stability of steady states for ODEs via linear stability analysis.

Linearisation of ordinary differential equations

Consider the ODE

dN(t)/dt = f(N(t)).    (1.1.2)

We define η(t) := N(t) − N∗ to be the difference between the solution at time t and the steady state value N∗ whose stability we wish to investigate. As we are performing linear stability analysis we assume |η(t)| ≪ 1, i.e., we are interested in the behaviour of solutions close to the steady state value. As N(t) = N∗ + η(t), we may write (1.1.2) as

dN(t)/dt = d(N∗ + η(t))/dt = dη(t)/dt = f(N(t)) = f(N∗ + η(t)).

Taylor expanding the function f yields

dη(t)/dt = f(N∗) + f′(N∗) η(t) + O(η²),

where we have assumed sufficient differentiability of f. As N∗ is a steady state of the ODE, f(N∗) = 0; hence, neglecting the higher order terms, we have

dη(t)/dt ≈ f′(N∗) η(t)  ⟹  η(t) = k e^{f′(N∗) t},

where k ∈ R. Thus the perturbation η(t) grows exponentially if f′(N∗) > 0 and decays exponentially if f′(N∗) < 0. If f′(N∗) = 0 we have to consider the higher order terms (O(η²) and higher).
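This stability criterion (the sign of f′(N∗)) is easy to check numerically; a minimal sketch using a central finite difference for f′, applied to the logistic right hand side with illustrative values ρ = 1, K = 100:

```python
# Linear-stability test for a steady state N* of dN/dt = f(N): approximate
# f'(N*) by a central difference and read off its sign. The step size h
# is an illustrative choice.
def stability(f, N_star, h=1e-6):
    deriv = (f(N_star + h) - f(N_star - h)) / (2 * h)
    if deriv < 0:
        return "asymptotically stable"
    if deriv > 0:
        return "unstable"
    return "inconclusive (need higher-order terms)"

# Logistic f(N) = rho*N*(1 - N/K) with rho = 1, K = 100:
f = lambda N: 1.0 * N * (1.0 - N / 100.0)
print(stability(f, 0.0))    # unstable
print(stability(f, 100.0))  # asymptotically stable
```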

1.1.3 Definition (Local stability of steady states for ODEs). Consider an ODE of the form (1.1.2) with steady state N∗. We say that the steady state N∗ is

• (linearly) asymptotically stable if f′(N∗) < 0;

• (linearly) unstable if f′(N∗) > 0.

Geometric method

[Figure 1.3: sketch of dN/dt = f(N) against N, with zeros N∗,1, …, N∗,5 on the N axis and arrows indicating the direction of flow.]

Figure 1.3: Geometric approach to stability of the steady states corresponding to solutions of the equation f(N) = 0. The steady states N∗,2 and N∗,4 are stable as f′(N∗) < 0, and the steady states N∗,1, N∗,3 and N∗,5 are unstable as f′(N∗) > 0.

If f is sufficiently simple, or if we don't know the exact form of f but we do know its general shape or qualitative features, then we may deduce the stability of the steady states of the ODE system (solutions of f(N) = 0) simply by sketching f.

Figure 1.3 illustrates the geometric approach to investigating stability of steady states. The steady states correspond to the zeros of f. From the sketch we may infer the following:

1. Stability of the steady states (stable if f′(N∗) < 0 and unstable if f′(N∗) > 0).

2. For a given N, if f(N) > 0 then the population size is increasing, as dN/dt > 0; similarly, if f(N) < 0 then the population size is decreasing, as dN/dt < 0.

3. The arrows on the N axis indicate the stability of the steady states, i.e., if we perturb the population from a steady state value then the solution trajectories follow the direction of the arrows, converging to a stable steady state.

1.1.4 Example (Logistic equation revisited). Consider the logistic ODE (1.0.2) with growth rate ρ and carrying capacity K. We recall that the ODE has two steady states, N∗ = 0 and N∗ = K. To check the stability of the steady states we compute

f′(N) = ρ (1 − 2N/K).

Hence f′(0) = ρ > 0, so zero is an unstable steady state, whilst f′(K) = −ρ < 0, so the steady state N∗ = K is asymptotically stable. Alternatively, adopting the geometric approach, in Figure 1.4 we sketch a graph of f(N) for the logistic ODE. We observe that the steady state N∗ = K is asymptotically stable whilst the steady state N∗ = 0 is unstable.

[Figure 1.4: sketch of dN/dt = f(N) for the logistic ODE, a downward parabola crossing the N axis at N∗ = 0 and N∗ = K.]

Figure 1.4: Graph of f(N) for the logistic ODE (1.0.2).

1.2 Nondimensionalisation
A dimension is the unit of a physical quantity; for example, the dimension of a variable t representing time may be seconds, minutes or hours. The dimension of the carrying capacity K in the logistic ODE (1.0.2) is number of individuals (i.e., it has the same dimension as the population size).

In general, before analysing a model it is crucial to formulate it in nondimensional (or dimensionless) terms. The main advantage of such an approach is that in the subsequent analysis of the model the units used are no longer important, and adjectives such as "large", "small", "slow" or "fast" all have a definite relative meaning.

A number is dimensionless if it has no physical units attached to it, i.e., its numerical value is independent of the system of units used. For example, as the carrying capacity K and the population size N both have dimensions of number of individuals, the quotient K/N is dimensionless.

Nondimensionalisation is the process of changing variables (by scaling) in a model such that the new variables are dimensionless. Typically it leads to a simpler model with fewer parameters. For further details on nondimensionalisation and applications of the approach see Segel [1972]. To nondimensionalise a model we proceed along the following steps:

1. List all variables, parameters and their dimensions.

2. Introduce characteristic scales for each of the dimensions involved (time, concentration, etc.) and scale the dimensional variables by these characteristic scales in order to obtain nondimensional scaled variables. Note that in general there is no single unique way to do this.

3. Choose the characteristic scale for each of the variables in order to simplify the equation (typically the scales are chosen such that the coefficients of as many terms as possible are one).

4. Rewrite the model in terms of the new nondimensional variables and parameters.

The procedure is best illustrated through worked examples.
1.2.1 Example (Nondimensionalisation of the logistic model). We once again consider the logistic ODE (1.0.2).

Variable or Parameter | Meaning           | Dimension
t                     | time              | T (units of time, e.g., s, min, hr, etc.)
N                     | population size   | # (units of size, e.g., hundreds, millions, etc.)
ρ                     | growth rate       | T⁻¹
K                     | carrying capacity | #

Table 1.1: Dimensions of the parameters and variables in the logistic ODE.

The model involves four dimensional parameters or variables, as shown in Table 1.1. We now follow the previously outlined steps in order to obtain a nondimensional model. Introducing the characteristic scales [N] and [t] for the population size and time respectively, we set n = N/[N] and τ = t/[t]; thus by the chain rule and using the ODE (1.0.2) we have

([N]/[t]) dn/dτ = dN/dt = ρ N (1 − N/K) = ρ [N] n (1 − [N] n / K).

Cancelling a factor of [N] from both sides gives

dn/dτ = ρ [t] n (1 − [N] n / K).

In order to set as many coefficients as possible to one, we select [t] = 1/ρ and [N] = K, which yields the dimensionless logistic equation

dn/dτ = n(1 − n).

In contrast with the original dimensional ODE, the nondimensional form of the logistic equation contains no parameters and its analysis is therefore (a little) simpler than the dimensional case.
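The scaling n = N/K, τ = ρt can be verified numerically: the rescaled dimensional solution should coincide with the solution n(τ) = n0 e^τ / (1 − n0 + n0 e^τ) of dn/dτ = n(1 − n). Parameter values below are illustrative.

```python
# The dimensional logistic solution, rescaled by n = N/K and tau = rho*t,
# coincides with the solution of dn/dtau = n(1 - n) started from n0 = N0/K.
import math

def N_dim(t, N0, K, rho):
    e = math.exp(rho * t)
    return K * N0 * e / (K - N0 + N0 * e)

def n_nondim(tau, n0):
    e = math.exp(tau)
    return n0 * e / (1.0 - n0 + n0 * e)

N0, K, rho, t = 25.0, 200.0, 0.7, 3.0
lhs = N_dim(t, N0, K, rho) / K       # n = N/K
rhs = n_nondim(rho * t, N0 / K)      # tau = rho*t
print(abs(lhs - rhs))  # approx 0
```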

1.2.2 Example (Insect outbreak model (spruce budworm)). The spruce budworm is the most destructive and widely distributed forest defoliator in North America. Massive budworm outbreaks occur periodically, destroying hundreds of thousands of hectares of valuable fir and spruce; see http://www.na.fs.fed.us/spfo/pubs/fidls/westbw/fidl-wbw.htm for more details on the biology.

To develop and investigate management strategies for dealing with the outbreaks, a series of mathematical models have been produced and studied, starting from the work of Ludwig et al. [1978].

Let N(t) denote the size of the spruce budworm population. The model considered by Ludwig et al. [1978] for the population dynamics is given by the ODE

dN(t)/dt = ρB N(t) (1 − N(t)/KB) − P(N(t)),    (1.2.1)

where the first term models logistic growth, the second term models predation, and

P(N) = B N² / (A² + N²).

A goal of their model was to provide a mathematical (qualitative) explanation for budworm population outbreaks, where numbers rapidly rise from low to high levels, causing sudden defoliation of the forest, followed by a sudden collapse in the budworm population back to low levels.

The first term on the right hand side of (1.2.1) models logistic growth of the population of budworms and the second term models the effects of predation on the population. The predation effect is assumed to saturate for large population sizes, as there is only a finite number of predators which may feed on the budworms; there is also assumed to be a decreasing effectiveness of predators as the population size of budworms gets small. A sketch of the predation term P(N) is shown in Figure 1.5.

The interpretation of the parameter values is as follows:

• ρB is the birth rate.

• KB is the carrying capacity, which is governed by the density of foliage available in the trees.

• A is a parameter which governs the level at which the predation response saturates.

• B is the predation rate.

We next nondimensionalise the model. The dimensional model variables and parameters are given in Table 1.2.

[Figure 1.5: sketch of P(N) against N, increasing from zero and saturating at B, with P(A) = B/2 marked.]

Figure 1.5: Graph of the predation term in the budworm model (1.2.1).

Variable or Parameter | Dimension
N                     | #
t                     | T
ρB                    | T⁻¹
KB                    | #
A                     | #
B                     | # T⁻¹

Table 1.2: Dimensions of the parameters and variables in the budworm model (1.2.1).

Setting n = N/[N] and τ = t/[t], where the characteristic scales [N] and [t] are to be determined, the chain rule and the equation (1.2.1) yield

([N]/[t]) dn/dτ = dN/dt = ρB N (1 − N/KB) − B N²/(A² + N²) = ρB [N] n (1 − [N] n/KB) − B [N]² n²/(A² + [N]² n²),

thus

dn/dτ = ρB [t] n (1 − [N] n/KB) − B [t] [N] n²/(A² + [N]² n²).

We set [N] = A (recall A and N have units #, hence this choice is admissible as N/[N] is a nondimensional variable) and [t] = A/B (recall B has units # T⁻¹, hence A/B has units T and thus t/[t] is a nondimensional variable), which yields

dn/dτ = (ρB A/B) n (1 − A n/KB) − n²/(1 + n²).

Defining the nondimensional parameters ρ := ρB A/B > 0 and q := KB/A > 0, we arrive at the nondimensional budworm model

dn/dτ = ρ n (1 − n/q) − n²/(1 + n²).    (1.2.2)

Note we have reduced the number of parameters in the model from four in the original equation (1.2.1) to two in the dimensionless equation (1.2.2). The steady states of the nondimensional budworm model (1.2.2) correspond to

f(n∗) = 0  ⟺  ρ n∗ (1 − n∗/q) − n∗²/(1 + n∗²) = 0    (1.2.3)
          ⟺  n∗ = 0  or  ρ (1 − n∗/q) − n∗/(1 + n∗²) = 0.    (1.2.4)
We define h(n) := ρ(1 − n/q) and g(n) := n/(1 + n²). To illustrate the nonzero steady states we plot h and g versus n in Figure 1.6 for increasing values of ρ.

[Figure 1.6: five panels, reading from top to bottom, showing y = h(n) (a straight line with intercepts ρ and q) and y = g(n) for increasing ρ; the intersections n∗,1, n∗,2, n∗,3 mark the nontrivial steady states.]

Figure 1.6: Plots of the functions h and g used to determine the nontrivial steady states in the budworm model (1.2.2), for increasing values of the nondimensional growth rate ρ, reading from top to bottom.

As ρ increases, the number of (nontrivial) steady states (points where h(n) = g(n)) increases from one to two to three; note that if we increase ρ further we have two steady states once again, followed by only a single steady state.
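The changing number of nonzero steady states can be checked numerically by counting sign changes of h(n) − g(n) on a fine grid. Here q = 10 is an illustrative fixed value, and the three ρ values are illustrative choices below, inside, and above the three-steady-state window for this q.

```python
# Count the nonzero steady states of the budworm model (1.2.2) by locating
# sign changes of h(n) - g(n) = rho*(1 - n/q) - n/(1 + n**2) on a grid.
def count_roots(rho, q, m=200_000):
    h_minus_g = lambda n: rho * (1.0 - n / q) - n / (1.0 + n * n)
    roots = 0
    prev = h_minus_g(1e-9)          # start just above n = 0
    for i in range(1, m + 1):
        n = 1e-9 + (q - 1e-9) * i / m
        cur = h_minus_g(n)
        if prev * cur < 0:
            roots += 1
        prev = cur
    return roots

q = 10.0
for rho in (0.2, 0.45, 0.6):
    print(rho, count_roots(rho, q))  # 1, then 3, then 1 nonzero steady states
```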
In order to investigate the stability of the steady states we compute the derivative of the function f (c.f. (1.2.2)):

f′(n) = ρ − 2ρn/q − 2n/(1 + n²)².    (1.2.5)

Hence, as f′(0) = ρ > 0, the steady state n∗ = 0 is unstable.

To investigate the stability of the nonzero steady states, we recall that from (1.2.2), with h and g as previously defined, we have

f(n) = n (h(n) − g(n)),

hence dn/dτ = f(n) < 0 if g(n) > h(n) and dn/dτ = f(n) > 0 if g(n) < h(n). Adopting a geometric approach, we sketch f(n) for the five qualitatively different cases obtained on varying ρ shown in Figure 1.6. The sketch of f is shown in Figure 1.7, and we observe the changes in stability of the steady states as the parameter ρ is increased.
We conclude this example with a bifurcation analysis, in which we seek to understand how the qualitative properties of solutions to model (1.2.2) depend on the parameters ρ and q. To do this we plot a bifurcation diagram in parameter space, i.e., the (ρ, q) plane. Our previous analysis indicates there is a transition from one nonzero steady state to two and then three as the parameter ρ is increased holding q fixed. In fact there is a curve in the parameter space along which the model transitions from one (nonzero) steady state to three (nonzero) steady states, and along this curve the model has two (nonzero) steady states. To find this curve we note that the two steady state case corresponds to the case where the previously defined functions h(n) and g(n) take equal values and are tangent at some n > 0. Hence we may find the curve by finding the values of ρ, q such that for some n > 0, h(n) = g(n) and h′(n) = g′(n). Equality of the functions yields

ρ (1 − n/q) = n/(1 + n²),    (1.2.6)

and equality of the derivatives implies

−ρ/q = (1 − n²)/(1 + n²)².    (1.2.7)

From (1.2.7) we have that ρ/q = (n² − 1)/(1 + n²)²; applying this in (1.2.6) gives

ρ = n/(1 + n²) + (ρ/q) n
  = n/(1 + n²) + n (n² − 1)/(1 + n²)²
  = 2n³/(1 + n²)²,

[Figure 1.7: sketches of dn/dτ = f(n) against n for the five qualitatively different cases of Figure 1.6, with the steady states n∗,0, n∗,1, n∗,2, n∗,3 marked on the n axis.]

Figure 1.7: Geometric approach to stability of the steady states of the budworm model (1.2.2) for the qualitatively different cases observed for increasing ρ in Figure 1.6. The red steady states are stable as f′(n∗) < 0 and the green steady states are unstable as f′(n∗) > 0. Note the grey steady states correspond to f′ = 0, and hence we are unable to ascertain their stability without investigating higher order terms.

[Figure 1.8: the (ρ, q) parameter plane, with the cusp-shaped region of three nonzero steady states shaded red and the points A, B, C, D marked at increasing ρ.]

Figure 1.8: Two parameter bifurcation diagram for the budworm model (1.2.2). In the white region there is only one nonzero steady state, which is stable; in the red region there are three nonzero steady states, two of which are stable.

and hence, as q = ρ(1 + n²)²/(n² − 1), we have q = 2n³/(n² − 1). We plot the two-parameter bifurcation diagram for the budworm model (1.2.2) in Figure 1.8. The red region, in which there are three nontrivial steady states, two of which are stable, is separated from the white region, in which there is a single nonzero steady state that is stable, by the blue curve corresponding to two nonzero steady states.
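The tangency conditions just derived give the fold curve parametrically: ρ(n) = 2n³/(1 + n²)² and q(n) = 2n³/(n² − 1) for n > 1. A quick numerical sanity check at an arbitrarily chosen n confirms that h = g and h′ = g′ hold simultaneously on this curve:

```python
# A point on the fold curve of the budworm model (1.2.2), parametrised by
# the tangency location n > 1; on this curve h(n) = g(n) and h'(n) = g'(n).
def fold_point(n):
    rho = 2 * n**3 / (1 + n**2) ** 2
    q = 2 * n**3 / (n**2 - 1)
    return rho, q

n = 2.0  # illustrative tangency location
rho, q = fold_point(n)
h = rho * (1 - n / q)
g = n / (1 + n**2)
h_prime = -rho / q
g_prime = (1 - n**2) / (1 + n**2) ** 2
print(abs(h - g), abs(h_prime - g_prime))  # both approx 0
```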
The bifurcation diagram we plotted in Figure 1.8 allows us to understand the population dynamics in a budworm outbreak, using the curve plotted in Figure 1.9. A mathematical explanation for the outbreak based on the model under consideration is as follows.

1. Initially, following a previous outbreak or deforestation event, the parameters ρ and q are small, as there is insufficient foliage to sustain a large budworm population. Thus the model suggests there is a single nonzero steady state, corresponding to point A in the bifurcation diagram, and that the budworm population converges to this low level.

2. Gradually the forest recovers, and this leads to the production of more foliage, which results in an increase in the parameter ρ (or equivalently q) in the model. (Note this recovery of the forest is assumed to happen on a slower timescale than the dynamics of the budworm population, hence the parameters ρ and q are constants on the timescale of the population dynamics.)

3. As ρ (or q) increases, the small nonzero steady state eventually loses stability at the point C in Figure 1.8, and the budworm population rapidly rises to the large nonzero steady state
which is the only stable steady state of the model for these parameters (we are to the right of the red region in Figure 1.8).

[Figure 1.9: hysteresis curve of the steady-state density n against ρ, with the jump up at C, the jump down at B, and the upper branch passing through D.]

Figure 1.9: Hysteresis plot illustrating the onset of an outbreak in budworm density when the small nonzero steady state loses stability as ρ increases at the point C, followed by a sudden decline as the large nonzero steady state loses stability as ρ decreases at the point B.

4. The increased budworm numbers lead to defoliation of the forest, reducing the parameter ρ (or equivalently q). However, the large nonzero steady state remains stable until we reach the point B in Figure 1.8, at which the budworm population suddenly falls to the small nonzero steady state, which is now the only stable steady state of the model.

This phenomenon is an example of hysteresis: the dependence of the system on its parameters is not reversible. In this example the reason we observe hysteresis is that in the region where both the small and the large nonzero steady states are stable (the red region in Figure 1.8), the behaviour of the system is sensitive to whether the population size is near the lower or the higher equilibrium value.
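The hysteresis loop can be reproduced numerically with a quasi-static parameter sweep: integrate (1.2.2) to equilibrium at each ρ, ramping ρ up and then back down. A sketch with forward Euler and illustrative values (q = 10, with ρ ramped through the bistable window for this q):

```python
# Quasi-static sweep of rho for the budworm model (1.2.2) at fixed q.
# At rho = 0.50 the equilibrium reached depends on the sweep direction:
# low branch on the way up, high branch on the way down (hysteresis).
def equilibrium(n0, rho, q, dt=0.01, steps=20_000):
    """Integrate dn/dtau = rho*n*(1 - n/q) - n**2/(1 + n**2) to equilibrium."""
    n = n0
    for _ in range(steps):
        n += dt * (rho * n * (1 - n / q) - n * n / (1 + n * n))
    return n

q, n = 10.0, 0.1
upward = {}
for rho in [0.30, 0.40, 0.50, 0.60]:
    n = equilibrium(n, rho, q)       # continue from the previous state
    upward[rho] = n
downward = {}
for rho in [0.60, 0.50, 0.40, 0.30]:
    n = equilibrium(n, rho, q)
    downward[rho] = n
print(upward[0.50], downward[0.50])  # low branch vs high branch
```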

Chapter 2

Interacting species, continuous time models

We consider systems of two nonlinear differential equations of the form

du/dt = f(u, v)
dv/dt = g(u, v)    (2.0.1)

which is to be solved subject to the initial conditions u(0) = u0, v(0) = v0.

2.0.1 Definition (Steady state). A steady state of (2.0.1) is a pair (u∗, v∗)ᵀ ∈ R² such that

f(u∗, v∗) = 0
g(u∗, v∗) = 0.

2.1 Phase plane analysis

A lot of information regarding solution behaviour can be obtained by sketching the solution trajectories in the (u, v)-plane (also known as the phase plane).

2.1.1 Definition (Nullclines). The u-nullclines are the set of points in the phase plane on which du/dt = 0. Geometrically these are the points where the vectors corresponding to the solution trajectories are vertical, going either up (dv/dt > 0) or down (dv/dt < 0). Algebraically we find the u-nullclines by solving f(u, v) = 0.

Similarly, the v-nullclines are the set of points in the phase plane on which dv/dt = 0. Geometrically these are the points where the vectors corresponding to the solution trajectories are horizontal, going either right (du/dt > 0) or left (du/dt < 0). Algebraically we find the v-nullclines by solving g(u, v) = 0.
2.1.2 Example (Linear ODE system). Consider the system of ODEs

du/dt = u − v = f(u, v)
dv/dt = u = g(u, v)

[Figure 2.1: the (u, v) phase plane with the v-nullcline (u = 0) and the u-nullcline (v = u) sketched.]

Figure 2.1: Nullclines for Example 2.1.2.

The nullclines are given by

u-nullcline = {(u, v) ∈ R² | u − v = f(u, v) = 0} = {(u, v) ∈ R² | v = u}
v-nullcline = {(u, v) ∈ R² | u = g(u, v) = 0} = {(u, v) ∈ R² | u = 0}

The sketch of the nullclines is shown in Figure 2.1. Points where the u-nullcline and v-nullcline intersect satisfy du/dt = dv/dt = 0 and are thus steady states of the ODE system. We can also include information on solution trajectories away from the nullclines, as

du/dt = u − v > 0 if u > v, and du/dt < 0 if u < v;
dv/dt = u > 0 if u > 0, and dv/dt < 0 if u < 0.
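This directional information can be tabulated at sample points; a small sketch (the helper function and the sample points are ours, chosen for illustration):

```python
# Direction of the flow for Example 2.1.2 (du/dt = u - v, dv/dt = u) at
# sample points in the phase plane: the sign of u - v gives the horizontal
# direction, the sign of u the vertical one.
def direction(u, v):
    du, dv = u - v, u
    horiz = "right" if du > 0 else "left" if du < 0 else "none"
    vert = "up" if dv > 0 else "down" if dv < 0 else "none"
    return horiz, vert

print(direction(2.0, 1.0))   # u > v, u > 0: ('right', 'up')
print(direction(-1.0, 1.0))  # u < v, u < 0: ('left', 'down')
print(direction(1.0, 1.0))   # on the u-nullcline: ('none', 'up')
```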
2.1.3 Example (Nonlinear system). Consider

du/dt = u(1 − u) = f(u, v)
dv/dt = −v = g(u, v)

The nullclines are given by

u-nullcline = {(u, v) ∈ R² | u = 0 or u = 1}
v-nullcline = {(u, v) ∈ R² | v = 0}

Steady states: (u∗, v∗)ᵀ = (0, 0)ᵀ and (1, 0)ᵀ.

du/dt = u(1 − u) > 0 if 0 < u < 1, and du/dt < 0 if u < 0 or u > 1;
dv/dt = −v > 0 if v < 0, and dv/dt < 0 if v > 0.

2.2 Linearisation

Let (u∗, v∗)ᵀ be a steady state of (2.0.1). As previously, we are interested in solution behaviour near the steady state. Let

u(t) = u∗ + η(t) and v(t) = v∗ + ξ(t), where |η|, |ξ| ≪ 1.

Substituting into (2.0.1) gives

du/dt = d(u∗ + η)/dt = dη/dt = f(u∗ + η, v∗ + ξ)
dv/dt = d(v∗ + ξ)/dt = dξ/dt = g(u∗ + η, v∗ + ξ).

Assuming f, g are sufficiently smooth, Taylor expansion gives

f(u∗ + η, v∗ + ξ) = f(u∗, v∗) + ∂f/∂u(u∗, v∗) η + ∂f/∂v(u∗, v∗) ξ + O(η², ξ²)
g(u∗ + η, v∗ + ξ) = g(u∗, v∗) + ∂g/∂u(u∗, v∗) η + ∂g/∂v(u∗, v∗) ξ + O(η², ξ²).

Thus

dη/dt = f(u∗, v∗) + ∂f/∂u(u∗, v∗) η + ∂f/∂v(u∗, v∗) ξ + O(η², ξ²)
dξ/dt = g(u∗, v∗) + ∂g/∂u(u∗, v∗) η + ∂g/∂v(u∗, v∗) ξ + O(η², ξ²).

Neglecting the higher order terms (which are small) and noting that (u∗, v∗) is a steady state, and hence f(u∗, v∗) = g(u∗, v∗) = 0, gives the linearised ODE system for the approximate perturbations η̃ ≈ η and ξ̃ ≈ ξ:

dη̃/dt = ∂f/∂u(u∗, v∗) η̃ + ∂f/∂v(u∗, v∗) ξ̃
dξ̃/dt = ∂g/∂u(u∗, v∗) η̃ + ∂g/∂v(u∗, v∗) ξ̃.

We may write the above system in matrix-vector form as

dz/dt = J z,    (2.2.1)

where z = (η̃, ξ̃)ᵀ and

J = [ ∂f/∂u(u∗, v∗)  ∂f/∂v(u∗, v∗) ]
    [ ∂g/∂u(u∗, v∗)  ∂g/∂v(u∗, v∗) ].

Our ansatz for solutions to (2.2.1) is

z = c e^{λt},    (2.2.2)

where c ∈ R² and λ ∈ C are both to be found. Substituting the ansatz (2.2.2) into (2.2.1) gives

λ c e^{λt} = J c e^{λt}  ⟹  J c = λ c,    (2.2.3)

i.e., the λ's correspond to the eigenvalues of the matrix J and the c's are the corresponding eigenvectors. Let

J = [ a  b ]
    [ c  d ],

then the eigenvalues are found by solving

det(J − λI) = det [ a − λ,  b;  c,  d − λ ] = λ² − (a + d)λ + ad − bc = 0.

Thus the eigenvalues solve the polynomial

λ² − Tr(J) λ + det(J) = 0  ⟹  λ1,2 = ( Tr(J) ± √(Tr(J)² − 4 det(J)) ) / 2.

If λ1 ≠ λ2 and λ1, λ2 ≠ 0, any solution of (2.2.1) is of the form

z(t) = A1 c1 e^{λ1 t} + A2 c2 e^{λ2 t},    (2.2.4)

where the constants A1 and A2 are determined by the initial conditions.

If λ1 = λ2 ≠ 0, the general solution is given by

z(t) = A1 c1 e^{λ1 t} + A2 (d + c1 t) e^{λ1 t},    (2.2.5)

for some d ∈ R² that must be determined.

We can categorise the solutions to (2.2.1) into different types depending on the eigenvalues of the matrix J.

1. Tr(J) > 0, det(J) > 0 with (Tr(J))² > 4 det(J).

   In this case, 0 < λ₁ < λ₂. From (2.2.4) we see that z(t) grows exponentially in time. The steady state in this case is an unstable node.

2. Tr(J) < 0, det(J) > 0 with (Tr(J))² > 4 det(J).

   In this case, λ₂ < λ₁ < 0. From (2.2.4) we see that z(t) decays exponentially in time. The steady state in this case is a stable node.

3. Tr(J) > 0, det(J) < 0 gives λ₂ < 0 < λ₁, and Tr(J) < 0, det(J) < 0 gives λ₁ < 0 < λ₂.

   In this case, as z(t) = A₁c₁e^{λ₁t} + A₂c₂e^{λ₂t}, the solution grows exponentially in the direction cᵢ for which λᵢ > 0 and decays exponentially in the other direction. Hence we have a saddle point. Such a point is said to be unstable.

4. (Tr(J))² < 4 det(J).

   In this case λ₁, λ₂ are complex. We may write the solution as

   z(t) = A₁c₁e^{αt}e^{iβt} + A₂c₂e^{αt}e^{−iβt},   (2.2.6)

   where α = Tr(J)/2 and β = √(4 det(J) − (Tr(J))²)/2. Note (2.2.6) may be written as

   z(t) = e^{αt}(a cos(βt) + b sin(βt)),

   where a, b ∈ ℝ². Thus solutions of the form (2.2.6) give oscillatory behaviour.

   If Tr(J) > 0 then the e^{αt} term in (2.2.6) leads to exponential growth and the steady state is an unstable spiral. If Tr(J) < 0 then the e^{αt} term in (2.2.6) leads to exponential decay and the steady state is a stable spiral.

5. Tr(J) = 0, det(J) > 0. In this case λ₁ and λ₂ are purely imaginary.

   The solution satisfies (2.2.6) with α = 0 and hence we expect the behaviour to be purely oscillatory. This represents periodic solutions and the steady state is said to be a centre.

6. (Tr(J))² = 4 det(J) ≠ 0. In this case, λ₁ = λ₂ = Tr(J)/2.

   The solution now satisfies (2.2.5). We deduce that the steady state is a stable node if Tr(J) < 0 and an unstable node if Tr(J) > 0.

7. det(J) = 0. In this case, at least one eigenvalue is zero, i.e. λ₁ = 0 or λ₂ = 0.

   Here, linear stability analysis is inconclusive and different methods will need to be used to determine the stability of the steady state. These are special cases that will not be considered further within this lecture course.
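The case analysis above can be condensed into a short routine. The following is an illustrative sketch (not part of the original notes; the function name and list-of-lists input format are our own choices): it takes a 2×2 Jacobian and returns the type of the steady state from Tr(J) and det(J).

```python
def classify_steady_state(J):
    """Classify the steady state of dz/dt = J z from Tr(J) and det(J)."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4.0 * det          # (Tr J)^2 - 4 det J
    if det == 0:
        return "inconclusive"           # case 7: a zero eigenvalue
    if det < 0:
        return "saddle"                 # case 3: real eigenvalues of opposite sign
    if tr == 0:
        return "centre"                 # case 5: purely imaginary eigenvalues
    kind = "unstable" if tr > 0 else "stable"
    if disc < 0:
        return kind + " spiral"         # case 4 with Tr(J) nonzero
    return kind + " node"               # cases 1, 2 and 6
```

For instance, `classify_steady_state([[1, -1], [1, 0]])` reproduces the "unstable spiral" conclusion of Example 2.2.1 below.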

2.2.1 Example.

du/dt = u − v
dv/dt = u.

Steady state: (u∗, v∗)ᵀ = (0, 0)ᵀ.

J(0, 0) = ( 1   −1 ; 1   0 ).

The eigenvalues solve

det(J(0, 0) − λI) = (1 − λ)(−λ) + 1 = λ² − λ + 1 = 0.

Hence λ₁,₂ = (1 ± √(1 − 4))/2 = (1 ± √3 i)/2.

As (Tr(J))² = 1 < 4 = 4 det(J), the eigenvalues are complex (as we see above), and as Tr(J) > 0 we have that Re(λ) > 0 (once again as we see above), so the steady state is an unstable spiral.

2.3 Lotka-Volterra, Predator-Prey equations


Consider the interaction of a predator v with a prey u. The Lotka-Volterra interaction incorporates the following main components:

• Prey limited only by predation (in the absence of v, u grows exponentially).

• The functional response of the predation is linear (predation term is linear in u).

• There is no interference between predators (predation term is linear in v).

• In the absence of prey, the predators die off exponentially.

Schematic of the model


rate of change of u = net rate of growth of u without predation − rate of loss due to predation,
rate of change of v = net rate of growth due to predation − net rate of loss of v without prey.

Based on the modelling ingredients described above, we obtain the following system of equations for the evolution of u and v:

du/dt = au − buv   (exponential growth − predation)
dv/dt = cuv − dv,   (predation − exponential decay)   (2.3.1)

where a, b, c, d > 0.

Steady states (u∗, v∗)ᵀ satisfy

u∗(a − bv∗) = 0
v∗(cu∗ − d) = 0.

Hence (u∗, v∗)ᵀ = (0, 0)ᵀ or (u∗, v∗)ᵀ = (d/c, a/b)ᵀ.


u-nullclines: u = 0, v = a/b.
v-nullclines: v = 0, u = d/c.
J(u, v) = ( a − bv   −bu ; cv   cu − d ).

Thus for the steady state (0, 0)ᵀ,

J(0, 0) = ( a   0 ; 0   −d ).

Hence λ₁ = a > 0 and λ₂ = −d < 0, so the steady state is a saddle point.


For the steady state (d/c, a/b)ᵀ,

J∗ := J(d/c, a/b) = ( a − b(a/b)   −bd/c ; c(a/b)   c(d/c) − d ) = ( 0   −bd/c ; ca/b   0 ).

Hence the eigenvalues solve

det(J∗ − λI) = λ² + (bd/c)(ca/b) = λ² + ad = 0.

Thus λ₁,₂ = ±i√(ad), and the steady state (d/c, a/b)ᵀ is a centre. (Note we have Tr(J∗) = 0 and det(J∗) = ad > 0, from which we could have concluded the steady state is a centre.)
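The closed orbits around this centre can be checked numerically. The sketch below is illustrative only (a hand-rolled RK4 integrator, arbitrary parameters a = b = c = d = 1): one can verify by differentiation that H(u, v) = cu − d ln u + bv − a ln v is constant along solutions of (2.3.1), so a good integrator should keep H nearly constant along the orbit.

```python
import math

def rk4_step(f, u, v, dt):
    """One classical Runge-Kutta step for a planar system f(u, v) -> (du, dv)."""
    k1 = f(u, v)
    k2 = f(u + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
    k3 = f(u + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
    k4 = f(u + dt * k3[0], v + dt * k3[1])
    return (u + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            v + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def first_integral(u, v, a, b, c, d):
    """H(u, v) = c u - d log u + b v - a log v is constant along orbits of (2.3.1)."""
    return c * u - d * math.log(u) + b * v - a * math.log(v)

a = b = c = d = 1.0                               # illustrative parameter choice
f = lambda u, v: (a * u - b * u * v, c * u * v - d * v)

u, v = 1.5, 1.0                                   # start away from the steady state (1, 1)
h0 = first_integral(u, v, a, b, c, d)
for _ in range(10000):                            # integrate up to t = 10
    u, v = rk4_step(f, u, v, 0.001)
drift = abs(first_integral(u, v, a, b, c, d) - h0)
```

The drift in H stays tiny, consistent with the trajectory being a closed orbit rather than a spiral.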
A more realistic assumption than exponential growth of the prey in the absence of predators
may be to assume their growth in the absence of predators is logistic.
Thus (2.3.1) becomes

du/dt = au(1 − u/K) − buv
dv/dt = cuv − dv,   (2.3.2)
where a, b, c, d and K are positive constants.
Steady states (u∗, v∗)ᵀ satisfy

u∗(a(1 − u∗/K) − bv∗) = 0
v∗(cu∗ − d) = 0.

Hence if u∗ = 0 then v∗ = 0, and (u∗, v∗)ᵀ = (0, 0)ᵀ is a trivial steady state. If u∗ ≠ 0,

a(1 − u∗/K) − bv∗ = 0
v∗(cu∗ − d) = 0,

Figure 2.2: Null clines for system (2.3.2). The v-axis intercepts shown are a/b and (a/b)(1 − d/(cK)); the u-axis points shown are d/c and K.

and hence (u∗, v∗)ᵀ = (K, 0)ᵀ is a steady state, and (u∗, v∗)ᵀ = (d/c, (a/b)(1 − d/(cK)))ᵀ is also a steady state (biologically relevant if d/(cK) < 1, i.e., if K > d/c). For the remainder let us assume the parameters are such that K > d/c and hence we have three steady states.
We now conduct a phase plane analysis of system (2.3.2). The null clines correspond to
u-nullclines: {u(a − au/K − bv) = 0} ⟹ u = 0 or v = (a/b)(1 − u/K).
v-nullclines: {v(cu − d) = 0} ⟹ v = 0 or u = d/c.
A sketch of the null clines is shown in Figure 2.2.
The Jacobian is given by

J(u, v) = ( a − 2au/K − bv   −bu ; cv   cu − d ).

At (u∗, v∗) = (0, 0),

J(0, 0) = ( a   0 ; 0   −d ),

thus λ₁ = a and λ₂ = −d, and hence the steady state is a saddle point. At (u∗, v∗) = (K, 0),

J(K, 0) = ( −a   −bK ; 0   cK − d ),

thus λ₁ = −a and λ₂ = cK − d > 0 (by assumption), and hence the steady state is a saddle point. At (u∗, v∗) = (d/c, (a/b)(1 − d/(cK))),

J∗ := J(d/c, (a/b)(1 − d/(cK))) = ( −ad/(cK)   −bd/c ; (ca/b)(1 − d/(cK))   0 ).

We have that Tr(J∗) = −ad/(cK) < 0 and

det(J∗) = (bd/c) · (ca/b)(1 − d/(cK)) = ad(1 − d/(cK)) > 0,

as d/(cK) < 1 by assumption (note all parameters are strictly positive as well). Thus the steady state is stable. It is either a stable spiral or a stable node depending on the choice of the parameters. (Note that unlike the previous case the nontrivial steady state is not a centre, as it is asymptotically stable.)
Note that the addition of self regulation in the prey equation has led to a dramatic change in the stability behaviour of the nontrivial steady state. This is typical of many nonlinear systems of ODEs, in that minor changes to the equations can lead to major changes in the asymptotic behaviour; such a feature is sometimes used as a criticism of the plausibility of the Lotka-Volterra model.
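This stability prediction is easy to check numerically. A minimal sketch (illustrative only; the parameter values a = b = c = d = 1 and K = 2 are our own choice, picked so that K > d/c, and the integrator is plain forward Euler rather than any particular library):

```python
def simulate_lv_logistic(u0, v0, a, b, c, d, K, dt=0.01, steps=5000):
    """Forward-Euler integration of system (2.3.2)."""
    u, v = u0, v0
    for _ in range(steps):
        du = a * u * (1.0 - u / K) - b * u * v
        dv = c * u * v - d * v
        u, v = u + dt * du, v + dt * dv
    return u, v

# With a = b = c = d = 1 and K = 2, the coexistence steady state is
# (d/c, (a/b)(1 - d/(cK))) = (1, 0.5), and nearby trajectories should
# spiral into it rather than orbit it.
u_end, v_end = simulate_lv_logistic(1.2, 0.6, 1.0, 1.0, 1.0, 1.0, 2.0)
```

By t = 50 the trajectory has essentially reached (1, 0.5), in contrast to the neutral cycling of the original Lotka-Volterra model.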

2.4 Competition models


The principle of competitive exclusion says that if two species occupy the same ecological niche
then one of them will be driven to extinction.
We have previously encountered examples of intraspecies competition. For example given
two species u1 , u2 we could consider dynamics corresponding to two (decoupled) scalar logistic
equations. For i = 1, 2,

duᵢ/dt = ρᵢuᵢ(1 − uᵢ/Kᵢ),

where ρᵢ, Kᵢ > 0 for i = 1, 2. The term uᵢ/Kᵢ represents the effects of intraspecies competition.
To obtain a simple model for interspecies competition, we could introduce competition
between species of exactly the same functional form, i.e., we could consider a system given
by

du₁/dt = ρ₁u₁(1 − u₁/K₁ − αu₂/K₁)
du₂/dt = ρ₂u₂(1 − u₂/K₂ − βu₁/K₂),   (2.4.1)

where α, β > 0 are called the competition coefficients of u₂ and u₁ respectively. For species that inhabit the same ecological niche, generally α = β = 1. We nondimensionalise (2.4.1) using
the following scalings u = u₁/K₁, v = u₂/K₂ and τ = ρ₁t, to obtain

du₁/dt = K₁ρ₁ du/dτ = ρ₁K₁u(1 − (K₁u)/K₁ − α(K₂v)/K₁).

Thus

du/dτ = u(1 − u − α(K₂/K₁)v);

similarly we obtain

du₂/dt = K₂ρ₁ dv/dτ = ρ₂K₂v(1 − (K₂v)/K₂ − β(K₁u)/K₂),

and thus

dv/dτ = (ρ₂/ρ₁)v(1 − v − β(K₁/K₂)u).

Defining the dimensionless parameters a = αK₂/K₁ > 0, b = βK₁/K₂ > 0 and c = ρ₂/ρ₁ > 0, we obtain the nondimensional system

du/dτ = u(1 − u − av)
dv/dτ = cv(1 − v − bu).   (2.4.2)

The steady states satisfy

u∗(1 − u∗ − av∗) = 0
cv∗(1 − v∗ − bu∗) = 0,

hence the (semi-)trivial steady states correspond to

(u∗, v∗) = (0, 0), (0, 1) and (1, 0).

Furthermore, we have a nontrivial (coexisting) steady state if u∗, v∗ ≠ 0 and (1 − u∗ − av∗) = (1 − v∗ − bu∗) = 0. Thus, for the nontrivial steady state, we have

u∗ = 1 − av∗, and thus 1 − v∗ − bu∗ = 1 − v∗ − b(1 − av∗) = (1 − b) − v∗(1 − ab) = 0 ⟹ v∗ = (1 − b)/(1 − ab).

Hence, for the nontrivial steady state,

u∗ = 1 − av∗ = 1 − a(1 − b)/(1 − ab) = (1 − ab − a + ab)/(1 − ab) = (1 − a)/(1 − ab).

Thus the nontrivial steady state is given by (u∗, v∗) = ((1 − a)/(1 − ab), (1 − b)/(1 − ab)). For biological relevance we require that this state is positive, which holds if either

a < 1 and b < 1,

or
a > 1 and b > 1.
The null clines of system (2.4.2) are given by
u-nullclines: {u = 0} and {v = (1 − u)/a}.
v-nullclines: {v = 0} and {v = 1 − bu}.

Exercise: Sketch the null clines in the phase plane, together with solution trajectories on the null clines, for the cases (a < 1, b < 1), (a > 1, b < 1), (a < 1, b > 1) and (a > 1, b > 1).

To determine the stability of the steady states we compute the Jacobian matrix of the system (2.4.2),

J(u, v) = ( 1 − 2u − av   −au ; −cbv   c − 2cv − cbu ).
Hence

J(0, 0) = ( 1   0 ; 0   c ).

Thus λ₁ = 1 and λ₂ = c > 0, and the steady state is an unstable node.

J(0, 1) = ( 1 − a   0 ; −cb   −c ).

Thus λ₁ = 1 − a and λ₂ = −c < 0, and so the steady state (0, 1) is a stable node if a > 1 and is a saddle if a < 1.

J(1, 0) = ( −1   −a ; 0   c − cb ).

Thus λ₁ = −1 and λ₂ = c(1 − b), and so the steady state (1, 0) is a stable node if b > 1 and is a saddle if b < 1. Finally,
J∗ := J((1 − a)/(1 − ab), (1 − b)/(1 − ab))
   = ( 1 − 2u∗ − av∗   −au∗ ; −cbv∗   c − 2cv∗ − cbu∗ )
   = ( (a − 1)/(1 − ab)   a(a − 1)/(1 − ab) ; cb(b − 1)/(1 − ab)   c(b − 1)/(1 − ab) ),

where we have used 1 − u∗ − av∗ = 0 and 1 − v∗ − bu∗ = 0 to simplify the diagonal entries to −u∗ and −cv∗ respectively.

Hence Tr(J∗) = (a − 1 + c(b − 1))/(1 − ab) and

det(J∗) = (c(a − 1)(b − 1) − abc(a − 1)(b − 1))/(1 − ab)² = c(1 − ab)(a − 1)(b − 1)/(1 − ab)² = c(a − 1)(b − 1)/(1 − ab).
For stability we consider the two biologically relevant cases separately:
1. a < 1, b < 1: Tr(J∗) < 0 and det(J∗) > 0, so the steady state is stable. (Details, e.g., stable spiral or stable node, depend on the parameters.)

2. a > 1, b > 1: Tr(J ⇤ ) < 0 and det(J ⇤ ) < 0 so the steady state is a saddle (eigenvalues
must have different signs).
Note the above results imply that if a > 1, b < 1 the steady state (u⇤ , v⇤ ) = (0, 1) is the
only stable node and the species v always “wins”. Alternatively if a < 1, b > 1 the steady state
(u⇤ , v⇤ ) = (1, 0) is the only stable node and the species u always “wins”.

Recall our scaling was such that a = αK₂/K₁ and b = βK₁/K₂; in particular, which species "wins" in the case that one of a or b is less than one and the other is greater than one depends only on the interspecies competition rates α, β and the carrying capacities K₁, K₂. In particular, there is no competitive advantage to increasing your growth rate.

If a < 1, b < 1, the coexisting steady state (u∗, v∗) = ((1 − a)/(1 − ab), (1 − b)/(1 − ab)) is the only stable steady state. In this case interspecies competition is not too strong and the system asymptotically approaches a nonzero equilibrium value for both species, which is smaller for each species than the corresponding equilibrium value (1, 1) in the absence of competition.
If a > 1, b > 1 the system is bistable (both (1, 0) and (0, 1) are stable). The eventual winner
in this case depends on the initial conditions.
Note that the coexisting steady state exists (in a biologically relevant range) only if a, b < 1 or a, b > 1. From the scaling we have that

ab = α(K₂/K₁) · β(K₁/K₂) = αβ;

recalling that for species in the same ecological niche we generally take α = β = 1, we note that for such species ab = 1. Thus we conclude that in such a setting, where the species occupy the same niche and α = β = 1, the mathematical model reflects the principle of competitive exclusion, i.e., one of the species is always driven to extinction.
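The four outcomes can be illustrated numerically. A sketch under stated assumptions (forward Euler, c = 1, and arbitrary representative values of a and b in each regime; function names are our own):

```python
def compete(u0, v0, a, b, c=1.0, dt=0.01, steps=20000):
    """Forward-Euler integration of the nondimensional competition system (2.4.2)."""
    u, v = u0, v0
    for _ in range(steps):
        du = u * (1.0 - u - a * v)
        dv = c * v * (1.0 - v - b * u)
        u, v = u + dt * du, v + dt * dv
    return u, v

# Weak competition (a, b < 1): coexistence at ((1-a)/(1-ab), (1-b)/(1-ab)).
coexist = compete(0.5, 0.5, a=0.5, b=0.5)   # expect (2/3, 2/3)

# a > 1, b < 1: species v always wins, so (u, v) -> (0, 1).
v_wins = compete(0.5, 0.5, a=2.0, b=0.5)
```

In the bistable case a, b > 1 the same routine, started from different initial conditions either side of the saddle's stable manifold, converges to (1, 0) or to (0, 1).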

2.5 Hopf bifurcations


The term Hopf bifurcation (or Poincaré-Andronov-Hopf bifurcation) refers to the local birth or
death of a periodic solution from an equilibrium solution as a parameter in a model crosses a
critical value. In systems of differential equations such as those considered in this Chapter, a
Hopf bifurcation occurs when a pair of complex conjugate eigenvalues (with nonzero real part)
of the Jacobian matrix evaluated at a steady state cross the imaginary axis with nonzero speed.
Note in this case the bifurcation is such that a limit cycle is born.

2.5.1 Example. Consider the following system


dx/dt = αx − y − x(x² + y²)
dy/dt = x + αy − y(x² + y²).

The steady state of this system corresponds to (x⇤ , y⇤ )T = (0, 0)T . The Jacobian matrix of the
system is given by

J(x, y) = ( α − 3x² − y²   −1 − 2xy ; 1 − 2xy   α − x² − 3y² ).

Thus at the steady state (0, 0)ᵀ,

J(0, 0) = ( α   −1 ; 1   α ).

26
The eigenvalues solve

det(J(0, 0) − λI) = 0 ⟹ λ² − 2αλ + α² + 1 = 0.

Thus

λ₁,₂ = (2α ± √(4α² − 4(α² + 1)))/2 = α ± √(−1) = α ± i.

Hence as α approaches 0 from below, a pair of complex conjugate eigenvalues becomes purely imaginary and we have a Hopf bifurcation. At the point α = 0,

λ₁,₂ = ±i.

Thus recalling (2.2.6) the (linearised) solution may be written as

z(t) = a cos(t) + b sin(t).

Hence the solutions are expected to be periodic with period 2π. Note that in general the period of solutions is 2π/|Im(λ)|.

Finally, let us consider the case α > 0. In this case the steady state (0, 0)ᵀ is unstable as Re(λ₁,₂) = α > 0.

Can we say something about the dynamics of the system in this parameter regime, where
↵ > 0? The answer is: yes, we can. To do so, we will require the Poincaré-Bendixson theorem

2.6 Theorem (Poincaré-Bendixson). The Poincaré-Bendixson theorem states that:


In a two dimensional system (
dx
dt = f (x, y)
dy
dt = g(x, y),

if a trajectory is confined to a closed bounded region that contains no steady states, then the
trajectory must converge on a closed orbit.

2.6.1 Remark (Usage and implications of the Poincaré-Bendixson theorem). In order to apply
the above Theorem, we must determine sufficient conditions such that a trajectory is confined to
a closed bounded region. Such a condition is given by the following;
If we can find a confined set (or trapping region), R such that trajectories flow inward every-
where on the boundary of R, then all trajectories in R must be confined.
If R contains no steady states then the Poincaré-Bendixson theorem ensures that such a
region R must contains a closed orbit.

Let n be the outward normal to the boundary of the trapping region. We require
!
dx
n· dt <0
dy
dt

for all points (x, y)ᵀ on the boundary of R. If this inequality holds at a point on the boundary, it implies that the velocity vector of a solution trajectory, (dx/dt, dy/dt)ᵀ, points towards the interior of R.
How can we use the Poincaré-Bendixson theorem to learn more about example 2.5.1, for the
case where ↵ > 0? Consider the set in phase space

A = {(x, y) : d1 < x2 + y 2 < d2 },

where 0 < d₁ < α < d₂. Then the set A is a trapping region for the system since, on the inner boundary x² + y² = d₁, the outward pointing normal is given by n(x, y) = (−x, −y)ᵀ. Thus

n · (dx/dt, dy/dt)ᵀ = −αx² + xy + x²(x² + y²) − xy − αy² + y²(x² + y²) = (x² + y²)(x² + y² − α) = d₁(d₁ − α) < 0,

as d1 < ↵ by assumption. Similarly on the outer boundary x2 + y 2 = d2 , the outward pointing


normal is given by n(x, y) = (x, y)T and
!
dx
n· dt = d2 (↵ d2 ) < 0,
dy
dt

as d2 > ↵ by assumption. Hence A is a trapping region for the system. As the system has no
steady state in the region A the Poincaré-Bendixson theorem implies that solutions starting in
the region A must converge to a closed orbit.
We can even go a step further. For any initial condition inside the plane, we can define the
region A such that it includes that initial condition by simply choosing d1 and d2 appropriately.
Hence, all solutions of this dynamical system will converge to a limit cycle in the long run. In
general, whenever the conditions for a Hopf bifurcation are met, we can expect there to be at
least a small region of parameter space, on the right hand side of the imaginary axis, in which
solutions that start close to the relevant steady state converge to a limit cycle.
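For this particular example the conclusion can be verified directly: in polar coordinates the system reduces to dr/dt = αr − r³, dθ/dt = 1, so for α > 0 the limit cycle is the circle r = √α. The sketch below (illustrative only, forward Euler with our own parameter choices) checks that trajectories starting both inside and outside the cycle approach r = 1 when α = 1.

```python
import math

def simulate_hopf(alpha, x0, y0, dt=0.001, steps=20000):
    """Forward-Euler integration of the example system of Section 2.5."""
    x, y = x0, y0
    for _ in range(steps):
        r2 = x * x + y * y
        dx = alpha * x - y - x * r2
        dy = x + alpha * y - y * r2
        x, y = x + dt * dx, y + dt * dy
    return x, y

xi, yi = simulate_hopf(1.0, 0.1, 0.0)   # starts inside the limit cycle
xo, yo = simulate_hopf(1.0, 2.0, 0.0)   # starts outside the limit cycle
r_in = math.hypot(xi, yi)
r_out = math.hypot(xo, yo)
```

Both radii end up close to √α = 1, as the Poincaré-Bendixson argument predicts.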

Chapter 3

Delay differential equations

An assumption of the ODE models considered in Chapter 1 is that the rate of change of the
population at the present time depends only on the population level at the present time itself. This
does not take into account factors such as gestation (the period an organism spends developing
in the womb) or the time taken to reach sexual maturity (i.e., how long it takes the organism to
become capable of reproducing). One possible means of including such features in a continuous
time setting is to consider delay differential equations. In such a model, the growth rate at the
present time depends on the state of the population at earlier times in the past.
We consider here delay differential equations of the form
dN(t)/dt = f(N(t), N(t − τ)),

i.e., the rate of change of N at time t is a function of N at time t and of N at time t − τ, for some fixed delay τ > 0. Note that we need to specify data N(t) for all t ∈ [−τ, 0], and not just N(0) as was the case for ODE models.
3.0.1 Example (Logistic model with delay).
✓ ◆
dN (t) N (t ⌧ )
= rN (t) 1 , r, K > 0. (3.0.1)
dt K
Suppose that at t = t1 , N (t1 ) = K and that for t < t1 , N (t) < K, for example as in Figure
3.1, then N (t1 ⌧ ) < K, hence
✓ ◆
N (t1 ⌧ )
1 >0
K
thus from (3.0.1)
✓ ◆ ✓ ◆
dN (t1 ) N (t1 ⌧ ) N (t1 ⌧ )
= rN (t1 ) 1 = rK 1 > 0.
dt K K
dN (t)
By the same argument dt > 0 and N is increasing for all t 2 [t1 , t1 + ⌧ ). At the point t1 + ⌧
we have that

dN(t₁ + τ)/dt = rN(t₁ + τ)(1 − N(t₁)/K) = 0,

29
Figure 3.1: Initial data for the delayed Logistic equation.

since N(t₁) = K. Thus N(t) has a critical point at t = t₂ = t₁ + τ. Hence the evolution is as
in Figure 3.2.

Figure 3.2: Evolution of a solution of the delayed Logistic equation.

Investigating further we see that this critical point is a local maximum, since for t = t₂ + ε, with ε small and positive, we have that

dN(t)/dt = rN(t)(1 − N(t₂ + ε − τ)/K) < 0,

since N(t₂ + ε − τ) = N(t) for some t ∈ (t₁, t₁ + τ) and hence N(t₂ + ε − τ) > K.
Continuing the argument we see that N keeps decreasing and there exists a time t₃ > t₁ + τ at which point N(t₃) = K, but N(t₃ − τ) > K so dN/dt < 0. When t = t₄ = t₃ + τ, N(t₄ − τ) = K and hence dN/dt = 0 and N has a critical point. As above, one can show this point is a local minimum and the evolution is as depicted in Figure 3.3.
Continuing in this fashion we see that we have an oscillatory solution N (t) which increases
from K to its local maximum in time t2 t1 = ⌧ and decreases from K to its local minimum in
time t4 t3 = ⌧ and hence we might expect an oscillatory solution with period 4⌧ as depicted
in Figure 3.4
Note, however, that this argument is a little rough and we do not in fact have precise information on the time t₃ at which the solution declines from its maximum value N(t₂) to N(t₃) = K. Nevertheless, numerical simulations of (3.0.1) reflect the behaviour our argument suggests: oscillations with period governed by the (nondimensional) delay.
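Such a simulation is easy to set up: one discretises time and keeps a buffer of past values so that N(t − τ) can be looked up. The sketch below (illustrative only; forward Euler with a constant history, in the nondimensional variables r = K = 1 introduced later in this chapter) shows decay to the steady state for a small delay and sustained oscillations for a large one.

```python
def delayed_logistic(tau, dt=0.01, t_end=100.0, n0=0.5):
    """Forward-Euler integration of dN/dt = N(t)(1 - N(t - tau)),
    with constant history N(t) = n0 for t in [-tau, 0]."""
    lag = int(round(tau / dt))
    hist = [n0] * (lag + 1)          # hist[0] = N(t - tau), hist[-1] = N(t)
    out = []
    for _ in range(int(t_end / dt)):
        n_new = hist[-1] + dt * hist[-1] * (1.0 - hist[0])
        hist.pop(0)
        hist.append(n_new)
        out.append(n_new)
    return out

small_delay = delayed_logistic(1.0)   # tau < pi/2: N -> 1
large_delay = delayed_logistic(2.0)   # tau > pi/2: sustained oscillations
```

The critical value τ = π/2 separating the two behaviours is derived in Section 3.3.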

Figure 3.3: Evolution of a solution of the delayed Logistic equation.

Figure 3.4: Evolution of a solution of the delayed Logistic equation.

3.1 No oscillations without delay


It was not a coincidence that we did not observe oscillatory solutions when studying the ODEs of
Chapter 1. We now show that for a large class of ODEs (including all the models we encountered
so far in this course) oscillatory solutions are not possible.

3.1.1 Definition (Periodic function). A function N : R ! R is said to be periodic with period


T > 0 if

1. N (t + T ) = N (t) for all t 2 R and

2. T is the smallest value for which the above condition holds.

Note that according to the above definition, steady states are not periodic functions since we
do not regard constant functions as periodic (T > 0).

3.2 Theorem (No oscillations in ODEs without delay). If N (t) is a solution of the ODE

dN (t)
= f (N (t)) (3.2.1)
dt
with f : R ! R continuous, then N (t) can not be periodic.

Proof. We prove the result via contradiction. Assume, contrary to the claim, that the ODE (3.2.1) has a periodic solution with period T. Multiplying both sides of (3.2.1) by dN/dt and integrating

over one period (0, T), say, gives

∫₀ᵀ (dN/dt)² dt = ∫₀ᵀ f(N) (dN/dt) dt.   (3.2.2)

As we know that N(t) is a periodic solution, and hence not constant, we have that the left hand side of the above satisfies

∫₀ᵀ (dN/dt)² dt > 0,

since (dN/dt)² is strictly positive on some portion of the period (0, T). On the other hand, investigating the right hand side of (3.2.2), we see that by a change of variables

∫₀ᵀ f(N) (dN/dt) dt = ∫_{N(0)}^{N(T)} f(N) dN = ∫_{N(0)}^{N(0)} f(N) dN = 0,

where we have used the fact that N(t) is periodic with period T, and hence N(T) = N(0), as well as the fact that the integral of a function over an interval of zero length is zero. Thus we arrive at a contradiction, since the left and right hand sides of (3.2.2) cannot be equal.

3.3 Linear stability analysis


The arguments above suggest that delays can induce oscillations in models which preclude os-
cillations in the absence of delays. We can see this fact by carrying out a linear stability analysis
of the delayed logistic equation (3.0.1). We know that in the absence of delays, i.e., if ⌧ = 0, that
(3.0.1) has two steady states N ⇤ = 0 or N ⇤ = K and that 0 is unstable and K is asymptotically
stable.
We will use linear stability analysis to show that if the delay ⌧ is sufficiently large, solutions
to (3.0.1) exhibit oscillations and that the steady state N ⇤ = K is unstable whilst if the delay ⌧
is small the steady state N ⇤ = K remains stable.
Before dealing with the linear stability analysis of (3.0.1) let us discuss the linear stability
analysis of a general delay differential equation of the form

dn(t)
= f (n(t), n(t ⌧ )),
dt
that has a constant solution (steady state) n(t) = n∗. Consider a perturbation η(t) from the steady state such that |η(t)| ≪ 1; then

n(t) = n∗ + η(t)

satisfies
dn d(n⇤ + ⌘) d⌘
= = .
dt dt dt

From the equation, using Taylor expansion, we have

dn/dt = f(n∗ + η(t), n∗ + η(t − τ)) = f(n∗, n∗) + η(t)∂₁f(n∗, n∗) + η(t − τ)∂₂f(n∗, n∗) + O(η²),

where f(n∗, n∗) = 0 and ∂ᵢ denotes the partial derivative with respect to the i-th argument.


Hence, if we neglect the (small) higher order terms, we obtain the linearised delay differential equation governing the evolution of the perturbations
d⌘(t)
= ⌘(t)@1 f (n⇤ , n⇤ ) + ⌘(t ⌧ )@2 f (n⇤ , n⇤ ). (3.3.1)
dt
3.3.1 Example (Linear stability analysis of the delayed logistic equation). We start by nondi-
mensionalising. Let
N̂ = N/[N ] t̂ = t/[t] ⌧ˆ = ⌧ /[t]
then (3.0.1) becomes

([N]/[t]) dN̂(t̂)/dt̂ = r[N]N̂(t̂)(1 − [N]N̂(t̂ − τ̂)/K),

which can be simplified to

dN̂(t̂)/dt̂ = r[t]N̂(t̂)(1 − [N]N̂(t̂ − τ̂)/K).

We set [N] = K and [t] = 1/r, which gives

dN̂(t̂)/dt̂ = N̂(t̂)(1 − N̂(t̂ − τ̂)).
For notational ease, we now drop the carets and omit the argument if it is simply t and consider
the nondimensional delayed logistic equation
dN/dt = N(1 − N(t − τ)) =: f(N, N(t − τ)).   (3.3.2)
Steady states are N ⇤ 2 R such that f (N ⇤ , N ⇤ ) = 0. Thus N ⇤ = 0 or N ⇤ = 1.
For linear stability analysis, from (3.3.1) we must compute

∂₁f(N∗, N∗) = 1 − N∗ and ∂₂f(N∗, N∗) = −N∗.

Hence at the steady state N ⇤ = 0 we have that perturbations from the steady state satisfy
dη/dt = η(t)(1 − 0) + η(t − τ)·0 = η(t);

multiplying by e^{−t} gives

e^{−t}(dη/dt − η) = 0 ⟹ d(ηe^{−t})/dt = 0 ⟹ η(t)e^{−t} = η(0).

Figure 3.5: Sketch of the curves y = λ and y = −e^{−λτ} for several values of τ, illustrating that we may have either 0, 1 or 2 real solutions to (3.3.3) depending on the size of the delay.

Thus ⌘(t) = ⌘(0)et and the perturbations grow exponentially so the steady state N ⇤ = 0 is
unstable (as was the case without delay).
At the steady state N ⇤ = 1 we have
dη/dt = η(t)(1 − 1) + η(t − τ)·(−1) = −η(t − τ).

We look for solutions to the above equation of the form η(t) = η₀e^{λt}, which gives

dη/dt = λη₀e^{λt} and η(t − τ) = η₀e^{λ(t−τ)} = η₀e^{λt}e^{−λτ}.

Thus

λη₀e^{λt} = −η₀e^{λt}e^{−λτ} ⟹ λ = −e^{−λτ}.   (3.3.3)

For stability we require Re(λ) < 0.
We split the remainder of the analysis into two cases

Case 1: λ ∈ ℝ. We note that in this case (3.3.3) has no solutions unless the delay is sufficiently small! (see Figure 3.5). In any case, if a solution exists we note that e^{−λτ} > 0 and hence λ = −e^{−λτ} < 0. Thus the perturbation dies away and the steady state is stable.

Case 2: λ ∈ ℂ. Our previous analysis suggested we should see oscillations that do not decay. We recall from previous chapters that such oscillations are associated with λ ∈ ℂ with Im(λ) ≠ 0 and Re(λ) > 0. Note that the λ's in this case always come in complex conjugate pairs.

Let λ = μ + iω with ω ≠ 0; then from (3.3.3) we have

μ + iω = −e^{−(μ+iω)τ} = −e^{−μτ}(cos(−ωτ) + i sin(−ωτ)) = −e^{−μτ}(cos(ωτ) − i sin(ωτ)).

Equating real and imaginary parts gives

μ = −e^{−μτ} cos(ωτ),   ω = e^{−μτ} sin(ωτ).   (3.3.4)
For stability we require

Re(λ) = μ = −e^{−μτ} cos(ωτ) < 0.

Thus, we need

cos(ωτ) > 0 ⟹ −π/2 < ωτ < π/2.

Now, using (3.3.4), we note that

1/τ = ω/(ωτ) = e^{−μτ} sin(ωτ)/(ωτ) > e^{−μτ} · (1/(π/2)) = (2/π)e^{−μτ},

where we have used the graphical argument of Figure 3.6 (sin(x)/x > 2/π for |x| < π/2) for the last inequality. Hence

1/τ > (2/π)e^{−μτ} ⟹ τ < (π/2)e^{μτ} < π/2,

where we have used the fact that for μ < 0, e^{μτ} < 1. Hence the steady state N∗ = 1 is stable only for τ < π/2.
stable only for ⌧ < ⇡2 .



Indeed, if we define τ̄ such that ωτ̄ = π/2, from (3.3.4) we have that

μ(τ̄) = −e^{−μτ̄} cos(π/2) = 0

and

ω(τ̄) = e^{−μτ̄} sin(π/2) = e⁰ · 1 = 1,

and hence ωτ̄ = π/2 ⟹ τ̄ = π/2. Finally, note that

∂τ μ(τ̄) = ∂τ(−e^{−μτ} cos(ωτ))|_{τ=τ̄} = μe^{−μτ̄} cos(ωτ̄) + ωe^{−μτ̄} sin(ωτ̄) = 0 + 1 = 1,

thus at τ̄ = π/2 an eigenvalue crosses the imaginary axis with nonzero speed, and a stable solution gives way to an unstable solution. Therefore, a Hopf bifurcation occurs.
Indeed, recalling that our ansatz for the solution was of the form η(t) = η₀e^{λt}, at the bifurcation point we have that

η(t) = η₀e^{(μ+iω)t} = η₀(cos(t) + i sin(t)),

where we have used the fact that μ = 0 and ω = 1 at the bifurcation point in the last step. Hence we expect periodic solutions with period 2π to emerge. As τ̄ = π/2, we have that 4τ̄ = 2π, and hence the results of our linear stability analysis agree with the qualitative arguments carried out earlier in the chapter.
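The characteristic equation (3.3.3) can also be explored numerically. The sketch below (illustrative only; the Newton iteration and its starting point near i are our own choices) checks that λ = i is an exact root at τ = π/2, and that for τ = 1 < π/2 the nearby root of (3.3.3) has negative real part, consistent with stability.

```python
import cmath
import math

def char_residual(lam, tau):
    """Residual of the characteristic equation (3.3.3), lambda = -exp(-lambda tau)."""
    return lam + cmath.exp(-lam * tau)

def newton_root(tau, lam=1j, iters=50):
    """Newton's method for a root of lambda + exp(-lambda tau) = 0, started near i."""
    for _ in range(iters):
        f = lam + cmath.exp(-lam * tau)
        fp = 1.0 - tau * cmath.exp(-lam * tau)
        lam -= f / fp
    return lam

# At tau = pi/2 the root lambda = i is exact (the Hopf bifurcation point).
bif_residual = abs(char_residual(1j, math.pi / 2))

root_stable = newton_root(1.0)   # tau = 1 < pi/2: expect Re(lambda) < 0
```

The same iteration applied for τ slightly above π/2 yields a root with small positive real part, confirming that the crossing happens at the predicted delay.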

Figure 3.6: Sketch of the curves y = sin(x) and y = x for x ∈ [0, π/2], illustrating that the quotient sin(x)/x for x ∈ [−π/2, π/2] is smallest when x = ±π/2. Note that as the two functions are odd, the quotient is an even function. It is left as an exercise to show that at the origin the quotient is one.

36
Chapter 4

Biochemical kinetics

The study of biochemical kinetics concerns the dynamics (i.e., changes in time) of concentra-
tions of substances in biological systems.

4.1 Law of mass action


Suppose that two chemicals, A and B say, react together on collision to produce a product C.
The Law of mass action states that the rate at which the reaction takes place is proportional to
the number of sufficiently energetic collisions between the molecules A and B per unit time,
which in turn is taken to be proportional to the concentrations of A and B.
Thus we write
k
A + B ! C. (4.1.1)
Denoting the concentration of the chemicals A, B and C by [A], [B] and [C] respectively and
applying the law of mass action to (4.1.1), we obtain

d[C]
= k[A][B]
dt
The constant k is known as the rate constant for the reaction.
Many reactions are reversible. In the above example, this would mean that C could dissociate into its components A and B. This is denoted by

A + B ⇌ C,   (4.1.2)

with forward rate constant k and backward rate constant k₋.

Applying the Law of mass action to (4.1.2), we obtain the following system of differential equations for the dynamics of the concentrations (which we denote by a, b and c for simplicity):

da/dt = k₋c − kab
db/dt = k₋c − kab   (4.1.3)
dc/dt = kab − k₋c.

Note that (4.1.3) implies that
d(a + c) d(b + c)
= 0; = 0,
dt dt
thus the sum of the concentrations a + c and b + c are constant. This reflects that for each
molecule of A or B lost exactly one molecule of C is created.
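Both the conservation laws and the equilibrium of (4.1.3) can be checked with a few lines of code. A minimal sketch (illustrative only; forward Euler with arbitrary initial concentrations and kf = kr = 1 for the forward and backward rate constants):

```python
def reversible_reaction(a0, b0, kf, kr, dt=0.001, steps=50000):
    """Forward-Euler integration of (4.1.3) for A + B <-> C, starting with no C."""
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        flux = kf * a * b - kr * c      # net forward reaction rate
        a, b, c = a - dt * flux, b - dt * flux, c + dt * flux
    return a, b, c

a_eq, b_eq, c_eq = reversible_reaction(1.0, 2.0, 1.0, 1.0)
```

The Euler update changes a and c by exactly opposite amounts, so a + c and b + c are conserved to rounding error, and at long times the concentrations settle at the detailed-balance state kf·a·b = kr·c.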
For the more general reaction
k
A + mB ! nB + pC, (4.1.4)

the Law of mass action yields the differential equations

da/dt = −kabᵐ   (4.1.5)
db/dt = (n − m)kabᵐ   (4.1.6)
dc/dt = pkabᵐ.   (4.1.7)

The reaction equation (4.1.4) states that for every reaction involving one molecule of A, p molecules of C are created and there is a net gain of (n − m) molecules of B. This is reflected in the relations

d(pa + c)/dt = 0   (4.1.8)
d(pb − (n − m)c)/dt = 0.   (4.1.9)

Equation (4.1.8) yields

p(da/dt) + dc/dt = 0 ⟹ (1/p)(dc/dt) = −da/dt.   (4.1.10)

Similarly, equation (4.1.9) yields

p(db/dt) − (n − m)(dc/dt) = 0 ⟹ (1/p)(dc/dt) = (1/(n − m))(db/dt).   (4.1.11)

From (4.1.5), (4.1.10) and (4.1.11) we obtain

kabᵐ = −da/dt = (1/(n − m))(db/dt) = (1/p)(dc/dt),   (4.1.12)

where −da/dt is the decrease in A per reaction, (1/(n − m))(db/dt) reflects the net increase in B per reaction, and (1/p)(dc/dt) the increase in C per reaction.

In order to understand how the abm term arises in the above, consider for example the reaction
k
A + 2B ! C.

Clearly, we can write the above reaction as


k
A + B + B ! C.

The Law of mass action tells us that C is produced at a rate that is proportional to the “three”
reactants A, B and B, i.e.,
dc
= kabb = kab2 .
dt
4.1.1 Remark (Applicability of the Law of mass action). If all collisions are equally probable
and there are large numbers of molecules, the rate of change of concentration should obey the
Law of mass action. Thus the law of mass action is a good approximation to reality provided
chemicals are dilute and well mixed.

4.2 Brusselator
The Brusselator is a theoretical model which is used to show that chemical systems can oscillate; it is based on a hypothetical reaction rather than a specific chemical mechanism. The reaction under consideration in the Brusselator model is given by

A →^{k₁} X
B + X →^{k₂} Y + D
2X + Y →^{k₃} 3X
X →^{k₄} E.

In this model, A and B are assumed to be maintained at a constant level. D and E are assumed
not to participate in any further reactions and thus their concentrations are irrelevant. Accord-
ingly X and Y are the dependent variables.
From the Law of mass action, the corresponding differential equations for the concentrations of X and Y are

dx/dt = k₁A − k₂Bx + k₃x²y − k₄x
dy/dt = k₂Bx − k₃x²y.
In order to nondimensionalise, we introduce the following scalings

u = √(k₃/k₄) x,  v = √(k₃/k₄) y,  τ = k₄t,  a = (k₁A/k₄)√(k₃/k₄),  b = k₂B/k₄.
Substituting gives

k₄√(k₄/k₃) du/dτ = k₁A − k₂B√(k₄/k₃) u + k₄√(k₄/k₃) u²v − k₄√(k₄/k₃) u

⟹ du/dτ = (k₁A/k₄)√(k₃/k₄) − (k₂B/k₄)u + u²v − u

⟹ du/dτ = a − bu + u²v − u.

Similarly,

k₄√(k₄/k₃) dv/dτ = k₂B√(k₄/k₃) u − k₄√(k₄/k₃) u²v

⟹ dv/dτ = (k₂B/k₄)u − u²v

⟹ dv/dτ = bu − u²v.
Thus the nondimensional system is given by
(
du
= a bu + u2 v u
d⌧
dv
(4.2.1)
d⌧ = bu u2 v.

The steady states correspond to (u∗, v∗)ᵀ ∈ ℝ² such that

a − (b + 1)u∗ + u∗²v∗ = 0
bu∗ − u∗²v∗ = 0.

Summing the two equations we obtain

a − u∗ = 0 ⟹ u∗ = a ⟹ v∗ = ab/a² = b/a.

Hence the system possesses a unique steady state (u∗, v∗)ᵀ = (a, b/a)ᵀ.
To understand the stability of the steady state, we compute the Jacobian, which is given by

J(u, v) = ( 2uv − b − 1   u² ; b − 2uv   −u² ).

Hence at the steady state

J∗ = J(a, b/a) = ( 2b − b − 1   a² ; b − 2b   −a² ) = ( b − 1   a² ; −b   −a² ).

Hence Tr(J∗) = b − 1 − a² and det(J∗) = −a²(b − 1) + a²b = a² > 0. Thus the following cases are all possibilities:

1. Stable node (Tr(J ⇤ ) < 0, Tr(J ⇤ )2 > 4 det(J ⇤ )).

2. Unstable node (Tr(J ⇤ ) > 0, Tr(J ⇤ )2 > 4 det(J ⇤ )).

3. Stable spiral (Tr(J ⇤ ) < 0, Tr(J ⇤ )2 < 4 det(J ⇤ )).

4. Unstable spiral (Tr(J ⇤ ) > 0, Tr(J ⇤ )2 < 4 det(J ⇤ )).

5. Centre (Tr(J ⇤ ) = 0).

6. Degenerate steady state (Tr(J ⇤ )2 = 4 det(J ⇤ )).

For the phase plane analysis, we start by noting that

du/dt = a − (b + 1)u + u²v, which is > 0 if v > ((b + 1)u − a)/u² and < 0 otherwise;
dv/dt = u(b − uv), which (for u > 0) is > 0 if v < b/u and < 0 otherwise.

We compute the nullclines:
u-nullcline: {v = ((1 + b)u − a)/u²}.
v-nullclines: {v = b/u} and {u = 0}.

On the u-nullcline {v = ((1 + b)u − a)/u²}:

dv/dt = bu − u² · ((1 + b)u − a)/u² = a − u, which is > 0 if u < a and < 0 if u > a.

On the v-nullcline {u = 0}:

du/dt = a > 0.

On the v-nullcline {v = b/u}:

du/dt = a − (b + 1)u + u²(b/u) = a − u, which is > 0 if u < a and < 0 if u > a.

The corresponding phase plane plot is shown in Figure 4.1.

4.2.1 Example (Trapping region for the Brusselator). Similarly to when we studied Hopf bifurcations before, we can use the Poincaré-Bendixson theorem to identify initial conditions that must lead to oscillations. All we need to do is to identify a trapping region for our dynamical system.
A trapping region for the Brusselator model (4.2.1) is bounded by, with k > a a parameter,
a
u= (4.2.2)
b+1
u=k (4.2.3)
v=0 (4.2.4)
b(b + 1)
v= (4.2.5)
a
v = −u + ((b + 1)k − a)/k² + k.   (4.2.6)
An example sketch of a trapping region is shown in figure 4.2.

Figure 4.1: Nullclines with solution flows on the nullclines for the nondimensional Brusselator system (4.2.1).

b(b+1)
a

a
u
b+1 k

Figure 4.2: Trapping region for the nondimensional Brusselator system (4.2.1)

42
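The oscillations predicted by this trapping-region argument are easy to observe numerically. A sketch under stated assumptions (forward Euler; the parameter choices a = 1 with b = 3 and b = 0.5 are ours, picked to land on either side of the instability threshold b = 1 + a²):

```python
def brusselator(a, b, u0, v0, dt=0.001, steps=100000):
    """Forward-Euler integration of the nondimensional Brusselator (4.2.1);
    returns the history of u."""
    u, v = u0, v0
    us = []
    for _ in range(steps):
        du = a - (b + 1.0) * u + u * u * v
        dv = b * u - u * u * v
        u, v = u + dt * du, v + dt * dv
        us.append(u)
    return us

osc = brusselator(1.0, 3.0, 1.0, 1.0)     # Tr(J*) = b - 1 - a^2 = 1 > 0: oscillations
stab = brusselator(1.0, 0.5, 1.2, 0.6)    # Tr(J*) = -1.5 < 0: decay to (a, b/a) = (1, 0.5)
```

In the unstable case u settles onto a large-amplitude limit cycle; in the stable case it spirals into the steady state u∗ = a.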
4.3 Enzyme reaction (Michaelis-Menten)
Biochemical reactions are often controlled by enzymes. Enzyme reactions involve a substance (the substrate) that is converted into another substance (the product). The enzyme, which is present at very low concentration, increases the rate of reaction by lowering the activation energy (i.e., it catalyses the reaction).

In the following reaction scheme, the enzyme E converts the substrate S into the product P through a two-step process, represented schematically by
1 k k2
S + E )* C ! P + E,
k 1

where C denotes the substrate/enzyme complex and k1 , k 1 , k2 are rate constants. (The reverse
reaction P + E ! C is often considered to occur so slowly such that it may be neglected).
Define s = [S], e = [E], c = [C] and p = [P ]. The LMA applied to the above reaction
mechanism gives:
8
> ds
>
> dt = k 1 c k1 se
>
< de = k c + k c k se = (k + k )c k se
dt 1 2 1 1 2 1
> dc
>
> dt = k1 se (k 1 + k2 )c
>
: dp = k c,
dt 2

together with the initial conditions e(0) = e0, s(0) = s0, c(0) = p(0) = 0. From the equations we see that

    d(e + c)/dt = 0  ⟹  e + c = e0,

i.e., the total amount of enzyme (e = amount of free enzyme, c = amount of enzyme bound to substrate) is conserved. Also we see that

    d(s + c + p)/dt = 0  ⟹  s + c + p = s0,

i.e., the total amount of substrate (original form, bound to enzyme and converted product) is conserved.
Note this implies we can reduce the system of four ODEs to a system of two ODEs

    ds/dt = k−1 c − k1 s (e0 − c)
    dc/dt = k1 s (e0 − c) − (k−1 + k2) c,          (4.3.1)

together with the initial conditions s(0) = s0, c(0) = 0, and with the conservation laws yielding e = e0 − c and p = s0 − s − c.
We observe that

    d(s + c)/dt = −k2 c,

and hence eventually all the substrate becomes product, with the substrate and complex concentrations tending to zero (consequently the enzyme becomes free again), so that (4.3.1) admits the asymptotic solution given by c = s = 0, p = s0. This is due to the irreversibility of the conversion of complex to product.
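The two conservation laws are easy to verify numerically. The sketch below (not part of the notes; the rate constants and initial concentrations are arbitrary illustrative values) integrates the full mass-action system with forward Euler and checks that e + c and s + c + p stay constant while p → s0.

```python
# Integrate the full law-of-mass-action system for S + E <-> C -> P + E
# and check the conservation laws e + c = e0 and s + c + p = s0.

def mm_step(s, e, c, p, k1, km1, k2, dt):
    """One forward-Euler step of the mass-action ODEs (km1 stands for k_{-1})."""
    ds = km1 * c - k1 * s * e
    de = (km1 + k2) * c - k1 * s * e
    dc = k1 * s * e - (km1 + k2) * c
    dp = k2 * c
    return s + dt * ds, e + dt * de, c + dt * dc, p + dt * dp

s, e, c, p = 10.0, 1.0, 0.0, 0.0    # s0 = 10, e0 = 1 (illustrative)
e0, s0 = e, s
for _ in range(200000):             # integrate up to t = 200
    s, e, c, p = mm_step(s, e, c, p, k1=1.0, km1=0.5, k2=1.0, dt=1e-3)

print(round(e + c, 6), round(s + c + p, 6))  # the two conserved quantities
print(round(p, 3))                           # nearly all substrate converted
```

By the end of the run almost all of the substrate has been converted to product, in line with the asymptotic solution c = s = 0, p = s0 derived above.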

4.3.1 Equilibrium approximation

Michaelis and Menten solved equation (4.3.1) using an equilibrium assumption. They assumed that k−1 ≫ k2, and therefore used the approximation

    k1 s e = k−1 c,

which means that an equilibrium is established between E, S and C; the slow step is the breakdown of C to produce P and E.
Since e + c = e0 we find that

    k1 s (e0 − c) = k−1 c
    ⟹  c = k1 s e0 / (k−1 + k1 s) = s e0 / (km + s),

where km = k−1/k1. Hence the product P of the reaction is produced at the rate

    V = dp/dt = k2 c = vmax s / (km + s),

where vmax = k2 e0 is the maximum reaction velocity, attained when all the enzyme is in complex with the substrate; further increase of the rate cannot be achieved by increasing the concentration of substrate.
Figure 4.3: Product production rate against substrate concentration under the equilibrium assumption.
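The shape of the curve in Figure 4.3 can be confirmed directly from the rate law. A minimal check (the values of vmax and km are arbitrary): at s = km the rate is exactly vmax/2, which is why km is sometimes called the half-saturation constant, and for s ≫ km the rate saturates towards vmax.

```python
# The Michaelis-Menten rate law V(s) = vmax*s/(km + s): half-maximal at
# s = km, saturating towards vmax for large s. Parameter values arbitrary.

def mm_rate(s, vmax, km):
    return vmax * s / (km + s)

vmax, km = 2.0, 0.4
print(mm_rate(km, vmax, km))         # equals vmax / 2
print(mm_rate(100 * km, vmax, km))   # close to vmax for large s
```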

4.3.2 Quasi-steady-state approximation

Briggs and Haldane pointed out that the Michaelis-Menten assumption that an equilibrium exists between E, S and C is not always justified. They assumed that the initial concentration of substrate greatly exceeds the initial enzyme concentration, i.e., s0 ≫ e0. We will show (in §4.3.4) that this leads to a separation of timescales such that initially there is a fast mixing stage (typically lasting milliseconds), after which the complex C remains approximately constant until the substrate has been consumed. Hence c remains approximately at steady state and we may use the approximation

    dc/dt ≈ 0.
The above approximation is called the quasi-steady-state hypothesis. With this approximation, from (4.3.1) we have

    k1 s (e0 − c) ≈ (k−1 + k2) c,          (4.3.2)

thus,

    c ≈ k1 s e0 / (k1 s + k−1 + k2) = s e0 / (km + s),          (4.3.3)

where km = (k−1 + k2)/k1. Substituting (4.3.2) into the equation for s in (4.3.1) gives

    ds/dt ≈ −k2 c = −vmax s / (km + s),          (4.3.4)

where vmax = k2 e0.
Note (4.3.3) and (4.3.4) are only an approximation to the full system, and they are valid only when the enzyme concentration is much lower than the substrate concentration.
Inserting the expression (4.3.3) into the equation for the rate of change of p we obtain

    dp/dt = v = k2 c = vmax s / (km + s),  where vmax = k2 e0.

The graph of production against substrate concentration looks as in Figure 4.3, albeit with a different value of km.

4.3.3 Comparison of the two approaches

                 Equilibrium approximation      Quasi-steady-state approximation
  Assumption     k−1 ≫ k2                       s0 ≫ e0  (dc/dt ≈ 0)
  Equation       dp/dt = vmax s/(km + s)        dp/dt = vmax s/(km + s)
                 with vmax = k2 e0              with vmax = k2 e0
                 and km = k−1/k1                and km = (k−1 + k2)/k1

4.3.4 Fast and slow timescales for the quasi-steady-state approximation

The quasi-steady-state approximation is based on the observation that there is a fast and a slow timescale. The reactions involving C occur on the fast timescale, and on the slow timescale C is almost constant.
To understand this from a mathematical perspective, consider the following nondimensionalisation of (4.3.1). Let s̄ = s/s0, c̄ = c/e0, ē = e/e0, p̄ = p/s0 and τ = k1 e0 t. Substituting the scalings into (4.3.1) we obtain the nondimensional system

    ds̄/dτ = K′m c̄ − s̄(1 − c̄)
    ε dc̄/dτ = s̄(1 − c̄) − Km c̄,          (4.3.5)

where ε = e0/s0 ≪ 1, K′m = k−1/(s0 k1), and Km = (k−1 + k2)/(k1 s0), together with the initial conditions s̄(0) = 1 and c̄(0) = 0.
From the equations (4.3.5) the reaction rate of c̄ is much faster than that of s̄ (since dc̄/dτ = (1/ε)(s̄(1 − c̄) − Km c̄) is large when ε ≪ 1). Since ε ≪ 1 we may assume that after a fast initial transient, ε dc̄/dτ ≈ 0; this is an alternative formulation of the quasi-steady-state hypothesis, and it leads to exactly the same calculation for the rate equation of the product as was carried out above.

Chapter 5

Single species, discrete time models

We aim to understand how the rate of growth or decay of an isolated population is determined.
Let us assume that the size Nn of a population at time n completely determines its size at time
n + 1. Thus we have
Nn+1 = f (Nn ),
which is a first order difference equation or recurrence relation. The use of discrete time may
be appropriate if the population is measured at discrete points, so that data for births and deaths
are only available at discrete times. Although we will not pursue this fact further in these notes,
difference equations also naturally arise when one considers discretisations of ODEs. Hence
understanding the behaviour of solutions to difference equations is also relevant if one wishes
to understand the behaviour of numerical solutions when one approximates the solution to an
ODE.

5.0.1 Example (Malthusian equation in discrete time). Consider a continuously breeding population measured at discrete points. Let

• b denote the per capita production or reproduction between measurements, i.e., b ≥ 0 is the average number of births to any given individual in between measurements of the population,

and let

• d denote the per capita mortality between measurements, i.e., d ∈ [0, 1] is the probability of any given individual dying between measurements of the population.

Thus in between the censuses at times n and n + 1, the total number of births is given by bNn and the total number of deaths is given by dNn. Therefore the population size at time n + 1 is given by

    Nn+1 = Nn + bNn − dNn
         = (1 + b − d)Nn          (5.0.1)
         = λNn,

where the last equality defines λ := 1 + b − d ≥ 0, which is called the (net) growth ratio. The linear difference equation (5.0.1) is known as the Malthusian equation in discrete time. The solution of (5.0.1) with initial population size N0 is

    N1 = λN0
    N2 = λN1 = λ²N0
    ...
    Nn = λⁿN0.

Figure 5.1: Evolution of the population size under the discrete Malthusian equation for different net growth ratios: growth for λ > 1 (b > d), a constant population for λ = 1 (b = d), and decay for λ < 1 (b < d).
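The closed-form solution Nn = λⁿ N0 can be checked against direct iteration of Nn+1 = λNn; a minimal sketch (illustrative values of λ cover the three regimes of Figure 5.1):

```python
# Direct iteration of N_{n+1} = lam * N_n versus the closed form lam**n * N_0.

def iterate_malthus(lam, n0, n_steps):
    n = n0
    for _ in range(n_steps):
        n = lam * n
    return n

for lam in (1.5, 1.0, 0.5):   # growth, constant, decay
    direct = iterate_malthus(lam, n0=100.0, n_steps=10)
    closed = lam ** 10 * 100.0
    print(lam, round(direct, 6), round(closed, 6))  # the two agree
```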

5.0.2 Example (Insect population dynamics). Consider a population of insects that emerge from
their eggs in spring, lay their eggs in the summer and die in the autumn. The number of insects in
the (n + 1)’th generation therefore depends only on the number of insects in the n’th generation
(as the insects from older generations have all died).
Let λ be the average number of eggs laid by insects in the n'th generation; the number of insects in the (n + 1)'th generation is then given by

    Nn+1 = λNn,

i.e., a discrete time Malthusian equation with net growth ratio λ.


In order to obtain a more realistic model we take into account the fact that not all the offspring produced by each insect survive to be counted as insects in the next generation. Assume that only a fraction s(N) ∈ [0, 1] survive. The dependence of s on the population size could arise, for example, if the newly hatched insects compete for the same pool of limited resources.

The number of insects in the (n + 1)'th generation is now given by

    Nn+1 = λ s(Nn) Nn.          (5.0.2)

Alternatively we may write

    Nn+1 = Nn F(Nn)   or   Nn+1 = f(Nn),

where we have introduced the functions F(·) and f(·), which we call the per capita production and the production respectively.
A model is called density-dependent if the per capita production F depends on N. Note that for the Malthusian equation (5.0.1), F(N) = λ, a constant, hence the model is not density-dependent, whilst for the insect model (5.0.2), F(N) = λ s(N), and hence the model is density-dependent.

5.1 Cobweb maps

Consider the first order difference equation or recurrence relation

    Nn+1 = f(Nn).          (5.1.1)

The evolution of models such as (5.1.1) may be investigated graphically using cobweb maps. One essentially plots iterates Nn+1 against Nn for n = 0, 1, 2, . . .. The following list details the steps in constructing a cobweb map of the model (5.1.1) with initial data N0.

1. Plot the curve y = f(x) and the line y = x.
2. Mark N0 on the x axis.
3. Draw the vertical line connecting (N0, 0) and the curve (the point on the curve has coordinates (N0, N1)).
4. Draw the horizontal line connecting the computed point (N0, N1) with the line y = x (the point on the line y = x has coordinates (N1, N1)).
5. Draw the vertical line connecting the point (N1, N1) and the curve (the point on the curve has coordinates (N1, N2)).
6. Iterate the procedure.

The result is that we obtain the sequence of points N0, N1, N2, . . . that correspond to the solution of (5.1.1). Figure 5.2 shows a sketch of a cobweb map.
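The cobweb construction simply records successive iterates N0, N1 = f(N0), N2 = f(N1), and so on. A minimal sketch (using the logistic map f(x) = µx(1 − x) as an illustrative production function; µ = 2.5 is an arbitrary choice with a stable positive fixed point):

```python
# Record the iterate sequence underlying a cobweb map.

def cobweb_points(f, n0, n_steps):
    """Return the iterate sequence [N0, N1, ..., N_{n_steps}]."""
    points = [n0]
    for _ in range(n_steps):
        points.append(f(points[-1]))
    return points

mu = 2.5
f = lambda x: mu * x * (1.0 - x)      # illustrative map
seq = cobweb_points(f, n0=0.2, n_steps=30)
print(round(seq[-1], 6))              # iterates settle at the fixed point 1 - 1/mu
```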
5.1.1 Definition (Steady-state solution). A solution N* to a difference equation of the form (5.1.1) is called a steady-state solution, equilibrium or fixed point if the solution to the model (5.1.1) with initial data N* satisfies Nn = N* for all n.
From (5.1.1) we see that this is equivalent to requiring

    N* = f(N*).

Figure 5.2: Example sketch of a cobweb map.

Steady-states can be found in a cobweb map as the intersection of the line y = x with the curve y = f(x).

5.1.2 Definition (Periodic solution). A solution to a difference equation Nn+1 = f(Nn) is called a periodic solution with period p ∈ ℤ+ if

    Nn+p = Nn for all n

and

    Nn+q ≠ Nn for any n and any q < p.

5.1.3 Definition (Periodic point). A point N0 is called a periodic point with period p ∈ ℤ+ if

    N0 = f^p(N0).

5.1.4 Definition (Periodic orbit). The sequence of points

    O(N0) = {N0, f(N0), f²(N0), . . . , f^{p−1}(N0)},

where f^p(N0) = N0, is called the periodic orbit or p-cycle of the periodic point N0.

5.1.5 Example (Cobweb maps and steady states for the Malthusian equation in discrete time). Recall the Malthusian equation in discrete time (5.0.1),

    Nn+1 = f(Nn) = λNn,  λ ≥ 0.

In Figure 5.1 we observed that the population may grow, remain constant or decay from the initial value depending on the value of λ. We may use cobweb maps as an alternative means
Figure 5.3: Cobweb maps for the Malthusian equation for different values of the net growth ratio λ: explosion (λ > 1), steady state (λ = 1), extinction (0 < λ < 1).

to verify this fact. Figure 5.3 shows the cobweb maps corresponding to λ > 1, λ = 1 and 0 < λ < 1.
A steady state solution N* satisfies

    N* = f(N*),

hence, from (5.0.1), we have N* = λN*, and hence

    N*(1 − λ) = 0.

Thus if λ = 1 any N* ≥ 0 is a steady state, whilst if λ ≠ 1 the only steady state is N* = 0.
5.1.6 Example (Beverton-Holt Model). Consider the density dependent model

    Nn+1 = f(Nn) = λNn/(K + Nn).          (5.1.2)

The production function f in the model (5.1.2) is shown in Figure 5.4. For small N, f appears to grow linearly, and f saturates as N becomes large, asymptotically approaching the value λ.
The steady states are given by

    N* = f(N*) = λN*/(K + N*).

Thus they satisfy the quadratic

    N*² + (K − λ)N* = 0.

The steady states therefore correspond to N* = 0 and N* = λ − K. We only consider positive solutions, and hence if λ < K there is a single unique steady state N* = 0. However, when λ > K there exists a second (positive) steady state corresponding to N* = λ − K > 0. Figure 5.5 shows cobweb maps for the Beverton-Holt model (5.1.2) for λ < K and for λ > K; we notice the existence of two steady states for λ > K.
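A quick numerical check of this example (parameter values are illustrative): for λ > K the iterates approach the positive steady state N* = λ − K, while for λ < K they decay to 0.

```python
# Iterate the Beverton-Holt model N_{n+1} = lam*N/(K + N).

def beverton_holt(lam, K, n0, n_steps=200):
    n = n0
    for _ in range(n_steps):
        n = lam * n / (K + n)
    return n

print(round(beverton_holt(lam=5.0, K=2.0, n0=0.1), 6))  # near lam - K = 3
print(round(beverton_holt(lam=1.0, K=2.0, n0=0.1), 6))  # near 0
```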

Figure 5.4: Production function f for the Beverton-Holt model (5.1.2).

5.1.7 Example (Modified Beverton-Holt model). Consider the following density-dependent difference equation

    Nn+1 = f(Nn) = λNn/(1 + Nn²).          (5.1.3)

The steady states are given by

    N* = λN*/(1 + N*²).

Thus they satisfy the cubic equation

    N*³ + (1 − λ)N* = 0.

Hence there is a steady state corresponding to N* = 0 and, if λ > 1, another steady state corresponding to N* = √(λ − 1).
Let f(N) = λN/(1 + N²); then by the quotient rule,

    f′(N) = λ[(1 + N²) − N · 2N]/(1 + N²)² = λ(1 − N²)/(1 + N²)².

Thus for small N, f grows approximately linearly (f′ ≈ λ), whilst as N gets larger the growth rate of f falls until the critical point N = 1 (where the derivative of f changes sign), beyond which f decreases, with f → 0 as N → ∞.
Figure 5.6 shows cobweb maps for the model (5.1.3) for the cases λ < 1 and λ > 1, illustrating the existence of a single unique steady state in the former case and two steady states in the latter.

5.1.8 Note. Information on the slope of f is needed to determine the precise behaviour of solu-
tions close to a steady state value.

5.2 Steady states and local stability


5.2.1 Definition (Stability of steady states). A steady state N* is called

Figure 5.5: Cobweb maps for the Beverton-Holt model (5.1.2), for λ < K (top) and for λ > K with initial data N0 < λ − K and N0 > λ − K (bottom).

If λ < 1 there is a unique steady state N* = 0 and all initial states converge to 0. If λ > 1 there are two steady states, N* = 0 and N* = √(λ − 1).

Figure 5.6: Cobweb maps for the model (5.1.3) of Example 5.1.7.
• Stable if for all ε > 0 there exists a δ > 0 such that

    |N0 − N*| < δ implies |Nn − N*| < ε for all n.

• Asymptotically stable if it is stable and |Nn − N*| → 0 as n → ∞.

If the steady state N* is not stable it is called unstable.

In order to understand the stability of a steady state we will linearise the difference equation and investigate the stability of the steady state for the linearised equation. This technique is known as linear stability analysis. A consequence of this approach is that our results are informative only regarding the behaviour of solutions that are "close" to the steady state value.

Linearisation

Consider a steady state N* of the difference equation Nn+1 = f(Nn). We are interested in the behaviour of solutions near N*. To investigate this we introduce the difference between the solution and the steady state value, ηn, i.e.,

    ηn := Nn − N*  with  |ηn| ≪ 1.

It is easy to see η satisfies a first order difference equation, as

    ηn+1 = Nn+1 − N* = f(Nn) − N* = f(N* + ηn) − N*,

where we have used the fact that Nn+1 = f(Nn) for all n to obtain the last two equalities.
Assume the function f is differentiable in a neighbourhood of the point N*. Then,

    lim_{h→0} [f(N* + h) − f(N*)]/h = f′(N*)  and  f(N* + h) = f(N*) + h f′(N*) + O(h²).

Thus,

    ηn+1 = f(N* + ηn) − N*
         = f(N*) + ηn f′(N*) + O(ηn²) − f(N*)
         = ηn f′(N*) + O(ηn²)
         ≈ ηn f′(N*),

as |ηn| ≪ 1, i.e., we are interested in the behaviour of solutions close to the steady state value N*, and hence O(ηn²) is small.
Thus we obtain a linear approximation to the nonlinear difference equation for η,

    η̃n+1 = η̃n f′(N*).
The solution to this equation is given by

    η̃n = (f′(N*))ⁿ η̃0,

and hence η̃n → 0 (the difference to the steady state goes to zero) as n → ∞ if |f′(N*)| < 1. We therefore say a steady state N* of Nn+1 = f(Nn) is locally asymptotically stable if |f′(N*)| < 1 and is unstable if |f′(N*)| > 1.
Let λ = f′(N*). There are 5 distinct types of stability behaviour dependent on λ:

1. Oscillatory unstable: λ < −1

Figure 5.7: Cobweb maps and solution behaviour for the oscillatory unstable case of §5.2, corresponding to λ < −1.

2. Oscillatory asymptotically stable: −1 < λ < 0

Figure 5.8: Cobweb maps and solution behaviour for the oscillatory asymptotically stable case of §5.2, corresponding to −1 < λ < 0.

Figure 5.9: Cobweb maps and solution behaviour for the monotonically asymptotically stable case of §5.2, corresponding to 0 < λ < 1.

3. Monotonically asymptotically stable: 0 < λ < 1

4. Monotonically unstable: λ > 1

Figure 5.10: Cobweb maps and solution behaviour for the monotonically unstable case of §5.2, corresponding to λ > 1.

5. Stable but not asymptotically stable: |λ| = 1

Figures 5.7-5.11 show cobweb diagrams and solution behaviour versus time for the five different cases.

Figure 5.11: Cobweb maps and solution behaviour for the stable but not asymptotically stable case of §5.2, corresponding to |λ| = 1, with λ = 1 (top row) and λ = −1 (bottom row).

5.2.2 Corollary. A periodic point N0 of period p is stable if

    |f′(N0) f′(N1) · · · f′(Np−1)| < 1.

(Apply the stability condition |f′(N0)| < 1 to f^p(N0) and use the chain rule.)
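A concrete use of the corollary (illustrative, not from the notes): for the logistic map f(x) = µx(1 − x) with µ = 3.2 there is an attracting 2-cycle; it can be located by iteration, and the chain-rule product f′(N0)f′(N1) checked to lie strictly inside (−1, 1).

```python
# Locate the 2-cycle of the logistic map at mu = 3.2 and apply Corollary 5.2.2.

mu = 3.2
f = lambda x: mu * x * (1.0 - x)
fprime = lambda x: mu * (1.0 - 2.0 * x)

x = 0.3
for _ in range(2000):          # burn in towards the attractor
    x = f(x)
n0, n1 = x, f(x)               # the two points of the 2-cycle

print(round(abs(n0 - f(f(n0))), 10))        # ~0: n0 is (numerically) 2-periodic
print(round(fprime(n0) * fprime(n1), 4))    # magnitude < 1 => stable 2-cycle
```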

5.3 Bifurcations

We now investigate how changes in parameters influence the asymptotic behaviour of solutions. In particular we will be interested in identifying bifurcations, i.e., points where small changes in parameters lead to qualitative changes in the asymptotic behaviour of solutions.

5.3.1 Example (Beverton-Holt model revisited). Consider the Beverton-Holt model (5.1.2),

    Nn+1 = f(Nn) = λNn/(K + Nn),  λ, K > 0.          (5.3.1)

Clearly the net production rate F(Nn) = λ/(K + Nn) tends to zero as Nn becomes large.
As in Example 5.1.6 the model has a steady state corresponding to N* = 0 and, if λ > K, another steady state corresponding to N* = λ − K. In order to investigate the stability of the steady states we compute the derivative of the production function f, which yields

    f′(N) = λ[(K + N) − N]/(K + N)² = λK/(K + N)².
We split the stability analysis into two cases.

Case 1: 0 < λ ≤ K.
Stability of the steady state N* = 0:

    f′(0) = λ/K,

so the steady state is monotonically asymptotically stable for 0 < λ < K and stable but not asymptotically stable for λ = K.

Case 2: λ > K.
Stability of the steady state N* = 0:

    f′(0) = λ/K > 1,

so the steady state N* = 0 is unstable.
Stability of the steady state N* = λ − K:

    f′(λ − K) = λK/(K + λ − K)² = K/λ < 1,

so the steady state is monotonically asymptotically stable.

Figure 5.12: Bifurcation diagram for the model (5.3.1) with bifurcation parameter λ: N* = 0 is stable for λ < K and unstable for λ > K, where the stable branch N* = λ − K emerges.

We may represent the existence and stability of steady states for (5.3.1) as a function of λ in the form of a bifurcation diagram, as shown in Figure 5.12; here the parameter λ is called the bifurcation parameter.

5.3.2 Example (Harvesting). Consider the difference equation

    Nn+1 = f(Nn) = λNn/(1 + Nn²) − HNn,  λ > H > 0.          (5.3.2)

The model (5.3.2) describes growth of a population with a harvesting term −HNn; the parameter H describes the effort put into harvesting. We start by determining the steady states of the model. Recall that a steady state satisfies

    N* = f(N*) = λN*/(1 + N*²) − HN*.

Therefore,

    N* ( λ/(1 + N*²) − 1 − H ) = 0,

and we conclude that the steady states correspond to either N* = 0 or

    λ/(1 + H) = 1 + N*²,

which yields

    N* = √( λ/(1 + H) − 1 ),  if λ > 1 + H,

i.e., if λ > 1 + H we have a second steady state N* = √(λ/(1 + H) − 1).
As before, in order to investigate the stability of the steady states we compute the derivative of the production function f, which is given by

    f′(N) = λ[(1 + N²) − 2N²]/(1 + N²)² − H = λ(1 − N²)/(1 + N²)² − H.

We once again split the stability analysis into two cases.

Case 1: H < λ ≤ H + 1.
Stability of the steady state N* = 0:

    f′(0) = λ − H.

Thus the steady state is monotonically asymptotically stable for H < λ < H + 1 and stable but not asymptotically stable for λ = H + 1.

Case 2: λ > H + 1.
Stability of the steady state N* = 0:

    f′(0) = λ − H > 1,

so the steady state N* = 0 is unstable.
Stability of the steady state N* = √(λ/(1 + H) − 1): using 1 + N*² = λ/(1 + H) and 1 − N*² = (2(1 + H) − λ)/(1 + H),

    f′(N*) = λ (1 − N*²)/(1 + N*²)² − H          (5.3.3)
           = λ [(2(1 + H) − λ)/(1 + H)] / [λ/(1 + H)]² − H          (5.3.4)
           = (1 + H)(2(1 + H) − λ)/λ − H          (5.3.5)
           = [2(1 + H)² − λ(1 + H)]/λ − H          (5.3.6)
           = [2(1 + H)² − λ(1 + 2H)]/λ.          (5.3.7)

Note f′(√(λ/(1 + H) − 1)) is a decreasing function of λ for λ > 0 (checking this fact is left as an exercise).

In order to ascertain the stability of this steady state as a function of λ, we proceed with the following calculations:

    f′(N*) = 1  ⟺  [2(1 + H)² − λ(1 + 2H)]/λ = 1
            ⟺  2(1 + H)² = λ(2 + 2H)
            ⟺  λ = 1 + H,

and

    f′(N*) = −1  ⟺  [2(1 + H)² − λ(1 + 2H)]/λ = −1
             ⟺  2(1 + H)² = 2λH
             ⟺  λ = (1 + H)²/H.

Thus, as f′(N*) is a decreasing function of λ, with N* = √(λ/(1 + H) − 1) we identify the following three cases.

1. If 1 + H < λ < (1 + H)²/H then −1 < f′(N*) < 1, so the steady state is asymptotically stable. (It is left as an exercise to check the steady state is monotonically asymptotically stable if 1 + H < λ < 2(1 + H)²/(1 + 2H) and oscillatory asymptotically stable if 2(1 + H)²/(1 + 2H) < λ < (1 + H)²/H.)

2. If λ = (1 + H)²/H then f′(N*) = −1 and N* is stable but not asymptotically stable.

3. If λ > (1 + H)²/H then f′(N*) < −1 and N* is unstable.

Figure 5.13 shows a sketch of the function f′(√(λ/(1 + H) − 1)) for fixed H, varying λ.
As in Example 5.3.1, in Figure 5.14 we plot a bifurcation diagram for the model (5.3.2) that depicts the stability of steady states as the bifurcation parameter λ varies.

5.3.3 Example (Harvesting model: periodic steady states). We recall the harvesting model (5.3.2), given by

    Nn+1 = f(Nn) = λNn/(1 + Nn²) − HNn,  λ > H > 0.

We now wish to understand whether there exist periodic solutions when λ > (1 + H)²/H (recall that from the analysis of Example 5.3.2 this implies the model has no stable steady states).
We focus on periodic solutions of period 2 (c.f. Definition 5.1.2). Such solutions satisfy

    N1 = f(N2)  and  N2 = f(N1).
Figure 5.13: Sketch of f′(√(λ/(1 + H) − 1)) versus λ: the derivative decreases through 1 at λ = 1 + H, through 0 at λ = 2(1 + H)²/(1 + 2H), and through −1 at λ = (1 + H)²/H.

Figure 5.14: Bifurcation diagram for the model (5.3.2) with bifurcation parameter λ: N* = 0 is stable for H < λ < H + 1 and unstable beyond; the branch N* = √(λ/(1 + H) − 1) is stable for 1 + H < λ < (1 + H)²/H and unstable thereafter.

Therefore, N1 = f(f(N1)) = f²(N1) and N2 = f(f(N2)), where we have introduced the notation f²(N) = f(f(N)). Hence N1 and N2 are steady states of the map f²; recalling that M* is a steady state of a map g if M* = g(M*), we see that N1 and N2 satisfy

    Ni = f²(Ni)          (5.3.8)
       = f( λNi/(1 + Ni²) − HNi )          (5.3.9)
       = λ( λNi/(1 + Ni²) − HNi ) / ( 1 + (λNi/(1 + Ni²) − HNi)² ) − H( λNi/(1 + Ni²) − HNi ),          (5.3.10)

for i = 1, 2. Note that if N* is a steady state of f then it is also a steady state of f², as f²(N*) = f(f(N*)) = f(N*) = N*. Thus the two steady states solve (5.3.8); any other solutions are periodic steady states of (5.3.2) with period 2.

Figure 5.15: Asymptotic solution behaviour for the model (5.3.2) with control parameter λ.

Figure 5.15 shows the long time (large n) behaviour of solutions to the harvesting model (5.3.2) as λ is increased. At λ = 4.5 = (1 + H)²/H, the non-zero steady state loses stability and simultaneously a period-2 solution emerges; as λ increases further this solution becomes unstable and progressively larger period (2^k) solutions emerge. This is referred to as a period-doubling or flip bifurcation. For even larger values of λ the solutions appear to exhibit chaotic behaviour, with the appearance of a period-3 solution crucial (the analysis of chaotic behaviour is beyond the scope of the current course).
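The emergence of the period-2 solution can be reproduced numerically. The sketch below (not from the notes; H = 0.5 and λ = 4.6 are illustrative, chosen just beyond the flip point (1 + H)²/H = 4.5) iterates the harvesting map, discards the transient, and checks that the attractor is a genuine 2-cycle distinct from the now-unstable fixed point.

```python
# Flip bifurcation in the harvesting model N_{n+1} = lam*N/(1 + N^2) - H*N.

import math

H, lam = 0.5, 4.6
f = lambda n: lam * n / (1.0 + n * n) - H * n

n_star = math.sqrt(lam / (1.0 + H) - 1.0)   # the (now unstable) fixed point

x = 1.0
for _ in range(5000):                        # let transients die out
    x = f(x)
p, q = x, f(x)                               # the two points of the cycle

print(round(abs(p - f(f(p))), 10))   # ~0: p is 2-periodic
print(round(abs(p - q), 4))          # the two cycle points are distinct
print(round(abs(p - n_star), 4))     # and neither equals the fixed point
```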

5.3.4 Problem. For the discrete logistic equation

    xn+1 = µxn(1 − xn),  x0 ∈ [0, 1],  µ > 0,

(i) Show that the trivial steady state is asymptotically stable for 0 < µ < 1.

(ii) Show that the non-trivial steady state is asymptotically stable for 1 < µ < 3.

(iii) Using the above information and Corollary 5.2.2, show that there is a 2-cycle which is asymptotically stable for 3 < µ < 1 + √6.

Chapter 6

Interacting species, discrete time


models

We now consider how interactions between small numbers of species affect the population dynamics of those species in discrete time models.

6.1 Systems of linear difference equations


Consider the system of m linear difference equations

    z_{n+1} = M z_n          (6.1.1)

where z ∈ ℝ^m, M ∈ ℝ^{m×m} is an m × m matrix with m ∈ ℕ, and the initial data z_0 is given. (If m = 1 we are in the case of a scalar linear difference equation and z_n = λⁿ z_0.)
We seek a solution to (6.1.1) of the form z_n = λⁿ c, where λ ∈ ℝ and c ∈ ℝ^m are to be found. Substituting our ansatz into (6.1.1) we obtain

    λ^{n+1} c = λⁿ M c
    ⟹  λⁿ (λc − M c) = 0.

Thus, λ = 0 or M c = λc. If λ = 0 this gives the trivial solution z_n = 0.
Consider the case M c = λc, i.e.,

    (M − λI) c = 0,          (6.1.2)

where I is the m × m identity matrix.
For any λ, (6.1.2) has a solution given by c = 0 (⟹ z_n = 0). However, if λ is an eigenvalue of M then we have a non-trivial solution of (6.1.2). For this to happen the matrix M − λI must be singular, i.e.,

    det(M − λI) = 0.          (6.1.3)

The equation (6.1.3) is a polynomial of degree m in λ that has m (possibly complex) roots. Assume that all m roots are real and distinct. The general solution of (6.1.1) is

    z_n = Σ_{i=1}^{m} Ai λiⁿ ci,

where λi is the eigenvalue corresponding to the eigenvector ci. The Ai are arbitrary constants that are determined by the initial data z_0.
Note that

1. If |λi| < 1 for all i = 1, 2, . . . , m then |z_n| → 0 as n → ∞.

2. If |λi| ≤ 1 for all i = 1, 2, . . . , m, with |λi| = 1 for some i, then z_n tends to a constant (or oscillatory) vector as n → ∞.

3. If |λi| > 1 for any i ∈ {1, 2, . . . , m} then |z_n| → ∞ as n → ∞.
6.1.1 Example (Circulatory system). In the circulatory system, red blood cells (RBCs) are continuously being destroyed and replaced. Assume that the spleen filters out and destroys a certain fraction of the cells every day and that the bone marrow produces a number proportional to the number lost on the previous day. What is the RBC count on the n'th day?
Let

    Rn : number of RBCs in circulation on day n,
    Mn : number of RBCs produced by the marrow on day n,
    ρ : fraction of RBCs removed by the spleen.

It follows that the equations for Rn and Mn are

    Rn+1 = Rn − ρRn + Mn = (1 − ρ)Rn + Mn
    Mn+1 = ρRn.          (6.1.4)
Let

    A = ( 1 − ρ   1
           ρ      0 ),

then

    det(A − λI) = det( 1 − ρ − λ   1
                        ρ         −λ )
                = (1 − ρ − λ)(−λ) − ρ
                = λ² + (ρ − 1)λ − ρ.

For nontrivial solutions we require

    λ² + (ρ − 1)λ − ρ = 0
    ⟺  (λ + ρ)(λ − 1) = 0,
thus λ1 = −ρ and λ2 = 1.
For the eigenvalue λ1 = −ρ, the corresponding eigenvector c1 is given by

    (A + ρI) c1 = 0  ⟹  ( 1  1 ; ρ  ρ ) c1 = 0  ⟹  c1 = (1, −1)ᵀ.

For the eigenvalue λ2 = 1, the corresponding eigenvector c2 is given by

    (A − I) c2 = 0  ⟹  ( −ρ  1 ; ρ  −1 ) c2 = 0  ⟹  c2 = (1, ρ)ᵀ.

The general solution to (6.1.4) is given by

    (Rn, Mn)ᵀ = A1 λ1ⁿ c1 + A2 λ2ⁿ c2 = (−ρ)ⁿ A1 (1, −1)ᵀ + A2 (1, ρ)ᵀ.

Thus,

    Rn = (−ρ)ⁿ A1 + A2
    Mn = −A1 (−ρ)ⁿ + A2 ρ.          (6.1.5)

As |ρ| < 1, (Rn, Mn)ᵀ → (A2, A2 ρ)ᵀ as n → ∞. Hence the RBC levels are maintained. (Note we had |λ1| < 1 and λ2 = 1, implying that the solution approaches a constant asymptotically.)
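Direct iteration confirms this asymptotic behaviour. A minimal sketch (ρ = 0.2 and the initial counts are illustrative values, not from the notes):

```python
# Iterate the red blood cell system (6.1.4):
#   R_{n+1} = (1 - rho) R_n + M_n,   M_{n+1} = rho R_n.
# The eigenvalues are -rho and 1, so (R_n, M_n) approaches a constant
# vector of the form (A2, A2*rho): the RBC count is maintained.

rho = 0.2
R, M = 1000.0, 0.0
for _ in range(200):
    R, M = (1.0 - rho) * R + M, rho * R   # simultaneous update with old R

print(round(R, 6), round(M, 6))
print(round(M / R, 6))   # the limit satisfies M = rho * R
```

With M0 = 0 the constants work out to A2 = R0/(1 + ρ), so the iterates settle at R = R0/(1 + ρ) and M = ρR.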

Note: By considering the number of RBCs in circulation on day n + 2, we are able to combine the system of two difference equations into one second order difference equation. From the first equation of (6.1.4) we have

    Rn+2 = (1 − ρ)Rn+1 + Mn+1,          (6.1.6)

from the second equation of (6.1.4) we have

    Mn+1 = ρRn,

and substituting this into (6.1.6) gives

    Rn+2 = (1 − ρ)Rn+1 + ρRn.          (6.1.7)

The equation (6.1.7) is a linear second order difference equation. To solve (6.1.7) we assume the solution is of the form

    Rn = λⁿ A,

where λ is to be found and A is given by the problem data. Applying our ansatz in (6.1.7) we have

    λ^{n+2} A = (1 − ρ) λ^{n+1} A + ρ λⁿ A.

This implies

    λⁿ A ( λ² + (ρ − 1)λ − ρ ) = 0.

We are interested in nontrivial solutions A ≠ 0, hence we look for roots of the polynomial

    λ² + (ρ − 1)λ − ρ = 0.          (6.1.8)

The polynomial (6.1.8) is sometimes called the characteristic equation of (6.1.7). It may be factorised to give

    (λ + ρ)(λ − 1) = 0  ⟹  λ1 = −ρ and λ2 = 1.

Note the roots are the eigenvalues of the matrix A of the system of equations (6.1.4). In the case that we have distinct real roots, as is the case here, the general solution is given by

    Rn = A1 λ1ⁿ + A2 λ2ⁿ  ⟹  Rn = A1 (−ρ)ⁿ + A2,

where the constants A1 and A2 are determined by the initial data.


Consider a system of two linear difference equations of the form

    z_{n+1} = M z_n,

where z_n ∈ ℝ² and M = ( α  β ; γ  δ ) is a 2 × 2 matrix with α, β, γ, δ ∈ ℝ. The eigenvalues of M are found by solving the equation

    det(M − λI) = det( α − λ   β ; γ   δ − λ ) = 0.

Thus,

    λ² − (α + δ)λ + (αδ − βγ) = 0
    ⟹  λ² + aλ + b = 0,

with a = −(α + δ) = −Tr(M) ∈ ℝ and b = αδ − βγ = det(M) ∈ ℝ.

The necessary and sufficient conditions for |λi| < 1 for i = 1, 2 are given by the so-called Jury conditions.

6.1.2 Lemma (Jury conditions). The roots of the degree-two polynomial p(λ) = λ² + aλ + b, where a, b ∈ ℝ, satisfy |λi| < 1 for i = 1, 2 if and only if

1. b < 1,

2. 1 + a + b > 0,

3. 1 − a + b > 0.
Proof. The roots of p(λ) are given by

    λ_{1,2} = ( −a ± √(a² − 4b) ) / 2.

We analyse the cases of negative and nonnegative discriminant separately.

1. a² − 4b < 0: in this case the roots are a complex conjugate pair, and since b is equal to the product of the roots we find that

    b = λ1 λ2 = |λ1|² = |λ2|²,

as the roots are complex conjugates. Hence |λi| < 1 for i = 1, 2 if and only if b < 1. We now show that the last two inequalities of Lemma 6.1.2 are satisfied. We have that

    a² − 4b = (|a| − 2)² − 4(1 + b − |a|),

therefore,

    a² − 4b < 0  ⟹  (|a| − 2)² − 4(1 + b − |a|) < 0  ⟹  1 + b − |a| > 0
    ⟹  1 + b − a > 0 and 1 + b + a > 0.

2. When a² − 4b ≥ 0 (real roots), the largest magnitude R of the roots is given by

    R = max{|λ1|, |λ2|} = ( |a| + √(a² − 4b) ) / 2.

This is an increasing function of |a|, and R = 1 implies

    ( |a| + √(a² − 4b) ) / 2 = 1
    ⟹  2 − |a| = √(a² − 4b)
    ⟹  a² − 4|a| + 4 = a² − 4b
    ⟹  |a| = b + 1.

Hence 0 ≤ R < 1 if and only if |a| < 1 + b. This implies the latter two inequalities of Lemma 6.1.2. To deduce the remaining inequality, i.e., b < 1, note that R < 1 implies |λi| < 1, i = 1, 2, which implies |λ1 λ2| < 1; thus

    |λ1 λ2| = |(−a + √(a² − 4b))(−a − √(a² − 4b))| / 4 = |a² − (a² − 4b)| / 4 = |b| < 1.

Thus the first inequality of Lemma 6.1.2 must hold.
The inequalities in the Jury conditions are strict; in the case of equality we have the following results:

1. If 1 + a + b = 0 (Tr(M) = 1 + det(M)) there is an eigenvalue λ = 1.

2. If 1 − a + b = 0 (−Tr(M) = 1 + det(M)) there is an eigenvalue λ = −1.

3. If 1 + a + b > 0 and 1 − a + b > 0 and b = 1 (1 + det(M) > |Tr(M)| and det(M) = 1) there is a pair of complex conjugate eigenvalues on the unit circle.

6.1.3 Example (Circulation revisited). Recall that for the circulation example, the characteristic polynomial used to determine the eigenvalues was

    λ² + (ρ − 1)λ − ρ = 0.

Hence a = ρ − 1 and b = −ρ. Thus 1 + a + b = 1 + ρ − 1 − ρ = 0, and therefore one of the eigenvalues is λ = 1.

6.2 Systems of nonlinear difference equations

We consider systems of two nonlinear difference equations of the form

    Nn+1 = f(Nn, Pn)
    Pn+1 = g(Nn, Pn),          (6.2.1)

although the results may be extended to systems of m nonlinear difference equations with m ∈ ℕ.

6.2.1 Definition (Steady states). Steady states of (6.2.1) are constant vectors (N*, P*)ᵀ such that

    N* = f(N*, P*)
    P* = g(N*, P*).          (6.2.2)

6.2.2 Definition (Local and asymptotic stability). A steady state (N*, P*)ᵀ of (6.2.1) is locally stable if given any ε > 0 there exists a δ > 0 such that

    |(N0, P0)ᵀ − (N*, P*)ᵀ| < δ  ⟹  |(Nn, Pn)ᵀ − (N*, P*)ᵀ| < ε for all n = 1, 2, . . .

If a steady state (N*, P*)ᵀ is stable and |(Nn, Pn)ᵀ − (N*, P*)ᵀ| → 0 as n → ∞, then (N*, P*)ᵀ is called locally asymptotically stable.
If a steady state (N*, P*)ᵀ is not stable it is called unstable.

Linearisation of systems of difference equations

Let (N∗, P∗)ᵀ be a steady state of (6.2.1). We wish to investigate the behaviour of solutions close to the steady state value. Let Nₙ = N∗ + ηₙ and Pₙ = P∗ + ξₙ where |ηₙ|, |ξₙ| ≪ 1. Substituting into (6.2.1) gives

    Nₙ₊₁ = N∗ + ηₙ₊₁ = f(Nₙ, Pₙ) = f(N∗ + ηₙ, P∗ + ξₙ)
    Pₙ₊₁ = P∗ + ξₙ₊₁ = g(Nₙ, Pₙ) = g(N∗ + ηₙ, P∗ + ξₙ).    (6.2.3)

We assume that the functions f and g are smooth. By Taylor expansion we have

    f(N∗ + ηₙ, P∗ + ξₙ) = f(N∗, P∗) + ∂f/∂N(N∗, P∗) ηₙ + ∂f/∂P(N∗, P∗) ξₙ + O(ηₙ², ξₙ²)
    g(N∗ + ηₙ, P∗ + ξₙ) = g(N∗, P∗) + ∂g/∂N(N∗, P∗) ηₙ + ∂g/∂P(N∗, P∗) ξₙ + O(ηₙ², ξₙ²).

Hence, neglecting the higher order terms and using the fact that N∗ = f(N∗, P∗) and P∗ = g(N∗, P∗) as (N∗, P∗)ᵀ is a steady state, from (6.2.3) we have that (η̃ₙ₊₁, ξ̃ₙ₊₁)ᵀ ≈ (ηₙ₊₁, ξₙ₊₁)ᵀ satisfy

    η̃ₙ₊₁ = ∂f/∂N(N∗, P∗) η̃ₙ + ∂f/∂P(N∗, P∗) ξ̃ₙ
    ξ̃ₙ₊₁ = ∂g/∂N(N∗, P∗) η̃ₙ + ∂g/∂P(N∗, P∗) ξ̃ₙ.          (6.2.4)

Let zₙ = (η̃ₙ, ξ̃ₙ)ᵀ; then we can write the system (6.2.4) as

    zₙ₊₁ = J zₙ                                            (6.2.5)

where the matrix

    J = ( ∂f/∂N(N∗, P∗)   ∂f/∂P(N∗, P∗) )
        ( ∂g/∂N(N∗, P∗)   ∂g/∂P(N∗, P∗) )

is called the Jacobian and is evaluated at the steady state (N∗, P∗)ᵀ.
Recall that the general solution of (6.2.5) is of the form

    zₙ = Σᵢ₌₁² Aᵢ λᵢⁿ cᵢ,

where λ₁,₂ are the eigenvalues of the matrix J with corresponding eigenvectors c₁,₂.
If |λᵢ| < 1 for i = 1, 2, then |zₙ| → 0 as n → ∞. So the perturbations from the steady state value decay asymptotically and the solution tends to the steady state (N∗, P∗)ᵀ as n → ∞.
If |λᵢ| > 1 for i = 1 or i = 2, then |zₙ| → ∞ as n → ∞. So the perturbations from the steady state value grow and the solution moves away from the steady state (N∗, P∗)ᵀ.

6.2.3 Definition (Local stability). A steady state (N∗, P∗)ᵀ of (6.2.1) is said to be

1. locally asymptotically stable if |λᵢ| < 1 for i = 1, 2.

2. unstable if |λᵢ| > 1 for i = 1 or i = 2.

3. stable if |λᵢ| ≤ 1 for i = 1, 2.
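This definition translates directly into a small numerical test: compute the eigenvalues of the Jacobian at the steady state and compare their moduli with 1. A sketch (the matrices below are made-up examples, not taken from the notes):

```python
import numpy as np

def classify(J, tol=1e-12):
    """Classify a steady state of a 2D map from its Jacobian's eigenvalue moduli."""
    moduli = np.abs(np.linalg.eigvals(J))
    if np.all(moduli < 1.0 - tol):
        return "locally asymptotically stable"
    if np.any(moduli > 1.0 + tol):
        return "unstable"
    return "stable"  # all moduli <= 1, with at least one on the unit circle

# Triangular examples whose eigenvalues can be read off the diagonal.
assert classify(np.array([[0.5, 0.2], [0.0, -0.3]])) == "locally asymptotically stable"
assert classify(np.array([[1.5, 0.0], [0.7, 0.2]])) == "unstable"
```

The tolerance guards against rounding when an eigenvalue lies numerically close to the unit circle; on the circle itself linearisation is inconclusive.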

6.3 Models for interacting species
Interactions among species can be classified into three main types:

1. Competition: Each species has an inhibitory effect on the other.

2. Mutualism: Each species has a beneficial effect on the other.

3. Predation or parasitism: One species, the predator or parasite, has an inhibitory effect on the other, while the other species, the prey or host, has a beneficial effect on the predator or parasite.

Host-Parasitoid interactions

Parasitoids are creatures that have a free-living and a parasitic life stage, e.g., some wasps and flies. The free-living adults lay eggs in a host (e.g., a caterpillar); the eggs hatch and the young develop by eating the host, killing it. We shall derive a general model for such interactions.
Let Hₙ be the number of hosts at time n, Pₙ the number of parasitoids at time n, R₀ ≥ 0 the per capita reproductive ratio of the host in the absence of parasitism, c ≥ 0 the average number of eggs laid by each parasitoid, and f(H, P) ∈ [0, 1] the fraction of hosts not parasitised. The number of hosts not parasitised at time n is therefore given by

    Hₙ f(Hₙ, Pₙ),

and the number of parasitised hosts at time n is given by

    Hₙ (1 − f(Hₙ, Pₙ)).

Thus the model for the dynamics is given by

    Hₙ₊₁ = R₀ Hₙ f(Hₙ, Pₙ)
    Pₙ₊₁ = c Hₙ (1 − f(Hₙ, Pₙ)).                           (6.3.1)

We now consider a specific example of a model of the form (6.3.1).

6.3.1 Example (Nicholson-Bailey model). In this model the proportion of hosts that are not parasitised is assumed to decay exponentially with the number of parasitoids, i.e., f(H, P) = f(P) = e^(−aP) with a > 0. The resulting model is therefore

    Hₙ₊₁ = R₀ Hₙ e^(−aPₙ)
    Pₙ₊₁ = c Hₙ (1 − e^(−aPₙ)),                            (6.3.2)

where R₀, a, c > 0. The steady states are found by solving

    H∗ = R₀ H∗ e^(−aP∗)
    P∗ = c H∗ (1 − e^(−aP∗)),                              (6.3.3)

which implies

    H∗ (1 − R₀ e^(−aP∗)) = 0
    P∗ − c H∗ (1 − e^(−aP∗)) = 0.                          (6.3.4)

Thus we have a trivial steady state corresponding to (H∗, P∗)ᵀ = 0. If there exists a solution to

    1 − R₀ e^(−aP∗) = 0
    P∗ − c H∗ (1 − e^(−aP∗)) = 0,                          (6.3.5)

then we may have a nontrivial steady state. From the first line of (6.3.5) we have

    e^(−aP∗) = 1/R₀
    ⟹ −aP∗ = ln(1/R₀)
    ⟹ P∗ = −(1/a) ln(1/R₀) = (1/a) ln(R₀).

Substituting this into the second line of (6.3.5) gives

    (1/a) ln(R₀) = c H∗ (1 − e^(−ln(R₀)))
                 = c H∗ (1 − e^(ln(1/R₀)))
                 = c H∗ (1 − 1/R₀).

Thus,

    H∗ = (1/a) ln(R₀) / (c (1 − 1/R₀))
       = R₀ ln(R₀) / (a c (R₀ − 1)).

For biological relevance we require H∗, P∗ ≥ 0, thus we require R₀ > 1, which gives H∗ > 0 and P∗ > 0.
In summary, the steady states of (6.3.2) correspond to (H∗, P∗)ᵀ = 0 and, if R₀ > 1, a second biologically meaningful steady state given by

    (H∗, P∗)ᵀ = ( R₀ ln(R₀) / (a c (R₀ − 1)), (1/a) ln(R₀) )ᵀ.
In order to assess the stability of the steady states we compute the Jacobian

    J(H, P) = ( ∂/∂H (R₀ H e^(−aP))         ∂/∂P (R₀ H e^(−aP))       )
              ( ∂/∂H (c H (1 − e^(−aP)))    ∂/∂P (c H (1 − e^(−aP)))  )

            = ( R₀ e^(−aP)        −a R₀ H e^(−aP) )
              ( c (1 − e^(−aP))    a c H e^(−aP)  ).

Hence

    J(0, 0) = ( R₀  0 )
              ( 0   0 ).

Thus λ₁ = R₀ and λ₂ = 0. Thus the trivial steady state (H∗, P∗)ᵀ = 0 is asymptotically stable if R₀ < 1, stable but not asymptotically stable if R₀ = 1, and unstable if R₀ > 1.
At the nontrivial steady state (H∗, P∗)ᵀ = (R₀ ln(R₀) / (a c (R₀ − 1)), (1/a) ln(R₀))ᵀ we have

    J(H∗, P∗)₁,₁ = R₀ e^(−aP∗) = R₀ e^(−a (1/a) ln(R₀)) = R₀ e^(−ln(R₀)) = 1
    J(H∗, P∗)₁,₂ = −a R₀ H∗ e^(−aP∗) = −a R₀ H∗ (1/R₀) = −a H∗
    J(H∗, P∗)₂,₁ = c (1 − e^(−aP∗)) = c (1 − 1/R₀)
    J(H∗, P∗)₂,₂ = a c H∗ e^(−aP∗) = a c H∗ / R₀.
We will use the Jury conditions to evaluate the stability of the nontrivial steady state. Let J∗ = J(H∗, P∗); then

    Tr(J∗) = 1 + a c H∗ / R₀
    det(J∗) = a c H∗ / R₀ + a c H∗ (1 − 1/R₀) = a c H∗.

Recall H∗ = R₀ ln(R₀) / (a c (R₀ − 1)), hence

    det(J∗) = a c H∗ = R₀ ln(R₀) / (R₀ − 1) > 1,

since for R₀ > 1,

    ln(R₀) > 1 − 1/R₀  ⟹  R₀ ln(R₀) > R₀ − 1  ⟹  R₀ ln(R₀) / (R₀ − 1) > 1.

Hence the first of the Jury conditions (cf. Lemma 6.1.2) is not satisfied. The remaining two conditions are however satisfied, as

    |Tr(J∗)| = 1 + a c H∗ / R₀ < 1 + det(J∗) = 1 + a c H∗

if R₀ > 1. Thus the nontrivial steady state (H∗, P∗)ᵀ = (R₀ ln(R₀) / (a c (R₀ − 1)), (1/a) ln(R₀))ᵀ is unstable.
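The instability can also be seen by iterating (6.3.2) from a small perturbation of the nontrivial steady state: the distance from the steady state is amplified, producing the diverging oscillations for which the Nicholson-Bailey model is known. A sketch with illustrative parameter values:

```python
import numpy as np

R0, a, c = 2.0, 0.1, 1.0                 # illustrative values, R0 > 1
H_star = R0 * np.log(R0) / (a * c * (R0 - 1.0))
P_star = np.log(R0) / a

H, P = 1.01 * H_star, P_star             # 1% perturbation of the host number
dist0 = 0.01 * H_star
dists = []
for _ in range(100):
    H, P = R0 * H * np.exp(-a * P), c * H * (1.0 - np.exp(-a * P))
    dists.append(np.hypot(H - H_star, P - P_star))

# The perturbation is amplified: the nontrivial steady state is unstable.
assert max(dists) > 10.0 * dist0
```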

Chapter 7

Delay difference equations

We consider here delay difference equations of the form

    Nₙ₊₁ = f(Nₙ, Nₙ₋₁).                                    (7.0.1)

A solution N∗ to a delay difference equation of the form (7.0.1) is called a steady state solution if the solution to (7.0.1) with initial data N∗ satisfies

    Nₙ = N∗ for all n,

that is, if

    N∗ = f(N∗, N∗).

7.0.1 Example (Delay difference logistic equation). The delay difference logistic equation reads as

    Nₙ₊₁ = f(Nₙ, Nₙ₋₁) = r Nₙ (1 − Nₙ₋₁)                   (7.0.2)

and admits two steady state solutions, i.e.,

    N∗ = 0 and N∗ = 1 − 1/r,

which can be obtained by solving the algebraic equation

    N∗ = r N∗ (1 − N∗).

We will now show via linear stability analysis that the solutions of delay difference equations
can exhibit periodic oscillations.

7.1 Linear stability analysis


To study the linear stability of a steady state solution N∗ of (7.0.1) we make the ansatz

    Nₙ = N∗ + ηₙ

where ηₙ is a small perturbation, i.e., |ηₙ| ≪ 1. Substituting this ansatz into (7.0.1) yields

    N∗ + ηₙ₊₁ = f(N∗ + ηₙ, N∗ + ηₙ₋₁).

Using Taylor expansion we find

    ηₙ₊₁ = f(N∗, N∗) − N∗ + ηₙ ∂₁f(N∗, N∗) + ηₙ₋₁ ∂₂f(N∗, N∗) + O(ηₙ²),

where f(N∗, N∗) − N∗ = 0 and ∂ᵢ denotes the partial derivative with respect to the i-th argument. Therefore, neglecting the (small) higher order terms, we obtain the following linearised delay difference equation governing the evolution of ηₙ:

    ηₙ₊₁ = ηₙ ∂₁f(N∗, N∗) + ηₙ₋₁ ∂₂f(N∗, N∗).              (7.1.1)

7.1.1 Example (Linear stability analysis of the delay difference logistic equation). For the delay difference logistic equation (7.0.2) we have

    ∂₁f(N∗, N∗) = r(1 − N∗) and ∂₂f(N∗, N∗) = −r N∗.

Thus equation (7.1.1) reads as

    ηₙ₊₁ = r(1 − N∗) ηₙ − r N∗ ηₙ₋₁.

In the case where N∗ = 1 − 1/r the above difference equation for ηₙ reads as

    ηₙ₊₁ = ηₙ + (1 − r) ηₙ₋₁.

Let

    φₙ = ηₙ₊₁,

then

    φₙ₊₁ = φₙ + (1 − r) ηₙ.

The system of difference equations

    ηₙ₊₁ = φₙ
    φₙ₊₁ = φₙ + (1 − r) ηₙ

can be written in matrix form as

    ( ηₙ₊₁ )   ( 0      1 ) ( ηₙ )
    ( φₙ₊₁ ) = ( 1 − r  1 ) ( φₙ ),

where we denote the above matrix by J∗.

We will now use the Jury conditions to evaluate the stability of the steady state N∗ = 1 − 1/r. Since

    Tr(J∗) = 1 and det(J∗) = r − 1,

the eigenvalues of J∗ are the roots of the polynomial

    λ² − λ + (r − 1) = 0.

Hence for the condition |λᵢ| < 1 with i = 1, 2 to be satisfied we need

    r − 1 < 1 and 1 < (r − 1) + 1,

that is,

    1 < r < 2.

Since

    λ₁,₂ = (1 ± √(5 − 4r)) / 2,
when 1 < r < 5/4 the eigenvalues λ₁ and λ₂ will be real and positive. Using the fact that

    ηₙ = A λ₁ⁿ + B λ₂ⁿ

and λ₁ < 1 and λ₂ < 1, we conclude that

    ηₙ → 0 monotonically as n → ∞.

On the other hand, when 5/4 < r < 2 the eigenvalues λ₁ and λ₂ will be complex conjugate, that is,

    λ₁,₂ = (1 ± i √(4r − 5)) / 2.

Let λ₁ = ρ e^(iθ) and λ₂ = ρ e^(−iθ); then

    2 ρ e^(iθ) = 1 + i √(4r − 5).

Equating the magnitudes of 2 ρ e^(iθ) and 1 + i √(4r − 5) gives

    2ρ = √(1 + (4r − 5))  ⟹  ρ = √(r − 1).

Moreover, equating the real parts and the imaginary parts of 2 ρ e^(iθ) and 1 + i √(4r − 5) yields

    2 ρ cos(θ) = 1 and 2 ρ sin(θ) = √(4r − 5).

Hence

    tan(θ) = sin(θ)/cos(θ) = √(4r − 5)  ⟹  θ = arctan(√(4r − 5)).

Notice that ρ and θ are increasing functions of r. Since 5/4 < r < 2 we have

    1/2 < ρ < 1 and 0 < θ < π/3.
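The expressions for ρ and θ can be checked against a direct eigenvalue computation; r = 1.5 below is an arbitrary choice in (5/4, 2):

```python
import numpy as np

r = 1.5                                   # any value in (5/4, 2)
lam = np.roots([1.0, -1.0, r - 1.0])[0]   # a root of lambda^2 - lambda + (r - 1) = 0
rho, theta = abs(lam), abs(np.angle(lam))

assert abs(rho - np.sqrt(r - 1.0)) < 1e-12
assert abs(theta - np.arctan(np.sqrt(4.0 * r - 5.0))) < 1e-12
assert 0.5 < rho < 1.0 and 0.0 < theta < np.pi / 3.0
```

Taking the absolute value of the argument makes the check independent of which member of the conjugate pair `np.roots` returns first.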

78
Using the fact that
n n
⌘n = A 1 +B 2 = A⇢n e in✓
+ B⇢n ei n ✓
and letting
A=a ib and B = a + ib
we find
⌘n = 2 ⇢n a cos(n✓) + i b sin(n✓) .
Since 1/2 < ⇢ < 1 and 0 < ✓ < ⇡/3, if 5/4 < r < 2 then ⌘n will undergo damped oscillations
(i.e. the amplitude of the oscillations will tend to zero as n ! 1) of wavelength
✓ ◆
2⇡ 2⇡ 2⇡
2 , = (6, 1).
✓ ⇡/3 0

On the contrary, at the bifurcation to instability (i.e., when r = 2) periodic oscillations of period
6 will emerge.
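Simulating (7.0.2) directly illustrates both regimes; r = 1.7 and r = 2.1 below are illustrative choices on either side of the bifurcation at r = 2:

```python
import numpy as np

def delayed_logistic(r, n_steps=400, N0=0.5, N1=0.5):
    """Iterate N_{n+1} = r * N_n * (1 - N_{n-1})."""
    N = [N0, N1]
    for _ in range(n_steps):
        N.append(r * N[-1] * (1.0 - N[-2]))
    return np.array(N)

# 5/4 < r < 2: damped oscillations onto N* = 1 - 1/r.
N = delayed_logistic(1.7)
assert abs(N[-1] - (1.0 - 1.0 / 1.7)) < 1e-6

# r just above 2: the oscillations no longer die out.
N = delayed_logistic(2.1)
assert np.ptp(N[-60:]) > 0.02
```

The thresholds and the initial data are pragmatic choices for this sketch; near r = 2 the sustained oscillations have a period close to the predicted value of 6.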

Appendix A

Background theory

In this Chapter we state (without proof) some theoretical results that are of use.
Consider a steady state N∗ = (N∗, M∗)ᵀ of a two-component system

    dN/dt = f(N, M)
    dM/dt = g(N, M).
The fact that linear stability analysis tells us something relevant about the stability of the steady
state is ensured by the following theorem.
A.1 Theorem (Hartman-Grobman theorem). Assume the steady state N∗ is such that the Jacobian matrix evaluated at the steady state has eigenvalues with non-zero real part. Then in a neighbourhood of the steady state, there is a continuous mapping from the solution of the linearised problem to the solution of the original nonlinear system.
A.1.1 Remark (Interpretation of the Hartman-Grobman theorem). The theorem essentially tells us that qualitatively the inferences from linear stability analysis are valid in a neighbourhood of the steady state. It is a rigorous justification of our formal Taylor expansion in which we neglected the higher order terms.
Similar results are available for systems of difference equations (the requirement now is that the Jacobian matrix has eigenvalues that do not lie on the unit circle, i.e., |λ₁,₂| ≠ 1) and for delay differential equations.
Note that the Hartman-Grobman theorem tells us nothing in the case that the eigenvalues
have zero real part. We understand such a setting with the aid of the Hopf bifurcation theorem,
which we state in an informal way in the following.
Consider a steady state N∗ of a two-component system of the form

    dN/dt = f(N, p),

where p ∈ ℝ is a parameter that we will regard as a bifurcation parameter. A Hopf bifurcation is said to occur if a pair of complex conjugate eigenvalues of the Jacobian matrix of the system
evaluated at N∗ crosses the imaginary axis with nonzero speed. Let J_p₀(N∗) denote the Jacobian matrix evaluated at the steady state N∗.
A Hopf bifurcation occurs at the point p₀ if

• the eigenvalues λ₁,₂ are complex in a neighbourhood of p₀, i.e., λ₁,₂(p) = α(p) ± β(p)i in a neighbourhood of p₀,

• α(p) = 0 at p = p₀, and

• ∂ₚα(p) ≠ 0 at p = p₀.

If such a bifurcation occurs, an asymptotically stable steady state loses stability, and the Hopf bifurcation theorem in fact says that for values of p close to p₀ a limit cycle or periodic solution arises, which may or may not be stable.
