To install the package, please type `install.packages("waydown")` in your R console.
A single-species population dynamics model with Allee effect is governed by the following differential equation:
$$ \frac{dN}{dt} = r N \left( \frac{N}{A} - 1 \right) \left( 1 - \frac{N}{K} \right) $$
It is easy to see that this differential equation has three equilibrium points: N = 0, N = K and N = A. The first two are stable, while the latter, N = A, is unstable. We'll use the parameters r = 1, A = 0.5 and K = 1.
We can use our method `approxPot1D` to approximate the potential function at a set of points. First, we have to create the points, and then pass them to our algorithm:
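A minimal sketch of these steps, assuming `approxPot1D` takes the dynamics and the vector of points as its first two arguments (the grid limits and spacing are illustrative choices):

```r
library(waydown)

# Parameters
r <- 1
A <- 0.5
K <- 1

# Dynamics of the Allee model
f <- function(N) { r * N * (N/A - 1) * (1 - N/K) }

# Points where the potential will be approximated
Ns <- seq(0, 1.25, by = 0.01)

# Approximate the potential at those points
Vs <- approxPot1D(f, Ns)

# Plot the approximate potential
plot(Ns, Vs, type = "l", xlab = "N", ylab = "V")
```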
By plotting the result, we clearly see that the two stable equilibria appear at N = 0 and N = K = 1, and the unstable one at N = A = 0.5, as we expected.
In this section we'll apply our method to a collection of two-dimensional systems.
We generated some abstract, synthetic examples in order to test our method. Here we present some of them.
In this section we’ll deal with the two-dimensional differential equation given by:
$$ \begin{cases} \frac{dx}{dt} = f(x,y) = -x(x^2-1) \\ \frac{dy}{dt} = g(x,y) = -y(y^2-1) \end{cases} $$
This is a gradient system (because $\frac{\partial f}{\partial y} = \frac{\partial g}{\partial x}$ everywhere). This means that the gradient-curl decomposition has a zero curl term everywhere and, thus, there exists a well-defined potential. In particular, the potential can be derived analytically:
$$ V(x,y) = \frac{x^2}{4}(x^2 - 2) + \frac{y^2}{4}(y^2 - 2) + V_0 $$
Let's try to compute it using our algorithm. First, we'll code our dynamics in vector form:
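A minimal sketch, taking the state as a vector `x = c(x, y)`:

```r
# Dynamics of the gradient example
f <- function(x) { c(-x[1]*(x[1]^2 - 1),
                     -x[2]*(x[2]^2 - 1)) }
```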
Our region of interest is now two-dimensional. We therefore need two vectors to create our grid of points:
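For instance (the limits and spacing are illustrative):

```r
xs <- seq(-1.5, 1.5, by = 0.05)
ys <- seq(-1.5, 1.5, by = 0.05)
```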
Now we are ready to apply `approxPot2D`:
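A sketch, assuming `approxPot2D` takes the dynamics and the two grid vectors as its first arguments:

```r
result <- approxPot2D(f, xs, ys)
```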
`result` is a list that contains two fields:

- `result$V` contains the estimated values of the potential at each grid point
- `result$err` contains the estimated error at each grid point

By plotting them we see:
Since our example is a purely gradient system, it should not surprise us that the error is zero everywhere.
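A check along these lines (the exact expression is an assumption) produces the output below:

```r
# Is the estimated error zero at every grid point?
all(result$err == 0)
```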
## [1] TRUE
In this example we will apply our algorithm to the system given below:
$$ \begin{cases} \frac{dx}{dt} = f(x,y) = -y \\ \frac{dy}{dt} = g(x,y) = x \end{cases} $$

This is an extreme case: the gradient-curl decomposition gives a zero gradient part everywhere. Let's feed our algorithm with this example to see what happens:
First, we code the dynamics in vector form:
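A minimal sketch:

```r
# Pure rotation around the origin
f <- function(x) { c(-x[2],
                      x[1]) }
```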
Secondly, we define our region of interest:
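For instance (the limits are illustrative):

```r
xs <- seq(-2, 2, by = 0.05)
ys <- seq(-2, 2, by = 0.05)
```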
And then we are ready to apply our algorithm:
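As before:

```r
result <- approxPot2D(f, xs, ys)
```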
The resulting approximate potential is plotted below. Since this is a purely non-gradient system, we expect our pseudopotential not to be trustworthy. By calculating the error we can see that this is indeed the case.
The fact that the underlying equations are non-gradient has been captured by the algorithm.
Here we apply our methods to some dynamical equations well known in biology. While the abstract equations in the previous sections can be manipulated to increase or decrease their curl to gradient ratio, equations describing natural dynamical systems don’t allow such a manipulation. Once again, the error map will let us know if our system allows a pseudopotential or not.
A bistable network cell fate model can be described by the set of equations:
$$ \begin{cases} \frac{dx}{dt} = f(x,y) = b_x - r_x x + \frac{a_x}{k_x + y^n} \\ \frac{dy}{dt} = g(x,y) = b_y - r_y y + \frac{a_y}{k_y + x^n} \end{cases} $$
Such a system represents two genes (x and y) that inhibit each other. This circuit works as a toggle switch with two stable steady states, one with dominant x, the other with dominant y.
We can code it in vector form:
```r
# Parameters
bx <- 0.2
ax <- 0.125
kx <- 0.0625
rx <- 1
by <- 0.05
ay <- 0.1094
ky <- 0.0625
ry <- 1
n <- 4

# Dynamics
f <- function(x) {c(bx - rx*x[1] + ax/(kx + x[2]^n),
                    by - ry*x[2] + ay/(ky + x[1]^n))}
```
This set of equations is, in general, not gradient (because $\frac{\partial f}{\partial y} \neq \frac{\partial g}{\partial x}$). Nevertheless, we can use the method `approxPot2D` to compute an approximate potential.
First, we need to define our region of interest:
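For instance (the limits are illustrative, roughly covering both stable states):

```r
xs <- seq(0, 3, by = 0.05)
ys <- seq(0, 3, by = 0.05)
```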
And then we are ready to apply our algorithm:
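As in the previous examples:

```r
result <- approxPot2D(f, xs, ys)
```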
The resulting approximate potential is plotted below. Since this is not a gradient system, it is advisable to also plot the estimated error. The areas in green represent small approximation error, so the potential can be safely used in those regions.
The Selkov model for glycolysis reads as follows:
$$ \begin{cases} \frac{dx}{dt} & = -x + ay + x^2 y \\ \frac{dy}{dt} & = b - a y - x^2 y \end{cases} $$
where x and y represent the concentrations of two chemicals. If we fix a = 0.1, such a system shows a limit cycle for b ∈ [0.42, 0.79]. At each end of this interval, a Hopf bifurcation takes place.
We can code it in vector form:
```r
# Parameters
a <- 0.1
b <- 0.5

# Dynamics
f <- function(x) {c(-x[1] + a*x[2] + x[1]^2*x[2],
                    b - a*x[2] - x[1]^2*x[2])}
```
This system is interesting because the Jacobian is independent of b, so our error map will remain constant along the bifurcation. Let’s see what happens:
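A sketch of such an experiment, where the grid limits, the two values of b, and the local redefinition of the dynamics are illustrative choices:

```r
# Grid of points (limits chosen for illustration)
xs <- seq(0, 2.5, by = 0.05)
ys <- seq(0, 2.5, by = 0.05)

# Pseudopotential and error map for two values of b,
# one inside and one outside the limit-cycle window
results <- lapply(c(0.3, 0.5), function(b) {
  f_b <- function(x) {c(-x[1] + a*x[2] + x[1]^2*x[2],
                        b - a*x[2] - x[1]^2*x[2])}
  approxPot2D(f_b, xs, ys)
})

# If the error map is indeed independent of b, this difference
# should be negligible
max(abs(results[[1]]$err - results[[2]]$err))
```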
Here we will use a variation of the classical Lotka-Volterra predator-prey model: the Rosenzweig-MacArthur model, which adds a function g accounting for predator saturation and a carrying capacity k to the prey growth.

The dynamics, with x being the prey biomass and y the predator biomass, read:
$$ \begin{cases} \frac{dx}{dt} = r x \left(1 - \frac{x}{k}\right) - g(x) x y \\ \frac{dy}{dt} = e g(x) x y - m y \end{cases} $$
with g(x) being a saturation function:
$$ g(x) = \frac{1}{h + x} $$
We can code it in vector form:
```r
# Parameters
r <- 1
k <- 10
h <- 2
e <- 0.2
m <- 0.1

# Auxiliary function
g <- function(x) {1/(h + x)}

# Dynamics
f <- function(x) {c(r*x[1]*(1 - x[1]/k) - g(x[1])*x[1]*x[2],
                    e*g(x[1])*x[1]*x[2] - m*x[2])}
```
Such a system has a limit cycle attractor, as we can see by simulating one of its trajectories (in particular, the one beginning at x = 1 and y = 2):
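A minimal sketch, assuming the `deSolve` package is used for the numerical integration:

```r
library(deSolve)

# Wrap the dynamics in the signature expected by deSolve::ode
rhs <- function(t, state, parms) {
  list(f(state))
}

# Trajectory starting at x = 1, y = 2
traj <- ode(y = c(x = 1, y = 2), times = seq(0, 200, by = 0.1),
            func = rhs, parms = NULL)

# The phase-plane plot of the trajectory reveals the limit cycle
plot(traj[, "x"], traj[, "y"], type = "l",
     xlab = "prey biomass x", ylab = "predator biomass y")
```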
For such a system, we expect our pseudopotential to have a high error over large portions of the phase plane. Let’s check it. First, we need to define our region of interest:
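For instance (the limits are illustrative):

```r
xs <- seq(0, 10, by = 0.1)
ys <- seq(0, 5, by = 0.05)
```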
And then we are ready to apply our algorithm:
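As before:

```r
result <- approxPot2D(f, xs, ys)
```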
Even for highly non-gradient systems, our method will compute some pseudopotential. By plotting the estimated error, we can see that it is high almost everywhere. This means that our algorithm noticed that the system is highly non-gradient, and thus, the previously computed pseudopotential is of very limited use.