- quick calculus brushup: We all remember what a derivative is.
Given a function (of a time or a space variable, say), it's defined as
the limit of the change in the function divided by the change in the
variable as the change in the variable gets smaller and smaller.
That is, for f(t), f'(t) = lim (DELTA f) / (DELTA t) as (DELTA t) --> 0.
This definition is the starting point for derivations of equations
involving derivatives in physical systems (differential equations),
and also provides a way to think about numerical solution of those equations.
So if (DELTA t) is small enough, the change (DELTA f) in f at t due to a
change (DELTA t) in t is approximately equal to f'(t)*(DELTA t).
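A quick numeric sketch of this approximation (the test function sin and
the step sizes here are my own illustrative choices, not from the notes):

```python
import math

def forward_difference(f, t, dt):
    """Approximate f'(t) by the forward difference (f(t + dt) - f(t)) / dt."""
    return (f(t + dt) - f(t)) / dt

# f(t) = sin(t), so the exact derivative is cos(t).
# The error shrinks as dt shrinks, as the limit definition suggests.
t = 1.0
for dt in (0.1, 0.01, 0.001):
    approx = forward_difference(math.sin, t, dt)
    print(dt, approx, abs(approx - math.cos(t)))
```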
- the simplest class of differential equations, ``ordinary differential
equations'' or ODEs, has only one independent variable, often time.
- these are ubiquitous; examples:
- electrical circuits; arise because the voltage across an
inductor is proportional to the derivative of the current through it,
and the current through a capacitor is proportional to the derivative
of the voltage across it. For examples of ODEs arising in simple
electrical circuits, see [GT96, section 5.7].
- orbital mechanics; arise because of Newton's laws of motion. Recall
that F = ma, and that a is the second derivative of position. Equations
for falling bodies, for example, lead to ODEs. See [GT96, chapter 3:
``The Motion of Falling Objects''].
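A tiny numeric check of the F = ma idea for a falling body (a sketch of my
own; the initial height, the value of g, and the step size are illustrative):

```python
g = 9.81  # acceleration due to gravity, m/s^2, near the Earth's surface

def position(t, y0=100.0, v0=0.0):
    """Height of a body falling from rest: y'' = -g, integrated twice."""
    return y0 + v0 * t - 0.5 * g * t * t

# The second central difference (y(t+dt) - 2*y(t) + y(t-dt)) / dt^2
# approximates the second derivative of position, i.e. the acceleration,
# and here it recovers -g.
dt, t = 1e-3, 2.0
accel = (position(t + dt) - 2.0 * position(t) + position(t - dt)) / dt**2
print(accel)
```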
- chemical reactions; arise because the rate at which chemical
concentrations change often depends on the concentrations themselves.
See [GT96, section 5.8, project 5.1].
- population dynamics; arise because the rate of change of a population
can often be related to the population itself. The simplest example is provided by
a single-species population. Suppose at any time there are N individuals.
Without limits on growth of any kind, the rate of change of N is proportional
to N. That is, N'(t) = rN, and the solution to this is exponential growth.
Of course in real situations there will be counterbalancing effects, like
predators, limitations on food, oxygen, contamination by waste products, etc.
For lots of examples, see [EK88, chapter 6: ``Applications of Continuous
Models to Population Dynamics''].
Similar laws govern the growth of tumors [EK88, section 6.1];
predator-prey systems (classically hares and lynxes in Canada)
[EK88, section 6.2]; parasite-host systems [EK88, section 6.5];
and epidemics [EK88, section 6.6].
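A short numeric sketch of the single-species case above: stepping N' = rN
forward in small increments tracks the exact exponential solution N0*e^(rt).
(The values of r, N0, and the time step are illustrative choices of mine.)

```python
import math

r, N0 = 0.5, 10.0        # illustrative growth rate and initial population
dt, T = 0.001, 4.0       # small time step and total time
steps = round(T / dt)

N = N0
for _ in range(steps):
    N += r * N * dt      # N changes by N'(t)*dt = r*N*dt in each small step

exact = N0 * math.exp(r * T)
print(N, exact)          # the stepped value tracks the exact exponential
```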
- and so on, ad infinitum; the two books [GT96] and [EK88]
(on reserve) are rich sources of examples in physics and biology, respectively.
- The state-space formulation is a very productive way to look
at ODEs. We view the dependent variable as an n-dimensional
state vector x,
and transform whatever differential equations we start with to the
form x'(t) = g(x(t), t). The interpretation of this form is that
at any time t we are at a point in n-space, and that the point and the
time t tell us the derivative of the vector x. That's all we need to
determine what happens next: in an infinitesimal time increment dt, the vector x
moves to the new point x + dx = x + g(x,t)*dt. (Of course if we approximate
this with a non-infinitesimal dt, we get only an approximation to reality;
in fact this simplest of numerical methods is called Euler's method,
about which more later.)
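The infinitesimal-step picture is exactly Euler's method once dt is taken
small but finite. A minimal sketch in state-space form (the particular g
here, for the ODE x' = -x, and the step size are my own choices):

```python
def euler_step(g, x, t, dt):
    """One Euler step: move the state x along its derivative g(x, t)."""
    return [xi + gi * dt for xi, gi in zip(x, g(x, t))], t + dt

# Example: the one-dimensional ODE x' = -x, whose exact solution is e^(-t).
g = lambda x, t: [-x[0]]
x, t, dt = [1.0], 0.0, 0.001
for _ in range(1000):          # integrate from t = 0 to t = 1
    x, t = euler_step(g, x, t, dt)
print(x[0])                    # close to e^(-1) ~ 0.3679
```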
- Note that the derivative of a vector is just the vector formed from
the derivatives of each component.
- When can we put things in state-space form? It turns out that we
can very often. (Question for cogitation: when can we not?)
Here's a simple but very typical example.
Many common physical systems can be described quite well
by a second-order (having second derivatives) ODE of the form
y'' + a*y' + b*y = f(t), where f(t) is an arbitrary ``driving''
function, and the dependent variable is the scalar y. This is
not in state space form. Such an equation can arise from
a tuned RLC electrical circuit, or a spring-mass-dashpot mechanical
system, for example.
Let x1 = y and x2 = y'. Then x1' = x2, and x2' = y'' =
-b*x1 -a*x2 + f(t). If we take the state vector to be [x1, x2],
the original equation is now in state-space form. In this case,
the state space is two-dimensional.
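The reduction above can be written out directly (a sketch; the coefficient
values a and b and the driving function f are placeholders I chose):

```python
a, b = 0.2, 1.0                # illustrative damping and stiffness values
f = lambda t: 0.0              # unforced case, for simplicity

def g(x, t):
    """State-space form of y'' + a*y' + b*y = f(t), with x = [x1, x2] = [y, y']."""
    x1, x2 = x
    return [x2, -b * x1 - a * x2 + f(t)]

# One Euler step from y = 1, y' = 0: position is unchanged to first order,
# while the velocity picks up -b*y*dt = -0.01.
x, t, dt = [1.0, 0.0], 0.0, 0.01
x = [xi + gi * dt for xi, gi in zip(x, g(x, t))]
print(x)
```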
- ODEs with two-dimensional state spaces can be viewed graphically
in a beautifully revealing way. We plot the trajectory of the solution
x(t) in the x-plane, that is, in the x1-x2 plane. (When x2 = x1',
this is called the phase plane.)
Note that this picture is complete only
for systems with two-dimensional state space, although sometimes
two variables from higher dimensional systems are plotted in a similar way.
The predator-prey system described by the Volterra-Lotka
equations provides surprisingly rich examples of many typical and
important kinds of behavior (see [Dic91] and [EK88]).
Historically, this system arose from the observation by Italian
fishermen that two fish species tended to oscillate in ways that
were correlated with each other.
The relatively simple Volterra-Lotka equations provide examples of
(1) periodic oscillations (closed contours), (2) stable foci
(convergence to a fixed point), and (3) limit cycles (closed
contours that attract nearby flow).
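As a sketch of the oscillatory behavior, here is a small Euler integration
of one standard form of the Volterra-Lotka equations, x1' = x1*(alpha - beta*x2)
and x2' = x2*(delta*x1 - gamma). The parameter values, initial state, and
step size are illustrative choices of mine, not from the notes.

```python
# Predator-prey sketch: state vector x = [prey, predators].
alpha, beta, delta, gamma = 1.0, 0.5, 0.2, 0.8

def g(x, t):
    prey, pred = x
    return [prey * (alpha - beta * pred),    # prey grow, and are eaten
            pred * (delta * prey - gamma)]   # predators eat, and die off

x, dt = [6.0, 2.0], 0.001
trajectory = [tuple(x)]
for _ in range(20000):                       # 20 time units
    x = [xi + gi * dt for xi, gi in zip(x, g(x, 0.0))]
    trajectory.append(tuple(x))

# Plotting these points in the x1-x2 plane traces a closed-ish orbit around
# the fixed point (gamma/delta, alpha/beta) = (4, 2); here we just confirm
# that the prey population rises and falls rather than settling down.
prey_values = [p for p, _ in trajectory]
print(min(prey_values), max(prey_values))
```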