## Continuous Time Markov Chains

Most of our models will be formulated as continuous time Markov chains.
On this page we describe the workings of these processes and how we
simulate them.
**Definition.** We say that an event happens at **rate** r(t) if
the probability of an occurrence between times *t* and *t + dt* is
approximately r(t) *dt* when *dt* is small.

**Fact.** When r(t) is a constant *r*, the times
t[i] between occurrences are independent exponentials with mean
*1/r*, and the occurrences form a **Poisson process with
rate** *r*.
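This fact gives a direct way to simulate a Poisson process: accumulate independent exponential gaps until the time horizon is reached. A minimal sketch in Python (the function name is ours):

```python
import random

def poisson_process_times(rate, t_max, rng=random):
    """Event times of a Poisson process with constant rate on [0, t_max].

    The gaps between successive events are independent exponentials
    with mean 1/rate, so we simply accumulate exponential draws.
    """
    times = []
    t = rng.expovariate(rate)        # time of the first event
    while t < t_max:
        times.append(t)
        t += rng.expovariate(rate)   # next gap, mean 1/rate
    return times
```

Over a long window the number of events should be close to rate × t_max, which is a quick sanity check on the simulation.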

**Markov chains in continuous time**
are defined by giving the rates q(x,y) at which jumps occur from
state *x* to state *y*. In many cases (including
all our examples) q(x,y) can be written as
p(x,y)*Q*, where *Q* is a constant that represents the
total jump rate.
In this case we can construct the chain by taking one step according
to the transition probability p(x,y) at each point of a Poisson
process with rate *Q*.
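This construction can be sketched directly: wait an exponential time with mean *1/Q*, then take one step according to p(x,y). In the Python sketch below, `p_step` is our name for a function implementing one step of the transition probability:

```python
import random

def simulate_chain(x0, p_step, Q, t_max, rng=random):
    """Continuous-time chain with jump rates q(x, y) = Q * p(x, y).

    At each point of a rate-Q Poisson process, take one step of the
    discrete-time chain given by p_step(x, rng) -> next state.
    Returns the list of (time, state) pairs, starting from (0, x0).
    """
    path = [(0.0, x0)]
    t, x = 0.0, x0
    while True:
        t += rng.expovariate(Q)      # next point of the rate-Q process
        if t > t_max:
            break
        x = p_step(x, rng)           # one step of the embedded chain
        path.append((t, x))
    return path
```

Note that p(x, x) may be positive, in which case some of the Poisson events are fictitious jumps that leave the state unchanged. Discarding the times from `path` leaves just the sequence of states visited.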

If we throw away the information about the exponential holding times in
each state, the resulting sequence of states visited is a discrete
time Markov chain, which is called the
**embedded discrete time chain**.
In our simulations, each of the *C* sites flips at rate *Q*, so
the total flip rate at any one time is *CQ*.
Since the number of sites is typically
tens of thousands, we lose very little accuracy by simulating
*TCQ* steps and calling the result the state at time *T*.
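To see why the fixed-step approximation is harmless, note that the number of jumps by time *T* is Poisson with mean *TCQ*, so its relative fluctuation is 1/√(TCQ). A quick check (symbol names follow the text):

```python
import math

def relative_fluctuation(T, C, Q):
    """Std. dev. divided by mean for a Poisson(T*C*Q) jump count."""
    n = T * C * Q
    return 1.0 / math.sqrt(n)

# With tens of thousands of sites the relative error of using exactly
# T*C*Q steps is well under one percent.
```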

To build the discrete time chain we must pick from the various
transitions with probabilities proportional to their rates.
In our particle systems we can do this by picking
a site at random, applying a stochastic updating rule, and then
repeating the procedure. Because of this, continuous time is
occasionally referred to as **asynchronous updating**. This is
to distinguish that procedure
from the **synchronous updating** of a discrete time process
which updates all of the sites simultaneously.
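The asynchronous procedure described above can be sketched generically; the updating rule passed in is hypothetical and stands in for whatever stochastic rule a particular particle system uses:

```python
import random

def asynchronous_run(state, update_site, steps, rng=random):
    """Pick a uniformly random site and apply the updating rule, `steps` times.

    `state` is a mutable list of site values; `update_site(state, i, rng)`
    rewrites site i in place.  Running roughly T*C*Q such steps, where C
    is the number of sites and Q the per-site flip rate, approximates the
    continuous-time system at time T.
    """
    C = len(state)
    for _ in range(steps):
        i = rng.randrange(C)          # choose a site at random
        update_site(state, i, rng)    # apply the stochastic rule there
    return state
```

For the voter model, for instance, the rule would copy the value of a randomly chosen neighbor; synchronous updating would instead recompute every site at once from the current configuration.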
