The Azimuth Project
Blog - network theory (part 8)

This page is a blog article in progress, written by John Baez. To see a discussion of this article while it was being written, visit the Azimuth Forum. For the final polished version, go to the Azimuth Blog.

Summer vacation is over! Time to get back to work!

This month Brendan Fong is visiting the Centre for Quantum Technologies and working with me on stochastic Petri nets. He’s proved two interesting results, which he wants to explain.

To understand what he’s done, you need to know how to get the rate equation and the master equation from a stochastic Petri net. We’ve almost seen how. But it’s been a long time since the last article in this series, so today I’ll start with some review. And at the end, just for fun, I’ll say a bit more about how Feynman diagrams show up in this theory.

Since I’m an experienced teacher, I’ll assume you’ve forgotten everything I ever said. This has some advantages! I can change some of my earlier terminology—improve it a bit here and there—and you won’t even notice.

Stochastic Petri nets

Definition. A Petri net consists of a set S of species and a set T of transitions, together with a function

i : S \times T \to \mathbb{N}

saying how many things of each species appear in the input for each transition, and a function

o : S \times T \to \mathbb{N}

saying how many things of each species appear in the output.

We can draw pictures of Petri nets. For example, here’s a Petri net with two species and three transitions:

It should be clear that the transition ‘predation’ has one wolf and one rabbit as input, and two wolves as output.

A ‘stochastic’ Petri net goes further: it also says the rate at which each transition occurs.

Definition. A stochastic Petri net is a Petri net together with a function

r : T \to [0,\infty)

giving a rate constant for each transition.
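Before going further, it may help to see how little data this really is. Here is a minimal sketch in Python of a stochastic Petri net as a data structure, using our rabbits and wolves as the example. The class and field names, and the numerical rate constants, are my own inventions for illustration, not anything fixed by the definitions above.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    inputs: tuple[int, ...]    # how many things of each species this transition consumes
    outputs: tuple[int, ...]   # how many things of each species it produces
    rate: float                # the rate constant r for this transition

@dataclass
class StochasticPetriNet:
    species: list[str]
    transitions: list[Transition]

# The rabbit/wolf example: species 0 is 'rabbit', species 1 is 'wolf'.
# The rate constants 1.0, 0.01, 0.5 are made-up numbers standing in for beta, gamma, delta.
rabbits_and_wolves = StochasticPetriNet(
    species=['rabbit', 'wolf'],
    transitions=[
        Transition('birth',     inputs=(1, 0), outputs=(2, 0), rate=1.0),
        Transition('predation', inputs=(1, 1), outputs=(0, 2), rate=0.01),
        Transition('death',     inputs=(0, 1), outputs=(0, 0), rate=0.5),
    ],
)
```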

Master equation versus rate equation

Starting from any stochastic Petri net, we can get two things. First:

• The master equation. This says how the probability that we have a given number of things of each species changes with time.

• The rate equation. This says how the expected number of things of each species changes with time.

The master equation is stochastic: it describes how probabilities change with time. The rate equation is deterministic.

The master equation is more fundamental. It’s like the equations of quantum electrodynamics, which describe the amplitudes for creating and annihilating individual photons. The rate equation is less fundamental. It’s like the classical Maxwell equations, which describe changes in the electromagnetic field in a deterministic way. The classical Maxwell equations are an approximation to quantum electrodynamics. This approximation gets good in the limit where there are lots of photons all piling on top of each other to form nice waves.

Similarly, the rate equation can be derived from the master equation in the limit where the numbers of things of each species become large, and the fluctuations in these numbers become negligible.

But I won’t do this derivation today! Nor will I probe more deeply into the analogy with quantum field theory, even though that’s my ultimate goal. Today I’m content to remind you what the master equation and rate equation are.

The rate equation is simpler, so let’s do that first.

The Rate Equation

Suppose we have a stochastic Petri net with k different species. Let x_i be the number of things of the ith species. Then the rate equation looks like this:

\frac{d x_i}{d t} = ???

It’s really a bunch of equations, one for each 1 \le i \le k. But what is the right-hand side?

The right-hand side is a sum of terms, one for each transition in our Petri net. So, let’s start by assuming our Petri net has just one transition.

Suppose the ith species appears as input to this transition m_i times, and as output n_i times. Then the rate equation is

\frac{d x_i}{d t} = r (n_i - m_i) x_1^{m_1} \cdots x_k^{m_k}

where r is the rate constant for this transition.

That’s really all there is to it! But we can make it look nicer. Let’s make up a vector

x = (x_1, \dots , x_k) \in [0,\infty)^k

that says how many things there are of each species. Similarly let’s make up an input vector

m = (m_1, \dots, m_k) \in \mathbb{N}^k

and an output vector

n = (n_1, \dots, n_k) \in \mathbb{N}^k

for our transition. To be cute, let’s also define

x^m = x_1^{m_1} \cdots x_k^{m_k}

Then we can write the rate equation for a single transition like this:

\frac{d x}{d t} = r (n-m) x^m

Next let’s do a general stochastic Petri net, with lots of transitions. Let’s write T for the set of transitions and r(τ) for the rate constant of the transition τ ∈ T. Let n(τ) and m(τ) be the input and output vectors of the transition τ. Then the rate equation is:

\frac{d x}{d t} = \sum_{\tau \in T} r(\tau) (n(\tau) - m(\tau)) x^{m(\tau)}
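If you’d like to see this formula in code, here is a hedged sketch building on the data structure above (the function name rate_rhs is mine): it just adds up r(τ)(n(τ) − m(τ)) x^{m(τ)} over all the transitions.

```python
def rate_rhs(net: StochasticPetriNet, x: list[float]) -> list[float]:
    """Right-hand side of the rate equation:
    dx/dt = sum over transitions tau of r(tau) * (n(tau) - m(tau)) * x^m(tau)."""
    dxdt = [0.0] * len(net.species)
    for t in net.transitions:
        # x^m(tau): the product of x_i raised to the input multiplicity m_i(tau)
        x_to_the_m = 1.0
        for x_i, m_i in zip(x, t.inputs):
            x_to_the_m *= x_i ** m_i
        # each species i changes at rate r(tau) * (n_i(tau) - m_i(tau)) * x^m(tau)
        for i, (n_i, m_i) in enumerate(zip(t.outputs, t.inputs)):
            dxdt[i] += t.rate * (n_i - m_i) * x_to_the_m
    return dxdt
```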

For example, consider our rabbits and wolves:

Suppose

  • the rate constant for ‘birth’ is β

  • the rate constant for ‘predation’ is γ

  • the rate constant for ‘death’ is δ

Let x_1(t) be the number of rabbits and x_2(t) the number of wolves at time t. Then the rate equation looks like this:

\frac{d x_1}{d t} = \beta x_1 - \gamma x_1 x_2
\frac{d x_2}{d t} = \gamma x_1 x_2 - \delta x_2
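These are, of course, the classic Lotka–Volterra predator–prey equations. Just to see them doing something, here is a tiny self-contained sketch that integrates them with a crude forward-Euler step; the rate constants and initial populations are made-up numbers, purely for illustration.

```python
# Forward-Euler integration of the rabbit/wolf rate equation.
# beta, gamma, delta and the starting populations are invented values.
beta, gamma, delta = 1.0, 0.01, 0.5
x1, x2 = 100.0, 20.0          # rabbits, wolves
dt, steps = 0.001, 20000

for _ in range(steps):
    dx1 = beta * x1 - gamma * x1 * x2
    dx2 = gamma * x1 * x2 - delta * x2
    x1 += dt * dx1
    x2 += dt * dx2

print(f"after time {dt * steps:.0f}: {x1:.1f} rabbits, {x2:.1f} wolves")
```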

The Master Equation

Now let’s do something new. In Part 6 I explained how to write down the master equation for a stochastic Petri net with just one species. Now let’s generalize that. Luckily, the ideas are exactly the same.

So, suppose we have a stochastic Petri net with k different species. Let ψ_{n_1, …, n_k} be the probability that we have n_1 things of the first species, n_2 of the second species, and so on. The master equation will say how all these probabilities change with time.

To keep the notation clean, let’s introduce a vector

n = (n_1, \dots, n_k) \in \mathbb{N}^k

and let

\psi_n = \psi_{n_1, \dots, n_k}

Then, let’s take all these probabilities and cook up a formal power series that has them as coefficients: as we’ve seen, this is a powerful trick. To do this, we’ll bring in some variables z_1, …, z_k and write

z^n = z_1^{n_1} \cdots z_k^{n_k}

as a convenient abbreviation. Then any formal power series in these variables looks like this:

\Psi = \sum_{n \in \mathbb{N}^k} \psi_n z^n

We call Ψ a state if the probabilities sum to 1 as they should:

\sum_n \psi_n = 1

The simplest example of a state is a monomial:

z^n = z_1^{n_1} \cdots z_k^{n_k}

This is a state where we are 100% sure that there are n_1 things of the first species, n_2 of the second species, and so on. We call such a state a pure state, since physicists use this term to describe a state where we know for sure exactly what’s going on. Sometimes a general state, one that might not be pure, is called mixed.
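If it helps, here is one way to picture a state concretely in code, just a sketch of my own rather than anything from the article: store only the nonzero coefficients ψ_n, indexed by the vector n.

```python
# A state: a dictionary sending n = (n_1, ..., n_k) to the probability psi_n.
# Coefficients that are zero are simply left out.

def pure_state(n: tuple[int, ...]) -> dict[tuple[int, ...], float]:
    """The monomial z^n: we are 100% sure there are n_i things of the i-th species."""
    return {n: 1.0}

def is_state(psi: dict[tuple[int, ...], float]) -> bool:
    """Check that the coefficients are probabilities summing to 1."""
    return all(p >= 0 for p in psi.values()) and abs(sum(psi.values()) - 1.0) < 1e-9

print(is_state(pure_state((3, 2))))              # True: a pure state
print(is_state({(3, 2): 0.5, (4, 1): 0.5}))      # True: a mixed state
```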

The master equation says how a state evolves in time. It looks like this:

\frac{d}{d t} \Psi(t) = H \Psi(t)

So, I just need to tell you what H is!

It’s called the Hamiltonian. It’s a linear operator built from special operators that annihilate and create things of various species. Namely, for each species 1 \le i \le k we have an annihilation operator

a_i \Psi = \frac{d}{d z_i} \Psi

and a creation operator:

a_i^\dagger \Psi = z_i \Psi

How do we build H from these? Suppose we’ve got a stochastic Petri net whose set of transitions is T. As before, write r(τ) for the rate constant of the transition τ ∈ T, and let n(τ) and m(τ) be the input and output vectors of this transition. Then:

H = \sum_{\tau \in T} r(\tau) \, ({a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)}) \, a^{m(\tau)}

where as usual we’ve introduced some shorthand notations to keep from going insane. For example:

a^{m(\tau)} = a_1^{m_1(\tau)} \cdots a_k^{m_k(\tau)}

and

{a^\dagger}^{m(\tau)} = {a_1^\dagger}^{m_1(\tau)} \cdots {a_k^\dagger}^{m_k(\tau)}

Now, it’s not surprising that each transition τ contributes a term to H. It’s also not surprising that this term is proportional to the rate constant r(τ). The only tricky thing is the expression

({a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)}) \, a^{m(\tau)}

How can we understand it? The basic idea is this. We’ve got two terms here. The first term:

{a^\dagger}^{n(\tau)} a^{m(\tau)}

describes how m_i(τ) things of the ith species get annihilated, and n_i(τ) things of the ith species get created. Of course this happens thanks to our transition τ. The second term:

- {a^\dagger}^{m(\tau)} a^{m(\tau)}

is a bit harder to understand, but it says how the probability that nothing happens—that we remain in the same pure state—decreases as time passes. Again this happens due to our transition τ.

In fact, the second term must take precisely the form it does to ensure ‘conservation of total probability’. In other words: if the probabilities ψ_n sum to 1 at time zero, we want these probabilities to still sum to 1 at any later time. And for this, we need that second term to be what it is! In Part 6 we saw this in the special case where there’s only one species. The general case works the same way.
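Here is a hedged sketch of how all this looks on the dictionary representation of states from the sketch above; the helper names are mine. On basis vectors, the annihilation operator a_i = d/dz_i sends z^n to n_i z^(n − e_i), the creation operator a_i† = z_i sends z^n to z^(n + e_i), and apply_H assembles the Hamiltonian from a list of transitions (the Transition records from the very first sketch).

```python
from collections import defaultdict

# A state is a dict sending n = (n_1, ..., n_k) to the coefficient psi_n,
# as in the earlier sketch; 'transitions' are the Transition records from the first sketch.

def annihilate(i: int, psi: dict) -> dict:
    """a_i = d/dz_i: sends the basis vector z^n to n_i * z^(n - e_i)."""
    out = defaultdict(float)
    for n, c in psi.items():
        if n[i] > 0:
            m = list(n); m[i] -= 1
            out[tuple(m)] += n[i] * c
    return dict(out)

def create(i: int, psi: dict) -> dict:
    """a_i^dagger = multiplication by z_i: sends z^n to z^(n + e_i)."""
    out = defaultdict(float)
    for n, c in psi.items():
        m = list(n); m[i] += 1
        out[tuple(m)] += c
    return dict(out)

def apply_power(op, exponents: tuple[int, ...], psi: dict) -> dict:
    """Apply op(i, -) to psi, exponents[i] times for each species i."""
    for i, e in enumerate(exponents):
        for _ in range(e):
            psi = op(i, psi)
    return psi

def apply_H(transitions, psi: dict) -> dict:
    """H psi = sum over transitions of r(tau) ((a^dagger)^n(tau) - (a^dagger)^m(tau)) a^m(tau) psi."""
    out = defaultdict(float)
    for t in transitions:
        lowered = apply_power(annihilate, t.inputs, psi)             # a^m(tau) applied to psi
        for n, c in apply_power(create, t.outputs, lowered).items():
            out[n] += t.rate * c                                     # the (a^dagger)^n(tau) term
        for n, c in apply_power(create, t.inputs, lowered).items():
            out[n] -= t.rate * c                                     # minus the (a^dagger)^m(tau) term
    return dict(out)
```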

Let’s look at an example. Consider our rabbits and wolves yet again:

and again suppose the rate constants for birth, predation and death are β, γ and δ, respectively. We have

\Psi = \sum_n \psi_n z^n

where ψ_n = ψ_{n_1, n_2} is the probability of having n_1 rabbits and n_2 wolves. These probabilities evolve according to the equation

\frac{d}{d t} \Psi(t) = H \Psi(t)

where the Hamiltonian is

H = \beta B + \gamma C + \delta D

and B, C and D are operators describing birth, predation and death, respectively. (B is for birth, D is for death… and you can call predation ‘consumption’ if you want something that starts with C.) What are these operators? Just follow the rules I described:

B = {a_1^\dagger}^2 a_1 - a_1^\dagger a_1
C = {a_2^\dagger}^2 a_1 a_2 - a_1^\dagger a_2^\dagger a_1 a_2
D = a_2 - a_2^\dagger a_2

In each case, the first term is easy to understand:

  • Birth annihilates one rabbit and creates two rabbits.

  • Predation annihilates one rabbit and one wolf and creates two wolves.

  • Death annihilates one wolf.

The second term is trickier, but I told you how it works.
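As a quick sanity check on the claim about conservation of total probability, we can feed the rabbit/wolf transitions from the earlier sketches into apply_H and confirm that the coefficients of HΨ sum to zero, so evolving by dΨ/dt = HΨ doesn’t change the total probability. (Again, the rate constants here are the made-up numbers from the first sketch.)

```python
# Sanity check: H applied to a state has coefficients summing to zero,
# so total probability is conserved. Uses rabbits_and_wolves, pure_state
# and apply_H from the earlier sketches.
psi = pure_state((1, 1))                               # one rabbit and one wolf, for sure
h_psi = apply_H(rabbits_and_wolves.transitions, psi)

print(h_psi)                                           # the coefficients of H psi
print(sum(h_psi.values()))                             # 0.0 (up to rounding)
```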

Feynman diagrams

How do we solve the master equation? If we don’t worry about mathematical rigor too much, it’s easy. The solution of

\frac{d}{d t} \Psi(t) = H \Psi(t)

should be

\Psi(t) = e^{t H} \Psi(0)

and we can hope that

e^{t H} = 1 + t H + \frac{(t H)^2}{2!} + \cdots

so that

\Psi(t) = \Psi(0) + t H \Psi(0) + \frac{t^2}{2!} H^2 \Psi(0) + \cdots

Of course there’s always the question of whether this power series converges. In many contexts it doesn’t, but that’s not necessarily a disaster: the series can still be asymptotic to the right answer, or Borel summable to the right answer.
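If you want to see what the truncated series looks like numerically, here is a crude sketch reusing the dictionary-based helpers from the earlier sketches; the truncation order and the time t are arbitrary choices of mine. Since the coefficients of HΨ sum to zero, every term after the first contributes nothing to the total, so the truncated series keeps the probabilities summing to 1 up to rounding error.

```python
def evolve(transitions, psi0: dict, t: float, order: int = 20) -> dict:
    """Approximate psi(t) = exp(tH) psi(0) by its Taylor series, truncated at 'order'."""
    result = dict(psi0)
    term = dict(psi0)                                    # holds (t H)^j psi(0) / j!
    for j in range(1, order + 1):
        term = {n: t * c / j for n, c in apply_H(transitions, term).items()}
        for n, c in term.items():
            result[n] = result.get(n, 0.0) + c
    return result

psi_t = evolve(rabbits_and_wolves.transitions, pure_state((1, 1)), t=0.1)
print(sum(psi_t.values()))                               # 1.0, up to truncation and rounding
```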

But let’s not worry about these subtleties yet! Let’s just imagine our rabbits and wolves, with Hamiltonian

H = \beta B + \gamma C + \delta D

Now, imagine working out

\Psi(t) = \Psi(0) + t H \Psi(0) + \frac{t^2}{2!} H^2 \Psi(0) + \frac{t^3}{3!} H^3 \Psi(0) + \cdots

We’ll get lots of terms involving products of B, C and D hitting our original state Ψ(0). And we can draw these as diagrams! For example, suppose we start with one rabbit and one wolf. Then

\Psi(0) = z_1 z_2

And suppose we want to compute

H^3 \Psi(0) = (\beta B + \gamma C + \delta D)^3 \Psi(0)

as part of the task of computing Ψ(t). Then we’ll get lots of terms: 27, to be precise. Let’s take one of these terms, for example the one proportional to:

D C B \Psi(0)

We can draw this as a sum of Feynman diagrams, including this:

In this diagram, we start with one rabbit and one wolf at the top. As we read the diagram from top to bottom, first a rabbit is born (B), then predation occurs (C), and finally a wolf dies (D). The end result is again a rabbit and a wolf.

This is just one of four Feynman diagrams we should draw in our sum for D C B Ψ(0), since either of the two rabbits could have been eaten, and either wolf could have died. (We’ll check this count with a quick symbolic computation in a moment.) So, the end result of computing

H^3 \Psi(0)

will involve a lot of Feynman diagrams… and of course computing

\Psi(t) = \Psi(0) + t H \Psi(0) + \frac{t^2}{2!} H^2 \Psi(0) + \frac{t^3}{3!} H^3 \Psi(0) + \cdots

will involve even more, even if we get tired and give up after the first few terms. So, this Feynman diagram business may seem quite tedious… and it may not be obvious how it helps.

But it does, sometimes!
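By the way, the count of four diagrams above is easy to check symbolically. Here is a small sketch using sympy, keeping only the first, ‘something happens’ term of each of B, C and D, written as differential operators acting on polynomials in z_1 (rabbits) and z_2 (wolves); the names are mine. The coefficient 4 that comes out counts exactly those four histories ending with one rabbit and one wolf.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# Only the 'something happens' part of each operator:
birth     = lambda f: z1**2 * sp.diff(f, z1)       # (a_1^dagger)^2 a_1
predation = lambda f: z2**2 * sp.diff(f, z1, z2)   # (a_2^dagger)^2 a_1 a_2
death     = lambda f: sp.diff(f, z2)               # a_2

psi0 = z1 * z2                                     # start: one rabbit and one wolf
print(sp.expand(death(predation(birth(psi0)))))    # prints 4*z1*z2: four possible histories
```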

Now is not the time for me to describe the ‘practical’ benefits of Feynman diagrams. Instead, I’ll just point out one conceptual benefit. We started with what seemed like a purely computational chore, namely computing

\Psi(t) = \Psi(0) + t H \Psi(0) + \frac{t^2}{2!} H^2 \Psi(0) + \cdots

But then we saw, at least roughly, how this series could be seen as having a clear meaning! It can be written as a sum over diagrams, each of which represents a possible history of rabbits and wolves. So, it’s what physicists call a ‘sum over histories’.

Feynman invented the idea of a sum over histories in the context of quantum field theory. At the time this idea seemed quite mind-blowing, for various reasons. First, it involved elementary particles instead of everyday things like rabbits and wolves. Second, it involved complex ‘amplitudes’ instead of real probabilities. Third, it actually involved integrals instead of sums. And fourth, a lot of these integrals diverged, giving infinite answers that needed to be ‘cured’ somehow!

Now we’re seeing a sum over histories in a more down-to-earth context without all these complications. A lot of the underlying math is analogous… but now there’s nothing mind-blowing about it: it’s quite easy to understand!

category: blog