Le Poisson

It was the spring of 2003; Toby Berger and I were both in the common room of Phillips Hall, and there was half an hour before his information theory class was set to start. Toby was advising my honors thesis, a project exploring mathematical models of how neurons transmit information. I approached him to ask about one of the assumptions he had made in his model, and a few minutes later he took out a sheet of paper and started deriving an analogue of the central limit theorem for Poisson processes. I tried to keep up as he wrote down the steps, justified assumptions where needed, and eventually arrived at a beautiful result. I left that discussion wanting the power to do what he had just done.

What Toby had done was derive a model for the probability distribution of the inter-arrival times between the spikes of a neuron, and he was explaining the rationale for one part of this model. Specifically, he was justifying why, even though the spike times of an individual neuron do not form a Poisson process (because of the refractory period), we could still treat the superposition of afferent spike arrivals into a neuron as Poisson.

I referenced Toby’s derivation in some unpublished work during grad school, but my thoughts never returned to it until a holiday party this past week. During the party, someone posed a question that made me think about the frequency with which a person does an activity, and my mind settled on Toby’s Poisson limit derivation. Like the spikes of a neuron, an activity, e.g., brushing one’s teeth, can be modeled as a renewal process. We were specifically talking about the distribution of inter-arrival times and how the average per-person behavior over a population might compare against the behavior of any given individual.

I was positing cases in which the distribution of an individual’s activity might be bimodal. For instance, some people might brush their teeth twice a day while others might only brush once a day. Toby’s limiting argument then started nagging at me: the inter-arrival distribution of a Poisson process isn’t bimodal, and I wanted to understand how taking the limit collapses the distribution into a single mode.

Toby’s Derivation (Asymptotic)

Toby’s argument is a good starting point for thinking about this problem. Let’s consider m renewal processes. Each process could represent the spikes of a single neuron or the times at which a specific person brushes his or her teeth. Given some fixed time t, we can model the time from t to the next arrival in each process as non-negative random variables

X_1, \ldots, X_m \sim F_X,

where F_X is a cumulative distribution function that we will assume is continuously differentiable over [0, \infty) and whose probability density function f_X(x) satisfies f_X(0) > 0. I am going to wave my hands and claim that this last assumption isn’t so severe if we assume that the processes are out of phase with one another, i.e., observed at independent random phases, which is an assumption one would want to make anyway.
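To make that hand-wave slightly more concrete (this aside is my own, not part of Toby’s derivation): if a renewal process with inter-arrival CDF G and finite mean \mu is observed at a uniformly random phase, i.e., in stationarity, then the time until the next arrival has the equilibrium (residual-life) density

f_X(x) = \frac{1 - G(x)}{\mu}~,

so f_X(0) = 1/\mu > 0 automatically (taking G(0) = 0 for a process with strictly positive inter-arrival times), even when the inter-arrival density itself is bimodal or vanishes at zero.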

Let’s superimpose all of the events of these m renewal processes on a single timeline. I will now derive the result that, as m \to \infty, the superimposed process looks Poisson. By definition, this means that the inter-arrival times will look exponentially distributed. To see this, note that we can represent the (scaled) time until the next arrival after our starting point t as

Z_m = m \cdot \min\{X_1, \ldots, X_m \}~.

The multiplicative scaling factor m in Z_m above can be thought of as a per-process normalization: as we superimpose more processes, the arrival points become more closely spaced, so the raw minimum shrinks roughly like 1/m.
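Before going through the derivation, here is a quick simulation sketch (my own illustration, not part of Toby’s argument) using an arbitrary bimodal toy distribution for the X_i: an equal mixture of Uniform(0, 1) and Uniform(4, 5), for which f_X(0) = 0.5. The derivation below says Z_m should look exponential with rate f_X(0), and the simulation bears that out:

import numpy as np

# Toy bimodal time-to-next-arrival distribution (a hypothetical choice for illustration):
# with probability 1/2 draw Uniform(0, 1), with probability 1/2 draw Uniform(4, 5).
# Its density at zero is f_X(0) = 0.5.
def sample_X(shape, rng):
    low = rng.uniform(0.0, 1.0, shape)
    high = rng.uniform(4.0, 5.0, shape)
    return np.where(rng.random(shape) < 0.5, low, high)

rng = np.random.default_rng(0)
m, trials = 200, 20_000
f_X_at_0 = 0.5

# Z_m = m * min(X_1, ..., X_m), simulated `trials` times.
Z = m * sample_X((trials, m), rng).min(axis=1)

# If Z_m is approximately Exponential(rate = f_X(0)), its mean should be near
# 1 / f_X(0) = 2 and P(Z_m >= 1) should be near exp(-0.5) ~ 0.607.
print("empirical mean of Z_m:  ", Z.mean())
print("empirical P(Z_m >= 1):  ", (Z >= 1.0).mean())
print("exponential prediction: ", np.exp(-f_X_at_0 * 1.0))

Despite the two well-separated modes of the toy distribution, the simulated Z_m matches the exponential prediction, which is exactly the collapse I wanted to understand.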

The goal is to argue that Z_m is asymptotically exponentially distributed. To do this, we can decompose the probability as follows:

  1. By definition, \mathbb{P}(Z_m \geq z) =\mathbb{P}(X_1 \geq z/m, \ldots, X_m \geq z/m )
  2. By independence, \mathbb{P}(Z_m \geq z) = \prod_{i=1}^m \mathbb{P}(X_i \geq z/m)
  3. Because the X_i are identically distributed, \mathbb{P}(Z_m \geq z) = (1 - F_X(z/m))^m
  4. Taking the limit as m \to \infty via L’Hospital’s rule, we have that \lim_{m \to \infty} \mathbb{P}(Z_m \geq z) = e^{-z f_X(0)} (the computation is spelled out just after this list)
  5. This limiting survival function is that of an exponential distribution, so the inter-arrival times are asymptotically exponential and, by definition, the limiting arrival process is Poisson.
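For completeness, here is the computation behind step 4 spelled out (the details are my reconstruction of the step Toby did on the board). Writing \ln \mathbb{P}(Z_m \geq z) = m \ln(1 - F_X(z/m)) and substituting u = 1/m,

\lim_{m \to \infty} m \ln(1 - F_X(z/m)) = \lim_{u \downarrow 0} \frac{\ln(1 - F_X(zu))}{u} = \lim_{u \downarrow 0} \frac{-z f_X(zu)}{1 - F_X(zu)} = -z f_X(0)~,

where the second equality is L’Hospital’s rule applied to the 0/0 form and the last uses F_X(0) = 0 together with the continuity of f_X at zero. Exponentiating gives \lim_{m \to \infty} \mathbb{P}(Z_m \geq z) = e^{-z f_X(0)}.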

Finite Derivation

Let’s return to step 3 in Toby’s derivation, which is just before one takes the limit. We can rewrite this as

\mathbb{P}(Z_m \geq z) = e^{m \ln (1 - F_X(z/m))}~.

We can now apply one of the fundamental inequalities of information theory to the equation above: \ln x \leq x - 1, which, applied to 1/x, can also be written as \ln x \geq 1 - 1/x. By monotonicity of the exponential function, these inequalities give the upper and lower bounds

e^{-\frac{m F_X(z/m)}{1 - F_X(z/m)}} \leq \mathbb{P}(Z_m \geq z) \leq e^{-m F_X(z/m)}~.

Note that the denominator 1 - F_X(z/m) in the exponent of the lower bound converges to 1 as m \to \infty. Furthermore, as m \to \infty, the quantity m F_X(z/m) approaches z f_X(0), since m F_X(z/m) = z \cdot \frac{F_X(z/m) - F_X(0)}{z/m} is just a difference quotient (recall F_X(0) = 0); this yields the exponential limit of the inter-arrival distribution for both bounds. Examining m F_X(z/m) more closely, the argument z/m means that only the behavior of F_X in a shrinking neighborhood of 0 matters, so any modes of the density away from 0 are effectively squeezed out; this is a direct consequence of scaling the \min operation by the multiplicative factor m.
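To see the bounds in action, here is a small numerical check (again my own sketch, reusing the same toy mixture of Uniform(0, 1) and Uniform(4, 5) from above, so that F_X is available in closed form):

import math

# CDF of the toy mixture 1/2 Uniform(0, 1) + 1/2 Uniform(4, 5); its density at zero is 0.5.
def F_X(x):
    return min(max(0.5 * x, 0.0), 0.5) + min(max(0.5 * (x - 4.0), 0.0), 0.5)

z = 1.0
for m in (5, 50, 500, 5000):
    F = F_X(z / m)
    exact = (1.0 - F) ** m                # step 3: P(Z_m >= z) = (1 - F_X(z/m))^m
    upper = math.exp(-m * F)              # from ln x <= x - 1
    lower = math.exp(-m * F / (1.0 - F))  # from ln x >= 1 - 1/x
    print(f"m={m:5d}  lower={lower:.6f}  exact={exact:.6f}  upper={upper:.6f}")
print("limit e^(-z f_X(0)) =", math.exp(-0.5 * z))

Both bounds squeeze the exact probability toward e^{-z f_X(0)}, and nothing about the second mode of the toy distribution survives: once m is even moderately large, z/m never reaches the region around 4 where the other mode lives.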
