Poisson Process
Experiment 1: Emergence of the Poisson Process from Bernoulli Trials
1.1 Overview
The Poisson distribution arises naturally as a limit of the Binomial distribution when the number of trials becomes large and the success probability becomes small, such that the expected number of successes remains constant. This limit is a key idea in stochastic processes and underlies models of real-world events such as radioactive decay, call arrivals, and queue arrivals.
Let $X_n \sim \mathrm{Bin}(n, p)$ denote a Binomial random variable, where:
- $n$ is the number of independent Bernoulli trials,
- $p$ is the probability of success in each trial,
- $\lambda = np$ is the expected number of successes.
We are interested in the behavior of $X_n$ as $n \to \infty$ and $p \to 0$, such that $\lambda = np$ is held constant.
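The convergence can be seen numerically before it is proved. Below is a minimal NumPy sketch (not part of the original experiment; the value $\lambda = 4$, the grid of $n$ values, and the sample size are illustrative) that draws Binomial$(n, \lambda/n)$ samples for increasing $n$ and compares the empirical PMF with the Poisson$(\lambda)$ PMF.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0                       # lambda = n*p, held fixed (illustrative value)
ks = np.arange(15)
poisson_pmf = np.array([math.exp(-lam) * lam**k / math.factorial(k) for k in ks])

for n in (10, 100, 1000, 10_000):
    p = lam / n                                  # success probability shrinks as n grows
    samples = rng.binomial(n, p, size=200_000)   # X_n ~ Bin(n, lambda/n)
    empirical = np.array([(samples == k).mean() for k in ks])
    gap = np.abs(empirical - poisson_pmf).max()
    print(f"n={n:>6}  p={p:.4f}  max |empirical PMF - Poisson PMF| = {gap:.4f}")
```

The maximum gap shrinks as $n$ grows, which is the behavior made precise by the theorem below.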
1.3 Poisson Limit Theorem (Rigorous Proof)
Let $X_n \sim \mathrm{Bin}(n, \lambda/n)$. Then:
$$\lim_{n \to \infty} P(X_n = k) = \frac{e^{-\lambda}\lambda^k}{k!}, \quad \text{for all } k \in \mathbb{N}_0.$$
Proof:
$$P(X_n = k) = \binom{n}{k}\left(\frac{\lambda}{n}\right)^k\left(1-\frac{\lambda}{n}\right)^{n-k} = \frac{n(n-1)\cdots(n-k+1)}{k!}\cdot\frac{\lambda^k}{n^k}\cdot\left(1-\frac{\lambda}{n}\right)^n\left(1-\frac{\lambda}{n}\right)^{-k}.$$
Using the limits:
- $\dfrac{n(n-1)\cdots(n-k+1)}{n^k} \to 1$,
- $\left(1-\dfrac{\lambda}{n}\right)^n \to e^{-\lambda}$,
- $\left(1-\dfrac{\lambda}{n}\right)^{-k} \to 1$,
the expression converges to:
$$\frac{\lambda^k e^{-\lambda}}{k!}.$$
This proves convergence in distribution to the Poisson distribution.
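The limit can also be checked exactly, without sampling, by comparing the two PMFs directly. A short sketch assuming SciPy is available (parameter values are again illustrative):

```python
import numpy as np
from scipy.stats import binom, poisson

lam = 4.0
ks = np.arange(30)
target = poisson.pmf(ks, lam)                 # e^{-lambda} * lambda^k / k!

for n in (10, 100, 1000, 10_000):
    approx = binom.pmf(ks, n, lam / n)        # exact Bin(n, lambda/n) PMF
    tv = 0.5 * np.abs(approx - target).sum()  # (truncated) total variation distance
    print(f"n={n:>6}  TV distance ≈ {tv:.6f}")
```

The distance should shrink toward zero as $n$ increases, in line with the theorem.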
Experiment 2: Merging and Splitting of Poisson Processes
2.1 Merging of Independent Poisson Processes
Let $N_1(t) \sim \mathrm{Poisson}(\lambda_1 t)$ and $N_2(t) \sim \mathrm{Poisson}(\lambda_2 t)$, and assume the two processes are independent.
Define $N(t) = N_1(t) + N_2(t)$.
Theorem: Superposition
$$N(t) \sim \mathrm{Poisson}\big((\lambda_1 + \lambda_2)t\big).$$
Proof:
Let $X \sim \mathrm{Poisson}(\lambda_1 t)$ and $Y \sim \mathrm{Poisson}(\lambda_2 t)$. Then:
$$P(X+Y=k) = \sum_{i=0}^{k} P(X=i)\,P(Y=k-i)$$
$$= e^{-(\lambda_1+\lambda_2)t}\sum_{i=0}^{k}\frac{(\lambda_1 t)^i}{i!}\cdot\frac{(\lambda_2 t)^{k-i}}{(k-i)!}$$
$$= \frac{e^{-(\lambda_1+\lambda_2)t}}{k!}\sum_{i=0}^{k}\binom{k}{i}(\lambda_1 t)^i(\lambda_2 t)^{k-i} = \frac{e^{-(\lambda_1+\lambda_2)t}}{k!}(\lambda_1 t + \lambda_2 t)^k$$
$$= \frac{(\lambda_1+\lambda_2)^k t^k\, e^{-(\lambda_1+\lambda_2)t}}{k!}.$$
This is the PMF of a $\mathrm{Poisson}\big((\lambda_1+\lambda_2)t\big)$ random variable, so the merged process $N(t)$ is a Poisson process with rate $\lambda_1 + \lambda_2$.
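A quick Monte Carlo check of the superposition theorem, assuming NumPy and SciPy (the rates $\lambda_1 = 2$, $\lambda_2 = 3$ and $t = 1$ are illustrative choices, not values from the experiment): sample the two independent counts, add them, and compare the empirical distribution of the sum with the Poisson$((\lambda_1+\lambda_2)t)$ PMF.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
lam1, lam2, t = 2.0, 3.0, 1.0
n_rep = 200_000

n1 = rng.poisson(lam1 * t, size=n_rep)   # N1(t)
n2 = rng.poisson(lam2 * t, size=n_rep)   # N2(t), independent of N1(t)
merged = n1 + n2                         # N(t) = N1(t) + N2(t)

ks = np.arange(20)
empirical = np.array([(merged == k).mean() for k in ks])
theory = poisson.pmf(ks, (lam1 + lam2) * t)
print("max |empirical - Poisson((lam1+lam2)t)| =", np.abs(empirical - theory).max())
```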
2.2 Splitting (Thinning) of a Poisson Process
Let $N(t) \sim \mathrm{Poisson}(\lambda t)$, and suppose each event is assigned independently to stream 1 with probability $p$ and to stream 2 with probability $1-p$.
Define $N_1(t)$ and $N_2(t)$ to be the number of events in each stream.
Theorem: Thinning
$$N_1(t) \sim \mathrm{Poisson}(p\lambda t), \quad N_2(t) \sim \mathrm{Poisson}\big((1-p)\lambda t\big),$$
and $N_1(t)$, $N_2(t)$ are independent.
Proof:
Given $N(t) = n$, the number of events in stream 1 is $\mathrm{Bin}(n, p)$.
By the law of total probability:
$$P(N_1(t)=k) = \sum_{n=k}^{\infty} P(N(t)=n)\binom{n}{k}p^k(1-p)^{n-k}$$
$$= \sum_{n=k}^{\infty}\frac{(\lambda t)^n e^{-\lambda t}}{n!}\cdot\binom{n}{k}p^k(1-p)^{n-k}.$$
Rewriting:
$$= \frac{(p\lambda t)^k e^{-\lambda t}}{k!}\sum_{n=k}^{\infty}\frac{\big((1-p)\lambda t\big)^{n-k}}{(n-k)!} = \frac{(p\lambda t)^k e^{-\lambda t}}{k!}\,e^{(1-p)\lambda t} = \frac{(p\lambda t)^k e^{-p\lambda t}}{k!}.$$
This proves $N_1(t) \sim \mathrm{Poisson}(p\lambda t)$; a symmetric argument gives $N_2(t) \sim \mathrm{Poisson}((1-p)\lambda t)$. Independence follows by writing the joint probability $P(N_1(t)=k, N_2(t)=m) = P(N(t)=k+m)\binom{k+m}{k}p^k(1-p)^m$ and checking that it factors into the product of the two Poisson PMFs.
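The thinning result can be checked the same way. The sketch below (illustrative values $\lambda = 5$, $t = 1$, $p = 0.3$; NumPy and SciPy assumed) uses exactly the construction in the proof: given $N(t) = n$, stream 1 gets a $\mathrm{Bin}(n, p)$ share, and the empirical correlation between the two streams should be close to zero.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
lam, t, p = 5.0, 1.0, 0.3
n_rep = 200_000

n_total = rng.poisson(lam * t, size=n_rep)   # N(t)
n1 = rng.binomial(n_total, p)                # given N(t)=n, stream 1 is Bin(n, p)
n2 = n_total - n1                            # stream 2 gets the remaining events

ks = np.arange(15)
empirical = np.array([(n1 == k).mean() for k in ks])
theory = poisson.pmf(ks, p * lam * t)
print("max |empirical - Poisson(p*lam*t)| =", np.abs(empirical - theory).max())
print("corr(N1, N2) ≈", np.corrcoef(n1, n2)[0, 1])   # near 0, consistent with independence
```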
Note that in the experiments that follow, only the combined rate $\lambda_1 + \lambda_2$ will be varied.
Interarrival Times in a Poisson Process
Let $T_1, T_2, \ldots$ be the arrival times in a homogeneous Poisson process with rate $\lambda$.
Define the interarrival times $S_n = T_n - T_{n-1}$ (with $T_0 = 0$).
Theorem: Exponential Interarrival Times
Each $S_n \sim \mathrm{Exp}(\lambda)$, and the $S_n$ are i.i.d.
Proof:
The probability that no event occurs in [0,t] is:
$$P(S_1 > t) = P(N(t) = 0) = e^{-\lambda t}.$$
So the CDF of S1 is:
$$F_{S_1}(t) = 1 - e^{-\lambda t} \quad \Rightarrow \quad f_{S_1}(t) = \lambda e^{-\lambda t}, \quad t \geq 0.$$
By the strong Markov property and memorylessness, the process restarted at the first arrival time is again a Poisson process with rate $\lambda$, independent of the past. Hence $S_2, S_3, \ldots \sim \mathrm{Exp}(\lambda)$, i.i.d. and independent of $S_1$.
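To tie this back to Experiment 1, one can approximate a rate-$\lambda$ Poisson process by Bernoulli trials on a fine time grid and check that the gaps between successive events behave like $\mathrm{Exp}(\lambda)$ draws. A sketch with illustrative parameters ($\lambda = 2$, horizon $T = 5000$, step $\Delta t = 10^{-3}$), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, T, dt = 2.0, 5_000.0, 1e-3

steps = int(T / dt)
events = rng.random(steps) < lam * dt    # Bernoulli(lambda*dt) trial in each small slot
times = np.nonzero(events)[0] * dt       # approximate arrival times T_n
gaps = np.diff(times, prepend=0.0)       # S_n = T_n - T_{n-1}, with T_0 = 0

print("mean S_n ≈", gaps.mean(), "  (theory: 1/lambda =", 1 / lam, ")")
for t0 in (0.5, 1.0, 2.0):
    print(f"P(S > {t0}) ≈ {(gaps > t0).mean():.4f}  vs  e^(-lambda*t0) = {np.exp(-lam * t0):.4f}")
```

The empirical survival probabilities should track $e^{-\lambda t}$, matching the exponential interarrival law derived above.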