A sequence of random variables \(X_n\) converges in probability to \(X = 0\) if, for every small number \(\epsilon > 0\), the probability that \(X_n\) differs from 0 by more than \(\epsilon\) vanishes as \(n\) grows. Formally:
\( \lim_{n \to \infty} P(|X_n - 0| > \epsilon) = 0 \)
For this specific experiment, where \(X_n\) takes only the values 0 and 1, the condition simplifies to requiring that the probability of \(X_n\) being 1 approaches zero: \( \lim_{n \to \infty} P(X_n = 1) = 0 \).
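To see why the general condition reduces to this, note that since \(X_n\) takes only the values 0 and 1, for any \(\epsilon\) with \(0 < \epsilon < 1\) the event \(\{|X_n - 0| > \epsilon\}\) is exactly the event \(\{X_n = 1\}\):

\[ P(|X_n - 0| > \epsilon) = P(X_n = 1) \quad \text{for } 0 < \epsilon < 1, \]

and for \(\epsilon \ge 1\) the event is empty, so its probability is already 0.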
This is different from almost sure convergence, which would require that for almost every outcome \(\omega\), the sequence of values \(X_n(\omega)\) eventually becomes 0 and *stays* 0 forever. As you will see, this is not the case in this experiment, which makes it a classic example of a sequence that converges in probability but not almost surely.
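To make the failure concrete, here is a minimal Python sketch. It assumes the standard "typewriter" construction (write \(n = 2^k + j\) with \(0 \le j < 2^k\) and set \(X_n(\omega) = 1\) exactly when \(\omega\) lies in the interval \([j/2^k, (j+1)/2^k)\)); the intervals used in this article's animation may differ, but any scheme whose shrinking interval repeatedly sweeps the unit interval behaves the same way. For a fixed \(\omega\), \(X_n(\omega)\) returns to 1 once per sweep, so it never settles at 0:

```python
import math

def X(n, omega):
    """Assumed 'typewriter' indicator: X_n(omega) = 1 iff omega lies in
    the n-th interval. Writing n = 2**k + j (0 <= j < 2**k), the interval
    is [j / 2**k, (j + 1) / 2**k), of width 2**-k."""
    k = int(math.log2(n))        # sweep number: interval width is 2**-k
    j = n - 2**k                 # position of the interval in this sweep
    return 1 if j / 2**k <= omega < (j + 1) / 2**k else 0

omega = 0.37                     # one fixed outcome
hits = [n for n in range(1, 2**12) if X(n, omega) == 1]
print(hits)  # one hit per sweep, forever: [1, 2, 5, 10, 21, 43, 87, ...]
```

Every sweep contains exactly one interval covering \(\omega\), so the set \(\{n : X_n(\omega) = 1\}\) is infinite for every \(\omega\); that is precisely why pointwise (and hence almost sure) convergence fails here.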
The random variables \(X_n\) are defined so that the width of the interval on which \(X_n = 1\) shrinks as \(n\) grows. This width is exactly the probability \(P(X_n = 1)\). The animation below demonstrates that this probability approaches zero.
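Under the same assumed typewriter construction, the \(n\)-th interval has width \(2^{-k}\) with \(k = \lfloor \log_2 n \rfloor\), so \(P(X_n = 1) \to 0\). A quick Monte Carlo check, drawing \(\omega\) uniformly from \([0, 1)\) (the assumption under which width equals probability), agrees with the exact width:

```python
import math
import random

def X(n, omega):
    # Same assumed typewriter construction as in the previous sketch.
    k = int(math.log2(n))
    j = n - 2**k
    return 1 if j / 2**k <= omega < (j + 1) / 2**k else 0

for n in (10, 100, 1_000, 10_000):
    width = 2.0 ** -int(math.log2(n))     # exact P(X_n = 1): interval width
    trials = 200_000
    est = sum(X(n, random.random()) for _ in range(trials)) / trials
    print(f"n={n:>6}  width={width:.6f}  estimate={est:.6f}")
```

The estimated probabilities shrink toward zero alongside the exact widths, which is convergence in probability in action, even though each individual sample path keeps flashing back to 1.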