Computing Pi using the Monte Carlo Method
The Monte Carlo method is a computational technique that relies on repeated random sampling to obtain numerical results. The underlying idea is to use randomness to solve problems that are deterministic in principle. Monte Carlo methods are often applied to physical and mathematical problems and are most useful when other approaches are difficult or impossible. They are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
To estimate the value of Pi, we can use the Monte Carlo method. Imagine a square with side length 2r, centered at the origin. A circle with radius r is inscribed within this square. The area of the square is (2r)^2 = 4r^2, and the area of the circle is πr^2. The ratio of the area of the circle to the area of the square is (πr^2) / (4r^2) = π/4.
If we randomly generate a large number of points within the square, the ratio of the number of points that fall inside the circle to the total number of points generated should be approximately equal to the ratio of the areas, which is π/4.
So, if we generate N points, and M of them fall inside the circle, then M/N ≈ π/4, which means π ≈ 4 * (M/N).
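The following is a minimal Python sketch of this procedure (the function name estimate_pi and the choice r = 1 are illustrative assumptions, not part of the original description):

```python
import random

def estimate_pi(n_points, seed=None):
    """Estimate pi by sampling points uniformly in the square [-1, 1] x [-1, 1]
    and counting how many land inside the inscribed circle of radius r = 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # point falls inside the circle
            inside += 1
    return 4.0 * inside / n_points  # pi ~= 4 * (M / N)

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        print(f"N = {n:>10,}: pi ~= {estimate_pi(n, seed=42):.6f}")
```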
Chernoff Bound and Convergence Analysis
The Chernoff bound provides a theoretical guarantee on the accuracy of our Monte Carlo estimate. Each point we generate can be treated as an independent Bernoulli trial, where "success" means the point falls inside the circle. The probability of success is p = π/4.
Let X be the number of points that fall inside the circle out of N total points. Then X follows a binomial distribution with parameters N and p = π/4, so its expected value is Np. The multiplicative Chernoff bound states that for any 0 < ε < 1:

P(|X - Np| ≥ ε * Np) ≤ 2 * exp(-ε^2 * Np / 3)

This means that the probability of our estimate deviating from the true value by more than a factor of ε decreases exponentially with N. In other words, as we increase the number of sample points N, we gain exponentially stronger confidence that our estimate is close to the true value of π.
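To illustrate this concentration empirically, here is a small Python sketch (the helper names estimate_pi and deviation_rate are chosen for illustration) that repeats the experiment many times for several values of N and compares the observed fraction of runs missing π by a relative error ε against the corresponding Chernoff bound:

```python
import math
import random

def estimate_pi(n_points, rng):
    # Sample points in [-1, 1] x [-1, 1]; count those inside the unit circle.
    inside = 0
    for _ in range(n_points):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

def deviation_rate(n_points, eps, trials=100, seed=0):
    # Fraction of independent runs whose estimate misses pi by more than
    # a relative error of eps (the event bounded by the Chernoff inequality).
    rng = random.Random(seed)
    misses = sum(
        1
        for _ in range(trials)
        if abs(estimate_pi(n_points, rng) - math.pi) > eps * math.pi
    )
    return misses / trials

if __name__ == "__main__":
    eps = 0.01
    p = math.pi / 4
    for n in (1_000, 10_000, 100_000):
        bound = min(1.0, 2 * math.exp(-eps ** 2 * n * p / 3))
        print(f"N = {n:>7,}: observed miss rate = {deviation_rate(n, eps):.2f}, "
              f"Chernoff bound = {bound:.2f}")
```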
For a desired relative error bound ε and confidence level (1 - δ), we can determine the required number of samples N by requiring the right-hand side of the bound above to be at most δ, which gives:

N ≥ 3 * ln(2/δ) / (ε^2 * p)

where p = π/4. This gives us a concrete relationship between the number of samples, the accuracy of our estimate, and our confidence in that accuracy.
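As a sketch, this sample-size calculation can be carried out directly from the bound (the function name required_samples is illustrative; note also that p = π/4 is the very quantity being estimated, so in practice one can substitute a conservative lower bound such as p ≥ 0.75, which only increases N):

```python
import math

def required_samples(eps, delta, p=math.pi / 4):
    """Smallest N for which the two-sided Chernoff bound
    2 * exp(-eps^2 * N * p / 3) is at most delta."""
    return math.ceil(3 * math.log(2 / delta) / (eps ** 2 * p))

if __name__ == "__main__":
    # e.g. relative error at most 0.1% with 99% confidence
    print(required_samples(eps=0.001, delta=0.01))
```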
In this experiment, we will simulate this process to estimate the value of Pi.