🧮 MPI Pi Calculation Lab

Parallel Computing Simulation


🧮 MPI Pi Calculation Guide

🎯 Calculation Methods

  • 🎯 Monte Carlo Method:
    • Randomly throws "darts" at a unit square containing a quarter circle
    • Counts how many points land inside versus outside the quarter circle
    • Uses the ratio to estimate π: π ≈ 4 × (points_inside / total_points)
    • More random samples = better accuracy
    • Visual: Red dots (outside) and green dots (inside) on the quarter circle
    • A minimal serial sketch of the dart test appears right after this list
  • 📊 Riemann Sum Method:
    • Approximates the area under the quarter circle using rectangular slices
    • Each slice has width Δx and a height given by the circle equation
    • Sums all slice areas to approximate the quarter circle area
    • Uses the area to calculate π: π ≈ 4 × quarter_circle_area
    • Visual: Colored rectangles filling the space under the curve
    • A serial sketch of this method appears under Mathematical Background below
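For concreteness, here is a minimal serial C sketch of the Monte Carlo dart test described above. The function name monte_carlo_pi and the fixed seed are illustrative; the simulation's downloadable code may be organized differently.

```c
#include <stdio.h>
#include <stdlib.h>

/* Serial Monte Carlo estimate of pi: throw n random "darts" at the
   unit square and count how many land inside the quarter circle. */
double monte_carlo_pi(long n, unsigned int seed)
{
    long inside = 0;
    for (long i = 0; i < n; i++) {
        /* Random point (x, y) in the unit square. */
        double x = (double)rand_r(&seed) / RAND_MAX;
        double y = (double)rand_r(&seed) / RAND_MAX;
        if (x * x + y * y <= 1.0)   /* inside the quarter circle */
            inside++;
    }
    return 4.0 * (double)inside / (double)n;
}

int main(void)
{
    printf("pi ~= %f\n", monte_carlo_pi(1000000, 42));
    return 0;
}
```

In the parallel version, each MPI process runs this same loop over its own share of the points, and only the per-process counts are combined at the end.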

🔧 System Configuration

  • Number of Processes: Choose 2, 4, 8, or 16 MPI processes for parallel computation
  • Precision Settings:
    • Monte Carlo: Number of random points (1K to 1M)
    • Riemann Sum: Number of rectangular slices (1K to 1M)
    • Higher precision = better accuracy but longer computation time
  • Animation Speed: Control visualization speed (0.25x to 3x) for better observation
  • Parallel Strategy: Work is divided as equally as possible among all processes; a chunking sketch follows this list
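The "divided equally" strategy has one wrinkle worth showing: when the total doesn't divide evenly by the process count, the remainder has to go somewhere. Below is a hedged sketch of one common convention; items_for_rank is a hypothetical helper name, not necessarily what the downloadable code uses.

```c
/* Split `total` work items as evenly as possible among `num_procs`
   ranks: the first (total % num_procs) ranks each take one extra item. */
long items_for_rank(long total, int num_procs, int rank)
{
    long base  = total / num_procs;   /* every rank gets at least this many */
    long extra = total % num_procs;   /* leftover items spread across ranks */
    return base + (rank < extra ? 1 : 0);
}
```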

🚀 MPI Parallel Algorithm

  1. Phase 1 - Work Distribution (Scatter):
    • Master process divides the total work among all processes
    • Each process gets an approximately equal number of points/slices
    • Process assignment is shown in the process grid with different colors
  2. Phase 2 - Parallel Computation:
    • All processes work simultaneously on their assigned portion
    • Monte Carlo: Generate random points and test whether each lies inside the circle
    • Riemann: Calculate rectangle areas for the assigned x-intervals
    • Each process maintains its own partial result
  3. Phase 3 - Result Aggregation (Reduce):
    • Master process collects the partial results from all workers
    • Combines all partial results into the final π estimate
    • Displays the final calculated value and its accuracy
    • A minimal MPI C sketch of all three phases follows this list
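These three phases map directly onto MPI calls. The following is a minimal MPI C sketch for the Monte Carlo method, assuming POSIX rand_r for rank-dependent random streams and a hard-coded point count; the downloadable implementation presumably adds timing, error handling, and the Riemann variant. Note that no point data actually needs to be scattered here: broadcasting the total count is enough, because each rank can derive its own share from its rank.

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Phase 1 - work distribution: broadcast the total point count;
       each rank derives its own share from its rank. */
    long total = 1000000;
    MPI_Bcast(&total, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    long mine = total / size + (rank < total % size ? 1 : 0);

    /* Phase 2 - parallel computation: each rank throws its own darts,
       with a rank-dependent seed so the random streams differ. */
    unsigned int seed = 12345u + (unsigned int)rank;
    long inside = 0;
    for (long i = 0; i < mine; i++) {
        double x = (double)rand_r(&seed) / RAND_MAX;
        double y = (double)rand_r(&seed) / RAND_MAX;
        if (x * x + y * y <= 1.0)
            inside++;
    }

    /* Phase 3 - result aggregation: sum the partial counts on rank 0. */
    long total_inside = 0;
    MPI_Reduce(&inside, &total_inside, 1, MPI_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %f\n", 4.0 * (double)total_inside / (double)total);

    MPI_Finalize();
    return 0;
}
```

A typical build-and-run sequence would be mpicc pi_mc.c -o pi_mc followed by mpirun -np 4 ./pi_mc.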

📊 Understanding the Visualization

  • Process Grid: Shows all MPI processes with their current status and workload
  • Process Colors: Each process has a unique color matching its contribution in the visualizations
  • Monte Carlo Visualization:
    • Quarter circle with unit square boundary
    • Green dots = points inside circle (count towards π)
    • Red dots = points outside circle
    • Real-time ratio updates as points are added
  • Riemann Sum Visualization:
    • Graph showing quarter circle curve y = √(1-x²)
    • Colored rectangles approximating area under curve
    • Each process color corresponds to its assigned x-intervals
    • Rectangle heights calculated from circle equation
  • Process States:
    • Idle: Waiting for work assignment
    • Computing: Actively calculating assigned work
    • Finished: Completed computation and sent results

🔬 Mathematical Background

  • Monte Carlo Principle:
    • Circle equation: x² + y² = 1 (unit circle)
    • Quarter circle area = π/4
    • Random point (x,y) is inside if x² + y² ≤ 1
    • π = 4 × (inside_count / total_count)
  • Riemann Sum Principle:
    • Quarter circle function: f(x) = √(1-x²) for 0 ≤ x ≤ 1
    • Area under curve = ∫₀¹ √(1-x²) dx = π/4
    • Approximation: Σ f(xᵢ) × Δx where Δx = 1/n
    • π ≈ 4 × Σ √(1-xᵢ²) × (1/n)
  • Accuracy Factors:
    • Monte Carlo: Statistical method - the error shrinks like 1/√n, so accuracy improves with √n
    • Riemann Sum: Deterministic method - the error shrinks roughly like 1/n
    • Both methods converge to the true value of π as n grows; a serial Riemann sum sketch follows this list
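As promised above, here is a serial C sketch of the Riemann sum for f(x) = √(1-x²). Sampling each slice at its midpoint is one common choice; the simulation may sample elsewhere, and riemann_pi is an illustrative name. Compile with -lm for sqrt.

```c
#include <math.h>
#include <stdio.h>

/* Riemann sum for the quarter circle f(x) = sqrt(1 - x^2) on [0, 1],
   using n slices of width dx = 1/n, sampled at slice midpoints. */
double riemann_pi(long n)
{
    double dx = 1.0 / (double)n;
    double area = 0.0;
    for (long i = 0; i < n; i++) {
        double x = ((double)i + 0.5) * dx;   /* midpoint of slice i  */
        area += sqrt(1.0 - x * x) * dx;      /* slice area f(x) * dx */
    }
    return 4.0 * area;   /* the quarter-circle area is pi/4 */
}

int main(void)
{
    printf("pi ~= %.10f\n", riemann_pi(1000000));
    return 0;
}
```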

⚡ Parallel Computing Benefits

  • Linear Scalability: Both algorithms are "embarrassingly parallel" - perfect for MPI
  • Load Balancing: Work divided equally among processes (points_per_process = total/num_processes)
  • Minimal Communication: Only initial scatter and final reduce operations
  • Independent Computation: Processes don't need to communicate during calculation phase
  • Speedup Potential: Near-linear speedup possible with sufficient work per process
  • Real-world Applications: Same patterns used in weather simulation, financial modeling, etc.

🎯 Recommended Experiments

  1. Method Comparison:
    • Run both Monte Carlo and Riemann Sum with the same precision (10K)
    • Compare their accuracy and visual differences
    • Observe how each method approximates π
  2. Precision Impact:
    • Start with 1K precision, gradually increase to 1M
    • Watch accuracy improve with higher precision
    • Note computational time differences
  3. Parallel Efficiency:
    • Test same problem with 2, 4, 8, and 16 processes
    • Observe communication overhead vs computation time
    • Find optimal process count for different precision levels
  4. Statistical Analysis:
    • Run multiple Monte Carlo simulations with the same parameters
    • Compare the variation across runs (a consequence of random sampling)
    • Note that the Riemann Sum gives identical results every run
  5. Visual Learning:
    • Use slow animation speed (0.5x) to watch point placement
    • Observe pattern formation in both visualizations
    • See how process colors distribute work visually

💾 MPI Code Download

  • Complete Implementation: Working MPI C code for both calculation methods
  • Educational Structure: Clear separation of scatter, compute, and reduce phases
  • Performance Timing: Built-in timing functions to measure parallel efficiency
  • Random Number Generation: Proper parallel random number handling (e.g., rank-dependent seeds, as in the sketch earlier)
  • Error Handling: Robust MPI error checking and cleanup
  • Compilation Guide: Instructions for mpicc compilation and mpirun execution; a timing sketch follows this list
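The downloaded file itself isn't reproduced here, but the timing feature it advertises almost certainly rests on the standard MPI_Wtime idiom, sketched below. timed_compute is a hypothetical wrapper; reporting the slowest rank is one common convention, since the slowest rank bounds the overall parallel runtime.

```c
#include <stdio.h>
#include <mpi.h>

/* Bracket the compute phase with MPI_Wtime() on every rank and
   report the slowest rank's elapsed time from rank 0. */
void timed_compute(int rank, void (*compute)(int))
{
    MPI_Barrier(MPI_COMM_WORLD);        /* start all ranks together */
    double t0 = MPI_Wtime();
    compute(rank);                      /* per-rank work goes here  */
    double elapsed = MPI_Wtime() - t0;

    double slowest = 0.0;
    MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);
    if (rank == 0)
        printf("compute phase: %.3f s (slowest rank)\n", slowest);
}
```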

🔍 Key Learning Outcomes

  • Parallel Algorithm Design: How to structure problems for parallel execution
  • MPI Communication Patterns: Scatter-compute-reduce paradigm
  • Load Balancing: Importance of equal work distribution
  • Numerical Methods: Different approaches to same mathematical problem
  • Accuracy vs Performance: Trade-offs between precision and computation time
  • Statistical vs Deterministic: Understanding different computational approaches
  • Scalability Analysis: How performance changes with more processes

❓ Frequently Asked Questions

  • Q: Why do Monte Carlo results vary each run? A: Random sampling produces different points each time
  • Q: Which method is more accurate? A: For the same n, the Riemann Sum is deterministic and usually closer; Monte Carlo error shrinks only as 1/√n
  • Q: Why use parallel computing for π? A: It is a classic educational example of a perfectly parallel algorithm
  • Q: How does this relate to real supercomputing? A: The same MPI patterns are used in weather, physics, and engineering simulations
  • Q: What's the optimal number of processes? A: It depends on the problem size - more work per process means better efficiency
  • Q: Why are some visualizations limited? A: For performance - drawing every point would slow the browser