Simulation 1: Neural Architecture Pipeline

Select a base model and observe the feature extraction pipeline. Watch how high-level features are distilled from raw pixels.

Total Layers: 22
Frozen Layers: 16
Trainable Layers: 6
Total Params: 20.2M
Trainable Params: 0.2M
Non-Trainable Params: 20.0M
Expected Accuracy: (updates after model selection)
Prediction: (updates after model selection)
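The frozen/trainable split above comes down to a simple parameter count. A minimal sketch, assuming a hypothetical model whose 22 layers are grouped into a few blocks (the names and per-block counts below are illustrative, not the simulator's actual model):

```python
# Hypothetical layer blocks: (name, parameter count, frozen?)
layers = [
    ("conv_block_1",  2_000_000, True),
    ("conv_block_2",  8_000_000, True),
    ("conv_block_3", 10_000_000, True),
    ("dense_head",      200_000, False),  # only the new head is trained
]

total = sum(n for _, n, _ in layers)
trainable = sum(n for _, n, frozen in layers if not frozen)
frozen = total - trainable

print(f"Total params:     {total / 1e6:.1f}M")      # 20.2M
print(f"Trainable params: {trainable / 1e6:.1f}M")  # 0.2M
print(f"Frozen params:    {frozen / 1e6:.1f}M")     # 20.0M
```

With only the head trainable, roughly 1% of the parameters receive gradient updates, which is why feature extraction is so cheap.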

Simulation 2: Feature Extraction – Hierarchical Learning

Explore what a CNN learns at each layer — from low-level edges to high-level flower shapes. Compare random, pretrained, and fine-tuned representations.

Pipeline stages: Input Image → Edge Detection → Gradient Blobs → Texture Patterns → Object Parts → Shape Recognition → Prediction

Feature Quality: (updates after running)
Select a mode and run the extraction to see how the CNN processes the image.
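The earliest stage in the pipeline behaves much like a classic edge filter. As an illustration (not the simulator's code), a Sobel-style convolution over a tiny two-tone image picks out the vertical edge:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D sliding-window filter (cross-correlation)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernel: responds to horizontal intensity gradients (vertical edges).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

edges = conv2d_valid(img, sobel_x)
print(edges)  # non-zero only where the window straddles the edge
```

A CNN's first convolutional layer learns filters of exactly this shape from data; the later stages combine their responses into blobs, textures, parts, and finally whole shapes.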

Simulation 3: Freezing vs Fine-Tuning – Training Dynamics

Adjust freezing, learning rate, and dataset size to see how they affect training and validation curves. Watch for overfitting!

Metrics: Best Val Acc, Lowest Loss, Convergence, Overfit Risk

Curves: Train Accuracy, Validation Accuracy, Train Loss, Validation Loss

Overfitting Indicator: None
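One simple way to drive the overfitting indicator above is the gap between training and validation accuracy over the last few epochs. A sketch with made-up curves and arbitrary thresholds:

```python
def overfit_risk(train_acc, val_acc, window=3, threshold=0.10):
    """Flag overfitting when train accuracy pulls ahead of validation
    accuracy by more than `threshold` over the last `window` epochs.
    Thresholds are illustrative, not tuned."""
    gaps = [t - v for t, v in zip(train_acc[-window:], val_acc[-window:])]
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap > threshold:
        return "High"
    if mean_gap > threshold / 2:
        return "Moderate"
    return "None"

# Synthetic curves: training keeps improving, validation plateaus.
train = [0.60, 0.75, 0.85, 0.92, 0.96]
val   = [0.58, 0.70, 0.76, 0.78, 0.78]
print(overfit_risk(train, val))  # mean gap ~0.14 -> "High"
```

Smaller datasets and higher learning rates widen this gap faster, which is exactly the behavior the sliders expose.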

Simulation 4: Representation Space – Embedding Geometry

Watch how flower embeddings evolve from chaotic blobs to tight, separable clusters as the model trains. Compare random, pretrained, and fine-tuned representations.

Metrics: Separability, Intra-Cluster Spread, Inter-Cluster Gap

Manifold Learning

Wait for animation or select a mode to explore conceptual insights.

Clustering Geometry

Higher separability indicates the model has learned distinct features for each class, enabling linear separation in high-dimensional space.
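These geometry metrics can be made concrete: intra-cluster spread is the mean distance of points to their class centroid, inter-cluster gap is the distance between centroids, and separability is their ratio. A toy sketch with two synthetic 2-D clusters (illustrative, not the simulator's embeddings):

```python
import numpy as np

def cluster_stats(points_by_class):
    centroids = [pts.mean(axis=0) for pts in points_by_class]
    # Intra-cluster spread: mean distance of points to their own centroid.
    spread = float(np.mean([
        np.linalg.norm(pts - c, axis=1).mean()
        for pts, c in zip(points_by_class, centroids)
    ]))
    # Inter-cluster gap: mean pairwise distance between centroids.
    gaps = [np.linalg.norm(centroids[i] - centroids[j])
            for i in range(len(centroids))
            for j in range(i + 1, len(centroids))]
    gap = float(np.mean(gaps))
    return spread, gap, gap / spread  # higher ratio = more separable

rng = np.random.default_rng(0)
cls_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
cls_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
spread, gap, separability = cluster_stats([cls_a, cls_b])
print(f"spread={spread:.2f}  gap={gap:.2f}  separability={separability:.1f}")
```

Training tightens each cluster (smaller spread) while pushing classes apart (larger gap), so the separability ratio grows as embeddings improve.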

Simulation 5: Gradient Flow – Stability Explorer

See how gradients flow through frozen vs trainable layers. Understand why low learning rates prevent catastrophic forgetting in deep networks.

Metrics: Max Grad Norm, Avg Update Size, Stability Score, Forgetting Risk

Layer-wise Gradient Magnitude

Layer-wise Weight Changes

Gradient Flow

Gradient flow determines how much each layer learns. If gradients are too small (vanishing) or too large (exploding), training becomes unstable.
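A practical way to read the layer-wise chart: compute each layer's gradient norm and flag values outside a healthy range. The thresholds below are illustrative:

```python
def grad_health(grad_norms, vanish=1e-4, explode=1e2):
    """Classify each layer's gradient norm (thresholds are illustrative)."""
    return ["vanishing" if g < vanish
            else "exploding" if g > explode
            else "healthy"
            for g in grad_norms]

# Made-up per-layer gradient norms; early layers are smallest because
# gradients shrink as they are backpropagated through many layers.
norms = [5e-6, 3e-4, 1e-2, 0.5, 2.0]
print(grad_health(norms))
```

Freezing early layers sidesteps the vanishing end of this spectrum entirely: frozen layers receive no updates, so only the healthy, late-layer gradients matter.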

Weight Dynamics

Large weight changes in pretrained layers can lead to "Catastrophic Forgetting," where the model loses its general feature extraction capabilities.
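The link between learning rate and forgetting risk can be sketched directly: with plain SGD, one step changes a layer's weights by the learning rate times the gradient, so the relative disturbance to pretrained weights scales linearly with the learning rate. The numbers below are illustrative:

```python
def relative_update(lr, grad_norm, weight_norm):
    """Approximate relative change to a layer's weights per SGD step."""
    return lr * grad_norm / weight_norm

# Same pretrained layer, two learning rates.
high = relative_update(lr=1e-2, grad_norm=5.0, weight_norm=10.0)
low  = relative_update(lr=1e-5, grad_norm=5.0, weight_norm=10.0)
print(f"lr=1e-2 -> {high:.2%} weight change per step")
print(f"lr=1e-5 -> {low:.4%} weight change per step")
```

Dropping the learning rate by three orders of magnitude cuts the per-step disturbance by the same factor, which is why fine-tuning recipes use learning rates far below those used for training from scratch.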

Stability Meter

Stable

Catastrophic Forgetting Risk: Low

Simulation 6: Domain Similarity – Strategy Decision Lab

Explore when transfer learning works best. Adjust domain similarity, data size, and compute budget to see real-time strategy recommendations.

Domain Scenario

🖼️ Source: ImageNet (1000 classes) → 🏥 Target: Medical (X-Ray)

Adjust the sliders to see a personalized transfer learning strategy recommendation.

Strategy Matrix

Strategies: Scrap & Retrain, Fine Tune, Static Extractor, Hybrid Tuning, Optimal
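The strategy matrix follows the standard transfer-learning heuristic: similar domain with little data favors a static feature extractor; similar domain with lots of data favors fine-tuning; a dissimilar domain pushes toward retraining more of the network. A hedged sketch of such a decision rule (the thresholds are arbitrary, not the simulator's):

```python
def recommend_strategy(domain_similarity, dataset_size):
    """domain_similarity in [0, 1]; dataset_size = labeled target examples.
    Thresholds below are illustrative, not tuned."""
    small = dataset_size < 5_000
    if domain_similarity >= 0.6:
        # Pretrained features already fit the target domain.
        return "Static Extractor" if small else "Fine Tune"
    # Dissimilar domain: pretrained features help less.
    return "Hybrid Tuning" if small else "Scrap & Retrain"

# ImageNet -> medical X-rays: low similarity, modest labeled data.
print(recommend_strategy(domain_similarity=0.2, dataset_size=2_000))
```

With little data in a dissimilar domain, the hybrid option (freeze early generic layers, retrain later ones) hedges between reusing low-level features and adapting high-level ones.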

Performance Radar