MatLogica AADC delivers the fastest Greeks computation for Python Monte Carlo, outperforming JAX, PyTorch, Autograd, XAD, and Enzyme-AD.
All benchmarks use 100 trades × 1,000 scenarios × 252 timesteps on identical hardware. AADC scales to 450x+ speedup with 16 threads.
Asian Option Monte Carlo - 100 trades × 1,000 scenarios × 252 timesteps (single-threaded)
* PyTorch times are for the vectorized implementation; per-path AD takes 1361s. Times shown as compilation + execution.
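To make the vectorized-versus-per-path distinction concrete, here is a minimal sketch of what a batched PyTorch pricer for this kind of benchmark could look like. It is our own illustration, not the benchmark code: the single flat-vol trade, the log-Euler scheme, and all parameter names are assumptions. The point is that one reverse-mode pass over the whole scenario batch replaces thousands of per-path backward calls.

```python
import math
import torch

# Illustrative sketch only (not the benchmark implementation): one Asian call
# priced over a batch of scenarios, with delta and vega taken by a single
# reverse-mode pass over the whole batch instead of one pass per path.
n_scenarios, n_steps = 1_000, 252
dt = 1.0 / n_steps
r, strike = 0.03, 100.0

spot0 = torch.tensor(100.0, requires_grad=True)   # spot
sigma = torch.tensor(0.20, requires_grad=True)    # flat volatility

z = torch.randn(n_scenarios, n_steps)             # one normal draw per step
# Log-Euler scheme, all scenarios advanced at once (vectorized over paths)
log_returns = (r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z
paths = spot0 * torch.exp(torch.cumsum(log_returns, dim=1))

# Arithmetic-average Asian call payoff, discounted and averaged over scenarios
payoff = torch.clamp(paths.mean(dim=1) - strike, min=0.0)
price = math.exp(-r) * payoff.mean()

# One backward pass yields both Greeks for the entire batch
delta, vega = torch.autograd.grad(price, [spot0, sigma])
print(float(price), float(delta), float(vega))
```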
AADC demonstrates near-linear scaling with thread count
AADC compilation time (45ms) is constant regardless of thread count. Execution speed scales near-linearly with thread count, providing a further 4.5x speedup from 1 to 16 threads.
Thread scaling measured on an Intel Core i9-12900K. Efficiency = actual speedup / (1-thread speedup × threads) × 100.
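As a quick illustration of that formula, a small Python sketch; the inputs below are placeholders chosen to show the arithmetic, not the measured benchmark data.

```python
def parallel_efficiency(speedup_1t: float, speedup_nt: float, threads: int) -> float:
    """Percent of ideal linear scaling: actual speedup / (1-thread speedup * threads) * 100."""
    return speedup_nt / (speedup_1t * threads) * 100.0

# Placeholder inputs, chosen only to show the arithmetic (not measured results)
print(parallel_efficiency(speedup_1t=100.0, speedup_nt=450.0, threads=16))  # 28.125
```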
Lines of code changed and model rewrite requirements
JAX does not support native Python control flow inside JIT-compiled functions:
- for loops must be replaced with jax.lax.fori_loop
- if/else must be replaced with jax.lax.cond
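For illustration, here is a hedged sketch of that rewrite for a single Asian-option path, written the way jax.jit requires, with the Python for loop and if/else replaced by lax primitives. This is our own example, not the benchmark implementation: the payoff, the flat-vol log-Euler model, and all names are assumptions.

```python
import jax
import jax.numpy as jnp

N_STEPS = 252
DT = 1.0 / N_STEPS
STRIKE = 100.0

@jax.jit
def asian_call_payoff(s0, sigma, r, z):
    """Arithmetic-average Asian call payoff along one path driven by normals z."""
    spot0 = jnp.asarray(s0, dtype=z.dtype)

    def body(i, carry):
        spot, running_sum = carry
        spot = spot * jnp.exp((r - 0.5 * sigma**2) * DT + sigma * jnp.sqrt(DT) * z[i])
        return spot, running_sum + spot

    # A Python `for i in range(N_STEPS):` loop has to become jax.lax.fori_loop
    _, total = jax.lax.fori_loop(0, N_STEPS, body, (spot0, jnp.zeros((), z.dtype)))
    avg = total / N_STEPS

    # A Python `if avg > STRIKE: ... else: ...` has to become jax.lax.cond
    return jax.lax.cond(avg > STRIKE, lambda a: a - STRIKE, lambda a: 0.0 * a, avg)

key = jax.random.PRNGKey(0)
z = jax.random.normal(key, (N_STEPS,))
# Reverse-mode gradients w.r.t. spot and vol give delta and vega for this payoff
delta, vega = jax.grad(asian_call_payoff, argnums=(0, 1))(100.0, 0.2, 0.03, z)
```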
Peak memory consumption during Greeks computation
Key insight: AADC uses 3-5x less memory than JIT-based alternatives (JAX, PyTorch, Enzyme-AD) while delivering faster performance.
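For readers who want to reproduce such a comparison, one simple way to capture peak memory is to read the process's resident-set high-water mark around the Greeks computation. The helper below is a sketch under our own assumptions (Linux semantics for ru_maxrss, a stand-in workload), not necessarily the methodology behind the figures above.

```python
import resource
import numpy as np

def peak_rss_mib() -> float:
    """Peak resident set size of this process so far (Linux reports ru_maxrss in KiB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

def compute_greeks() -> None:
    """Stand-in workload; replace with whichever library's Greeks computation is measured."""
    np.random.standard_normal((1_000, 252)).cumsum(axis=1)

baseline = peak_rss_mib()
compute_greeks()
print(f"peak RSS growth during the run: {peak_rss_mib() - baseline:.1f} MiB")
```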
Key factors for each AAD library
Production-grade performance with linear thread scaling
Loops require rewrite to jax.lax.fori_loop
Clang-only; requires JAX 0.4.30 (version locked)
Heavy compilation overhead; requires tensor rewrite
Last release 2022; effectively unmaintained
See how AADC can deliver 127-450x faster Greeks computation for your Python Monte Carlo models with minimal code changes.
All benchmarks executed on identical hardware for fair comparison