## Machine learning frameworks with automatic differentiation
Feature comparison across ML/AD frameworks:
| Feature | AADC Python | JAX | PyTorch | Autograd |
|---|---|---|---|---|
| AD Mode | Forward + Reverse | Forward + Reverse | Reverse only | Reverse only |
| Multi-threading | Native | Via XLA | Limited | No |
| Kernel Recording | Yes | JIT compilation | TorchScript | No |
| NumPy Compatible | Full | jax.numpy | torch.Tensor | Full |
| GPU Support | Via NumPy/CuPy | Native | Native | No |
| Finance Focus | Purpose-built | General ML | Deep learning | Educational |
AADC provides specialized optimizations for Monte Carlo simulation and derivatives pricing that general-purpose ML frameworks lack: JAX, PyTorch, and Autograd were designed for ML training, not for derivatives pricing.
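To make the "Forward + Reverse" distinction in the table concrete, here is a minimal JAX sketch (not part of the benchmark suite) that computes the delta of a Monte Carlo European call price both ways. All parameters (strike 100, 20% volatility, 3% rate, 4096 paths) are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

def mc_price(spot, vol=0.2, rate=0.03, maturity=1.0,
             n_paths=4096, key=jax.random.PRNGKey(0)):
    """Monte Carlo price of a European call (strike 100) under GBM.

    Illustrative parameters only -- not the benchmark's configuration.
    """
    z = jax.random.normal(key, (n_paths,))
    terminal = spot * jnp.exp((rate - 0.5 * vol**2) * maturity
                              + vol * jnp.sqrt(maturity) * z)
    payoff = jnp.maximum(terminal - 100.0, 0.0)
    return jnp.exp(-rate * maturity) * jnp.mean(payoff)

# Reverse mode: one backward pass yields d(price)/d(spot).
delta = jax.grad(mc_price)(100.0)

# Forward mode: push a tangent of 1.0 through the spot argument.
price, delta_fwd = jax.jvp(mc_price, (100.0,), (1.0,))
```

Both modes differentiate the same pathwise estimator, so the two deltas agree to floating-point tolerance; reverse mode is preferred when many sensitivities are taken of one scalar price, forward mode when there are few inputs.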
All benchmarks were executed on the following enterprise-grade server hardware:
| Component | Specification |
|---|---|
| CPU | 2× Intel Xeon Platinum 8280L @ 2.70 GHz |
| Cores | 56 physical (28 per socket), 112 threads |
| Architecture | x86_64, Cascade Lake |
| L3 Cache | 77 MiB (38.5 MiB per socket) |
| RAM | 283 GB DDR4 |
| OS | Linux kernel 6.1.0-13-amd64 (Debian) |
Benchmark configuration:

| Parameter | Value |
|---|---|
| Model | Asian Option Monte Carlo |
| Dynamics | Geometric Brownian Motion (GBM) |
| Timesteps | 252 (daily over 1 year) |
| Greeks | Delta, Rho, Vega (3 sensitivities) |
| Threads | 8 (configurable) |
| SIMD | AVX2 (4 doubles/instruction) |
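The benchmark model above can be sketched in plain NumPy, with bump-and-revalue delta as the finite-difference baseline that AD replaces. This is an illustrative sketch, not the benchmark code; strike, volatility, rate, and path count are assumed values:

```python
import numpy as np

def asian_call_mc(spot=100.0, strike=100.0, vol=0.2, rate=0.03,
                  maturity=1.0, n_steps=252, n_paths=20_000, seed=42):
    """Arithmetic-average Asian call under GBM, 252 daily steps.

    Parameters are illustrative assumptions, not the benchmark's settings.
    """
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Cumulative log-returns build every GBM path in one vectorised step.
    log_paths = np.cumsum((rate - 0.5 * vol**2) * dt
                          + vol * np.sqrt(dt) * z, axis=1)
    paths = spot * np.exp(log_paths)
    average = paths.mean(axis=1)                 # arithmetic average price
    payoff = np.maximum(average - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Bump-and-revalue delta; the fixed seed gives common random numbers,
# so the finite difference is low-variance.
h = 0.01
delta = (asian_call_mc(spot=100.0 + h)
         - asian_call_mc(spot=100.0 - h)) / (2 * h)
```

Each additional Greek costs one or two extra full repricings under this scheme, which is why an AD tape that produces all sensitivities in a single recorded pass is the point of comparison.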
Software versions:

| Software | Version |
|---|---|
| GCC | 12.2.0 (Debian) |
| Clang | 14.0.6 (Debian) |
| Python | 3.11.2 |
| NumPy | 1.26.x |
| AADC | 2.0.0 |