Research Paper

Accurate Greeks for Autocallables

Production-Ready AAD with Smoothing: 90% reduction in computational cost for autocallable Greeks using Automatic Adjoint Differentiation and mathematical smoothing

90% compute cost reduction
1M paths = 10M path accuracy
Open source C++ implementation
Production-ready methodology

WBS Quantitative Finance Conference, Palermo • September 2025

View Presentation Slides →

Introduction

At the 21st annual WBS Quantitative Finance Conference in Palermo (September 2025), several presentations focused on computing autocallable Greeks, highlighting the active research in this space. MatLogica contributed a practical, production-ready implementation with concrete cost savings, backed by a complete open-source methodology.

Conference discussions underscored the urgent need for robust solutions in the $104B autocallable market, with participants ranging from major banks to specialized boutiques and hedge funds, all wrestling with the same Greeks calculation challenges.

The Problem in a Nutshell

Despite AAD adoption by ~20% of Tier 1 banks, the debate over computing autocallable Greeks remains unsettled. A vocal contingent argues that "AAD doesn't work for discontinuous payoffs," while others persist with expensive bump-and-revalue approaches.

Reality check

Bump-and-revalue leaves you stuck with 10M+ paths for stable correlation Greeks, noisy results out of the money, and uncertain bump sizes that introduce errors of up to 10%. And this is for a $104B annual market comprising 66% of structured products issuance!

The issue with AAD (on its own)

Pathwise AAD requires continuous payoffs, and standard application to digitals yields zero gradients almost everywhere. However, dismissing AAD entirely would mean losing the methodology's significant efficiency gains.

AAD without smoothing produces incorrect results for discontinuous payoffs

The Solution: Smoothing + AAD

The key insight is that you don't need to work with the discontinuous payoff directly. Replace Heaviside functions with smooth sigmoid approximations that preserve contract economics.

Smoothing Function

contLess(a, b, h) =
  h = 0:  1{a < b}
  h > 0:  ½ (x / √(1 + x²) + 1),  where x = (b - a) / (0.02 h)
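
A minimal C++ sketch of this function, assuming plain doubles (the repository's actual implementation may differ in types and details):

    #include <cmath>

    // Smooth approximation of the indicator 1{a < b} used in digital conditions.
    // h = 0 reproduces the hard step; h > 0 widens the transition around the barrier.
    double contLess(double a, double b, double h)
    {
        if (h <= 0.0)
            return (a < b) ? 1.0 : 0.0;            // exact digital
        const double x = (b - a) / (0.02 * h);     // scaled signed distance to the barrier
        return 0.5 * (x / std::sqrt(1.0 + x * x) + 1.0);
    }

As h shrinks, the transition region narrows and contLess approaches the hard indicator; a larger h trades a small, quantifiable price bias for smoother, lower-variance Greeks.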

Properties that matter

  • Area under the smoothed curve ≈ original digital (quantifiable price difference)
  • Meaningful (non-zero) gradients for pathwise AAD
  • Tunable parameter h for accuracy/stability tradeoff
  • Provable convergence as h → 0
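
For illustration, this is how the smoothing would enter a Phoenix-style coupon condition; couponLeg, worstOf, and barrier are hypothetical names, not the repository's API:

    // The hard condition 1{worstOf >= barrier} is replaced by its smooth
    // counterpart, so pathwise derivatives are no longer zero almost everywhere.
    double couponLeg(double worstOf, double barrier, double coupon, double h)
    {
        return coupon * contLess(barrier, worstOf, h);   // ~ coupon * 1{worstOf > barrier}
    }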

Implementation Approach

Three methods are benchmarked on a Phoenix Autocallable (a sketch of the finite-difference delta shared by the first two follows the repository link):

  • Base: no smoothing + finite differences (the typical current approach)
  • Smoothed: smoothing + finite differences
  • AADC: smoothing + AAD (our solution)
github.com/matlogica/QuantBench/tree/main/2AssetAutoCallable
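
A simplified sketch of that shared finite-difference delta (illustrative, not the repository's exact driver); priceFn would price either the hard or the smoothed payoff by Monte Carlo:

    #include <functional>

    // Central-difference bump-and-revalue delta. The choice of relBump is the
    // "uncertain bump size" mentioned earlier; AADC removes it entirely by
    // computing the derivative in a single adjoint sweep.
    double bumpDelta(const std::function<double(double)>& priceFn,
                     double spot, double relBump = 1e-2)
    {
        const double dS = spot * relBump;
        return (priceFn(spot + dS) - priceFn(spot - dS)) / (2.0 * dS);
    }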

Benchmark Results

Key findings for the Phoenix Autocallable test case:

  • 1M paths with AADC ≈ 10M paths traditional (90% reduction)
  • Stable correlation Greeks at 1M paths (previously required 10M+)
  • Orders of magnitude improvement in compute efficiency
  • Validated convergence across spot levels and parameters

Smoothed payoffs with AAD deliver the best accuracy-per-cost ratio. The correlation Greeks that required 10M paths with bump-and-revalue become tractable at 1M.

Delta Ladder Comparison

Delta ladder showing stable Greeks across spot levels with AADC method

Addressing the Obvious Questions

How do you know the smoothed payoff converges to the correct price?

The area difference between the smoothed and the actual digital is quantifiable: convolving it with the terminal density gives a measurable bound on the price deviation. In practice, the relative error is small for material payoffs and negligible when the digital value itself is small (i.e., precisely when it doesn't matter).
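
As a hedged illustration of that convolution argument, the sketch below (reusing the contLess function from earlier, with illustrative names) bounds the price impact of smoothing a single digital 1{S_T >= K} by integrating the absolute smoothing error against the Black-Scholes terminal density:

    #include <cmath>

    // Hypothetical sketch, not the repository's code: integrate
    // |contLess(K, s, h) - 1{s >= K}| against the lognormal density of S_T.
    double digitalSmoothingError(double S0, double K, double r,
                                 double sigma, double T, double h)
    {
        const double PI = 3.14159265358979323846;
        const double m  = std::log(S0) + (r - 0.5 * sigma * sigma) * T;
        const double v  = sigma * std::sqrt(T);

        // Lognormal density of S_T
        auto density = [&](double s) {
            const double z = (std::log(s) - m) / v;
            return std::exp(-0.5 * z * z) / (s * v * std::sqrt(2.0 * PI));
        };

        // Trapezoidal integration of |smoothed - hard| * density over a wide range
        const int    n  = 20000;
        const double lo = 1e-8, hi = 5.0 * S0;
        const double ds = (hi - lo) / n;
        double err = 0.0;
        for (int i = 0; i <= n; ++i) {
            const double s    = lo + i * ds;
            const double hard = (s >= K) ? 1.0 : 0.0;
            const double w    = (i == 0 || i == n) ? 0.5 : 1.0;
            err += w * std::fabs(contLess(K, s, h) - hard) * density(s) * ds;
        }
        return std::exp(-r * T) * err;   // discounted upper bound on |price difference|
    }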

What about higher-order Greeks?

Gamma is obtained by bump-and-revalue of the AAD delta, which is far more stable than double-bumping the price. Smoothing ensures these finite differences remain well-behaved for higher orders.
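
A minimal sketch of that idea; deltaFn stands in for the AAD-computed delta at a given spot (names are illustrative):

    #include <functional>

    // Gamma from a central difference of the AAD delta: only the delta is
    // bumped, not the price, which avoids double-bumping noise.
    double gammaFromAadDelta(const std::function<double(double)>& deltaFn,
                             double spot, double relBump = 1e-3)
    {
        const double dS = spot * relBump;
        return (deltaFn(spot + dS) - deltaFn(spot - dS)) / (2.0 * dS);
    }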

Multi-asset scaling?

The rotation-of-coordinates technique (Rakhmonov & Rakhmonov, 2019) provides smooth worst-of sampling. It is demonstrated here for 2 assets; the principle scales naturally to N underlyings.
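
The rotation-of-coordinates construction itself is in the repository; as a much simpler illustration of why a smooth worst-of helps, the hard min can be blended using the contLess weight (this is not the rotation technique, only a toy stand-in):

    // Toy smooth stand-in for min(s1, s2): NOT the rotation-of-coordinates
    // technique, just an illustration of replacing the hard selection with a
    // differentiable blend based on contLess.
    double smoothWorstOf(double s1, double s2, double h)
    {
        const double w = contLess(s1, s2, h);   // ~1 when s1 < s2, ~0 otherwise
        return w * s1 + (1.0 - w) * s2;         // approaches min(s1, s2) as h -> 0
    }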

Business Impact

For a $1B autocallable book:

  • 1% hedging accuracy improvement = $10M saved (industry mispricing averages 2.5%-4% of notional)
  • 90% compute cost reduction = millions in infrastructure savings
  • Faster Greeks = better risk response in volatile markets

Infrastructure reality: banks run tens of thousands of servers for Monte Carlo risk. One implementation reported calculations "that took hours or overnight now complete hundreds of times faster."

Open Source & Reproducibility

Everything available for validation:

Repository

  • Complete C++ implementation
  • Three calculation methods
  • Comprehensive unit tests
  • Benchmark infrastructure
  • Pre-computed results
View Repository →

Interactive Visualizer

  • Smoothing parameters vs. path counts
  • Ladder plots across spot levels
  • Convergence analysis
  • CPU time vs. accuracy tradeoffs
Open Visualizer →

Technical Details

Implementation uses

  • Rotation of coordinates for smooth worst-of calculation
  • Parameterized smoothing with tunable accuracy
  • Production-ready C++ with comprehensive testing
  • Extensible architecture for additional payoffs

Planned extensions (contributions welcome)

  • Quasi-Monte Carlo with Sobol + Brownian Bridge
  • Stochastic volatility models (Heston, SABR)
  • Additional non-smooth payoffs (barriers, range accruals)
  • GPU acceleration

Get Started

Try it yourself: clone the repository, run the benchmarks, and validate the results against your own implementations.

Contribute: pull requests are welcome for extensions, new test cases, or methodology improvements.