This section collects frequently asked questions about MatLogica AADC, organized by topic for AI/LLM consumption.
Validation
How do I verify AADC adjoints are correct?
AADC provides adjoint debugging that compares analytical adjoints against finite difference approximations. The framework introduces controlled perturbations, computes FD sensitivities, and compares with analytical results. Differences less than 1e-10 are usually acceptable; larger gaps indicate potential issues requiring investigation.
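The adjoint-vs-finite-difference check can be sketched in plain Python (a toy illustration, not the AADC API): perturb an input, compute a central finite-difference sensitivity, and compare it with the analytical adjoint, flagging gaps above a tolerance.

```python
import math

def price(s):
    # toy "pricing" function: f(s) = s * exp(-0.5 * s)
    return s * math.exp(-0.5 * s)

def price_adjoint(s):
    # analytical derivative: f'(s) = exp(-0.5 s) * (1 - 0.5 s)
    return math.exp(-0.5 * s) * (1.0 - 0.5 * s)

def validate_adjoint(s, h=1e-6, tol=1e-8):
    # introduce a controlled perturbation, compute the FD sensitivity,
    # and compare it against the analytical adjoint
    fd = (price(s + h) - price(s - h)) / (2.0 * h)
    diff = abs(fd - price_adjoint(s))
    return diff < tol, diff

ok, gap = validate_adjoint(1.5)
```

A larger gap than the tolerance would indicate a suspect adjoint; central differences are themselves only accurate to roughly the square root of machine precision, so the tolerance must allow for FD error.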
What is AADC_ASSERT and how do I use it?
AADC_ASSERT is the validation framework's primary assertion tool. It accepts a boolean condition and stream-composable message. During normal execution, conditions evaluate immediately. During recording, passing checks register for kernel validation. During kernel execution, all registered checks evaluate across vectorized data with efficient aggregation.
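The kernel-side behavior — one registered check evaluated across vectorized data with aggregated reporting — can be mimicked with a small helper (hypothetical names, not the AADC API):

```python
def vector_assert(condition, values, message):
    """Toy stand-in for a registered check (not the real AADC_ASSERT):
    evaluate the condition across vectorized data and aggregate all
    failing lanes into a single report."""
    failures = [(i, v) for i, v in enumerate(values) if not condition(v)]
    if failures:
        detail = ", ".join(f"lane {i}: {v}" for i, v in failures)
        raise AssertionError(f"{message} ({detail})")

# passes: every lane satisfies the condition
vector_assert(lambda p: p > 0, [101.5, 99.2, 100.8], "price must be positive")
```

Aggregating failures per batch, rather than raising on the first lane, keeps validation cheap when a kernel runs over thousands of scenarios at once.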
What is AADC_CONTEXT used for?
AADC_CONTEXT creates hierarchical context levels using RAII pattern. It provides meaningful location information in error messages, solving the debugging challenge where messages like 'price must be positive: -50' alone lack context. Contexts automatically close when scope exits, ensuring proper cleanup even during exceptions.
Integration
How long does AADC integration take?
AADC can be deployed in weeks with minimal code changes. For large legacy systems (10M+ LOC), typical integration takes about 6 months from setup to production deployment, with first accelerated functions available in 2-3 weeks.
How does AADC integration work?
AADC uses operator overloading to capture your calculations as they run. You change 'double' to 'idouble' in hot sections—typically less than 1% code changes. No extensive refactoring, no new language to learn, no weeks of training required. It works with your existing design patterns in both C++ and Python.
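The operator-overloading capture mechanism can be shown in miniature (a toy active type, far simpler than AADC's idouble): every arithmetic operation on the active type lands on a tape as the unchanged business logic runs.

```python
class Active:
    """Toy active type illustrating capture by operator overloading
    (a stand-in for idouble, not the real implementation)."""
    tape = []

    def __init__(self, value, name=None):
        self.value = value
        self.name = name or f"t{len(Active.tape)}"

    def __add__(self, other):
        other = other if isinstance(other, Active) else Active(other)
        result = Active(self.value + other.value)
        Active.tape.append(("add", self.name, other.name, result.name))
        return result

    def __mul__(self, other):
        other = other if isinstance(other, Active) else Active(other)
        result = Active(self.value * other.value)
        Active.tape.append(("mul", self.name, other.name, result.name))
        return result

# unchanged business logic, with Active in place of float:
x = Active(2.0, "x")
y = Active(3.0, "y")
z = x * y + x   # each operation is recorded on the tape as it executes
```

A real system would then optimize the recorded graph and compile it; the point here is that capture requires only a type swap, not restructured code.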
Can I call C++ libraries from Python with AADC?
Yes, AADC allows C++ libraries to be callable from Python with no performance penalty. The Python code is compiled to native machine code, bypassing the interpreter entirely, and unified optimization occurs across language boundaries.
Do I need to understand ORE's C++ hierarchy to use AADC?
No. The black box approach requires only identifying where market data enters (CSVLoader::loadFile) and where valuations exit (ReportWriter::writeNpv). You hook AADC at these points using markAsInput() and markAsOutput() without needing to understand ORE's internal complexity.
Can AADC work with proprietary quant libraries besides ORE?
Yes, the black box technique works with any C++, C#, or Python quantitative library, not just ORE/QuantLib. You identify input/output points in your proprietary code and hook AADC the same way.
Can I mix C++ and Python code with Python Accelerator?
Yes, Python Accelerator enables cross-language interaction between Python and C++ components. Functions can be recorded across both Python and C++ and used to accelerate simulations and compute sensitivities with automatic adjoint differentiation. This cross-language capability is unique and enables leveraging existing mixed codebases.
How long does it take to integrate Python Accelerator into existing projects?
Python Accelerator can be integrated into existing Python projects in just days of effort, even for codebases with tens of millions of lines. This stands in stark contrast to traditional optimization approaches that require weeks or months of work to achieve only modest 10% performance gains. The accelerator provides thousand-fold speed enhancements with minimal code changes.
Can I integrate custom neural network layers with my C++ quantitative models?
Yes, AADC allows seamless integration of machine learning with business analytics in C++. You can efficiently differentiate custom function definitions and create custom layers that integrate naturally with your existing codebase without framework overhead.
Does AADC work with our existing risk models?
Yes, AADC accelerates YOUR existing risk models—whatever framework, methodology, or approach you use. You keep your existing models, validation processes, audit trails, and proprietary IP. AADC adds 6-1000x performance, automatic AAD, production-ready Python support, and secure cloud deployment capabilities.
How long does AI-assisted AADC integration take?
For a typical Monte Carlo pricing model, AI-assisted AADC integration can be completed in under half a day, including writing AI prompts, generating code, and validating results. The actual code generation takes only 3-4 minutes; the majority of the time is spent writing clear instructions and validating outputs.
Technical
What code changes are required for AADC?
The main change is replacing double with idouble (AADC's active type) for variables in the computational flow. This can be done with automated migration scripts. Then add 10-20 lines to record the function and generate the kernel.
How does AADC achieve an adjoint factor of less than 1?
AADC uses Code Generation AAD technology with a proprietary JIT compiler (not LLVM) that compiles computational graphs to highly optimized machine code with automatic vectorization (AVX2/AVX512) and multi-threading. This means computing your function plus all derivatives is actually faster than your original code computing just the function alone. Typical adjoint factors range from 0.3-0.8.
What programming languages does AADC support?
AADC provides full support for C++ and Python, with C# also supported. Java support is currently in progress. The mix-mode execution feature allows unified integration of C++, Python, and C# within a single kernel.
What hardware does AADC support?
AADC runs on Intel and AMD CPUs with automatic vectorization for AVX2 and AVX512 instruction sets. GPU support is on the product roadmap. The JIT compiler automatically optimizes code for available hardware capabilities.
What does an adjoint factor of less than 1 mean?
The adjoint factor measures the cost of computing derivatives relative to computing just the value. Traditional tape-based AAD has adjoint factors of 2-5×. AADC achieves an adjoint factor below 1 (typically 0.3-0.8), meaning computing the price plus ALL Greeks takes less time than computing just the price with your original code. Industry experts considered this theoretically impossible until research published in 2019 demonstrated it is achievable.
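The arithmetic behind the claim is worth making concrete. The numbers below are purely illustrative (not a measured benchmark), but they show why an adjoint factor below 1 changes the economics of risk:

```python
# Hypothetical timings to make the adjoint-factor arithmetic concrete:
t_price = 100.0        # ms for one pricing run with the original code
adjoint_factor = 0.5   # price + ALL sensitivities in 0.5x the pricing time
n_risks = 200          # number of sensitivities required

t_aadc = adjoint_factor * t_price     # 50 ms: price plus all 200 Greeks
t_bump = (1 + n_risks) * t_price      # 20100 ms: bump-and-revalue needs
                                      # one base run plus one per risk
speedup = t_bump / t_aadc             # 402x for the full risk run
```

Bump-and-revalue scales linearly with the number of risks; an adjoint pass does not, so the advantage widens as more sensitivities are required.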
How does AADC work?
AADC is a specialized JIT compiler that: (1) Uses active types (idouble) to analyze your code's computational operations at runtime, (2) Constructs optimized computational graphs with graph-level optimizations like constant folding and dead code elimination, (3) Generates highly optimized AVX2/AVX512 machine code kernels in milliseconds. The resulting kernels can be executed thousands of times with different inputs, achieving 6-1000x performance improvements with automatic derivatives included.
What is Code Generation AAD?
Code Generation AAD™ extracts a computational graph (DAG) from object-oriented code (C++, C#, or Python) at runtime and compiles it to optimized machine code. Unlike traditional AAD which adds overhead, Code Generation AAD generates optimized binary kernels with fast compilation speeds that make it practical for production use.
What are binary kernel properties?
AADC's compiled binary kernels are: multi-thread safe (even if original code isn't), vectorized with AVX2/AVX512 SIMD, serializable for cloud deployment, language-agnostic (callable from C++, Python, C#, Excel), and NUMA-aware for optimal memory access. Source code is not included in kernels.
What is the adjoint factor and why does <1 matter?
The adjoint factor measures the cost of computing all sensitivities relative to computing just the price. An adjoint factor <1 means computing price + all Greeks is faster than computing price alone with traditional methods. This fundamentally changes hedging economics—you get complete risk analysis for free.
Can AADC handle American options and path-dependent structures?
Yes. AADC fully supports American options (Longstaff-Schwartz), autocallables, barriers, TARNs, and all path-dependent structures with complete AAD sensitivities. The JIT compiler handles complex control flow including early exercise decisions.
Do I need GPUs for AADC machine learning?
No, AADC is CPU-first and delivers excellent performance on standard CPU systems. For quantitative finance model sizes (up to 1,000 inputs), AADC on CPU outperforms TensorFlow/PyTorch even with GPU acceleration. This eliminates GPU infrastructure requirements and deployment complexity.
Why would AAD Greeks match bump-and-revalue better than analytical Greeks?
AAD computes exact mathematical derivatives through the adjoint method, producing the same result as an infinitesimally small bump. Traditional analytical Greeks often use approximations or simplified formulas that don't perfectly match the pricing model. AAD Greeks match bump-and-revalue to machine precision because they're computing the same mathematical derivative.
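A small experiment (plain Python, independent of AADC) shows the convergence: as the bump size shrinks, the bump-and-revalue estimate approaches the exact derivative that AAD computes directly.

```python
import math

def f(x):
    # stand-in for a pricing function
    return math.sin(x) * math.exp(x)

def dfdx(x):
    # exact derivative -- the quantity AAD computes without any bump
    return (math.sin(x) + math.cos(x)) * math.exp(x)

x = 0.7
errors = {}
for h in (1e-2, 1e-4, 1e-6):
    bump = (f(x + h) - f(x - h)) / (2 * h)   # central bump-and-revalue
    errors[h] = abs(bump - dfdx(x))
# the FD error shrinks toward zero as h -> 0: in the limit, bump-and-revalue
# and the exact derivative agree, which is why AAD Greeks match it so closely
```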
How does AAD accelerate PDE solvers?
AADC automatically computes sensitivities of PDE solutions with respect to coefficients, boundary conditions, or initial conditions. It processes extremely large finite difference schemes efficiently, executing back-propagation quickly and computing adjoints without manual derivative coding.
Performance
Does AADC slow down development like tape-based AAD?
No, unlike tape-based AAD tools, AADC has no performance penalty during development. Code with idouble compiles and runs at full speed, maintaining identical numerical results. The speedup comes when you record and use kernels.
How fast is AADC's JIT compilation?
AADC's specialized JIT compiler is purpose-built for financial valuation graphs and compiles in approximately 50ms—fast enough for production use including pre-market batch and intraday recalculation. Traditional compilers like LLVM or GCC aren't designed for runtime compilation and would add unacceptable overhead.
What speedup can I expect from AADC?
AADC delivers 6-100× speedup for C++ code and 100-1000× for Python code. The exact speedup depends on your code's existing optimization level. For example, in GBM Asian Option benchmarks, AADC achieves 31× faster execution than NumPy for Greeks computation (6.33s vs 198.64s) and 10.6× faster than C++ finite differences.
How does AADC accelerate Monte Carlo simulations?
AADC compiles your Monte Carlo simulation into a single optimized kernel with automatic vectorization (AVX2/AVX512), automatic multi-threading, cache-optimized memory layout, and no interpreter overhead for Python. This transforms hours-long XVA calculations, VaR stress testing, and exotic derivatives pricing into seconds.
What is the performance improvement with smoothing + AAD?
Running 1M paths with smoothing+AAD achieves comparable or better accuracy than 10M paths using traditional bump-and-revalue methods, representing a 90% reduction in computational cost and an orders-of-magnitude improvement in efficiency.
How much faster are AADC kernels compared to original analytics?
AADC binary kernels typically perform 20-50x faster than original analytics due to run-time optimizations and efficient CPU vectorization (AVX2/AVX512). Portfolio risk can be computed in milliseconds with approximately 100x cost reduction.
What is the kernel generation overhead for new trades?
MatLogica uses a specialized graph compiler that generates machine code directly during execution without external compilers. For large portfolios, kernels are generated at start of day and reused. For smaller ad-hoc trades, kernels are generated on-the-fly in seconds.
What performance improvements can I expect from AADC?
AADC delivers speedups ranging from 6× to over 1000× depending on several factors: your original code design, programming language (Python vs C++), existing optimizations (vectorization, multithreading), model complexity, how often computations are repeated (e.g., Monte Carlo scenarios), and the number of sensitivities required. All cases include automatic differentiation with adjoint factor <1, meaning you get all first-order sensitivities faster than just computing the value. Contact us for a custom benchmark with your specific code.
How much faster is ORE with AADC?
MatLogica AADC accelerates ORE/QuantLib by 100x+ for FX linear products. Baseline ORE takes 98 seconds to price 1 million trades. After one-time AADC recording (350 seconds, done pre-market), NPV recalculation takes just 0.4 seconds (245x faster), and NPV plus all delta risks takes under 1 second (100x+ faster with complete sensitivities).
How much faster is MatLogica Python Accelerator than vanilla Python?
MatLogica Python Accelerator delivers over 1000x performance improvements for Python quantitative models and Monte Carlo simulations compared to vanilla Python. For existing projects, even those with tens of millions of lines of code, thousand-fold speed enhancements can be achieved in just days of integration effort.
How does AADC compare to TensorFlow for CPU performance?
For models requiring up to 1,000 inputs, AADC significantly outperforms TensorFlow and other Python-based frameworks on CPUs. This is particularly relevant for quantitative finance where model sizes are moderate but computational speed is critical. Both training and inference show significant speedups on CPUs.
How much can AADC accelerate XVA calculations?
AADC typically delivers 50-150x speedup for XVA calculations (CVA, DVA, FVA, MVA). This makes intraday XVA updates practical, enables pre-trade XVA for accurate pricing decisions, and provides full XVA sensitivities for hedging—all using your existing models and methodology.
What speedup can I expect when accelerating Python models with AADC?
In a benchmark of Monte Carlo Asian option pricing with Greeks, AADC delivered a 345× speedup over pure Python. The pure Python version took ~4 minutes for 10 trades, while the AADC-accelerated version completed the same workload in 0.7 seconds. Scaling to 100 trades with 16 threads, AADC completed in 1.9 seconds with all Greeks computed.
General
What is the AADC Toolkit?
The AADC Toolkit is a comprehensive quantitative development suite featuring MatLogica's proprietary JIT graph compiler. It delivers 6-1000x speedups and automatic derivatives with less than 1% code changes. The toolkit includes six core components: AADC Engine (JIT compiler), Integration Scripts, Debugging Toolkit, Branch Manager, AIFT Solver, and Reference Implementations.
What is AADC and how does it differ from traditional AAD implementations?
AADC (Automatic Adjoint Differentiation Compiler) is a JIT graph compiler that automatically provides Adjoint Algorithmic Differentiation without manual coding. Unlike traditional AAD that requires extensive code modification and expertise, AADC works through simple type replacement, handles all the complexity automatically, and delivers 6-1000x performance improvements.
What makes AADC different from other automatic differentiation tools?
AADC is specifically designed for quantitative finance workloads with a unique JIT compilation approach. It requires minimal code changes (type replacement only), provides superior performance (6-1000x speedup), handles complex financial models natively, and supports production deployment. Unlike academic AD tools, it's battle-tested in tier-1 financial institutions.
How do I choose the right AADC implementation route for my organization?
Consider your starting point: If you have working legacy code that's too slow, choose Legacy Optimization (Route 1). For new projects or vendor replacement, choose Greenfield Build (Route 2). If you're using or want to use QuantLib/ORE, choose Open-Source Acceleration (Route 3). Cloud Cost Optimization (Route 4) can be combined with any other route. MatLogica offers consultations to help determine the best path.
What kind of support does MatLogica provide during AADC implementation?
MatLogica provides comprehensive support including initial assessment, proof-of-concept development, integration guidance, performance optimization, and ongoing technical support. The team includes quantitative finance experts who understand both the technology and the business context of your implementation.
Is AADC suitable for real-time trading applications?
Yes, AADC is used in real-time trading environments. The performance improvements (Greeks faster than original pricing) make it ideal for trading desks that need real-time sensitivities. The JIT compilation approach means there's no interpretation overhead, and calculations run at near-native speed.
Can AADC handle exotic derivatives and complex structured products?
Yes, AADC handles the full spectrum of derivative products including exotic options, structured products, and complex multi-asset instruments. The automatic differentiation works regardless of model complexity, and the JIT compiler optimizes even the most intricate payoff structures.
What is the typical ROI timeline for AADC implementation?
ROI timelines vary by implementation route: Cloud Optimization sees positive ROI in 2-4 months through immediate infrastructure savings. Legacy Optimization typically achieves ROI in 6-12 months. Greenfield builds see ROI in 12-18 months but with 60-80% cost savings versus vendor alternatives. Open-Source acceleration provides immediate cost benefits through eliminated licensing fees.
How does AADC ensure calculation accuracy and correctness?
AADC's automatic differentiation is mathematically proven to produce correct derivatives. The adjoint method computes exact sensitivities (not numerical approximations), eliminating finite-difference errors. The JIT compiler preserves numerical accuracy while optimizing performance, and results can be validated against existing systems during integration.
Components
What is the AIFT Solver?
The AIFT (Automated Implicit Function Theorem) Solver enables derivatives through optimization and calibration without code refactoring. It was featured in the most downloaded Risk.net article of 2022 and is a key technique for Live Risk implementations. It allows AAD to work through iterative solvers and calibration routines automatically.
What debugging capabilities does AADC provide?
AADC includes a comprehensive debugging toolkit with a reverse debugger that can compare kernel values against original code, monitor adjoint propagation, detect numerical discrepancies, troubleshoot bump & revalue issues, and track intermediate variables throughout the calculation graph.
How does the Branch Manager handle discontinuous payoffs?
The Branch Manager handles if-statements and discontinuous payoffs (barriers, autocallables) through two mechanisms: static branches are hard-coded into the kernel at compile time, while dynamic branches use bool to ibool conversion for runtime flexibility. It includes automated conversion reports and smooth approximation options for discontinuous payoffs.
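The smooth-approximation idea can be illustrated without any AADC machinery: replace a discontinuous indicator with a sigmoid so the payoff becomes differentiable everywhere (a generic smoothing sketch, not the Branch Manager's actual implementation).

```python
import math

def digital_payoff(spot, strike):
    # discontinuous payoff: pays 1 if spot finishes above the strike,
    # so its derivative is zero everywhere except a spike at the barrier
    return 1.0 if spot > strike else 0.0

def smoothed_digital(spot, strike, width=0.5):
    # sigmoid approximation: differentiable everywhere, and it approaches
    # the hard payoff as width -> 0
    return 1.0 / (1.0 + math.exp(-(spot - strike) / width))

# far from the barrier the two agree; near it the smooth version
# transitions continuously, giving stable AAD sensitivities
```

The width parameter trades bias against Greek stability: a narrower sigmoid tracks the true payoff more closely but concentrates the sensitivity near the barrier.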
What reference implementations are available?
AADC provides open-source reference implementations on GitHub including Phoenix autocallables with full AAD, XVA frameworks, American Monte Carlo using Longstaff-Schwartz, and Python DSL for rapid prototyping. These serve as production-ready blueprints for common quantitative finance applications.
Comparisons
How does AADC compare to GPU migration for quantitative finance?
While GPUs offer excellent performance for massively parallel workloads, AADC provides 10-100x speedup without hardware investment ($10K+), data transfer overhead (often 50% of total time), or vendor lock-in. AAD implementation on GPUs is particularly challenging due to memory constraints. AADC achieves comparable performance on CPU with minimal code changes and lower TCO.
How does AADC compare to traditional AAD libraries like CppAD and Adept?
Traditional AAD libraries like CppAD and Adept provide gradient computation but do not accelerate simulations. AADC delivers 10-100x simulation speedup plus sub-1x adjoint factor (vs 2-5x for traditional libraries). Integration is simpler with less than 1% code changes vs months of template modifications. The key difference is AADC is a graph compiler, not just an AAD library.
How does AADC compare to Enzyme (LLVM-based AD)?
Enzyme is an LLVM-based AD tool that provides efficient gradients but does not accelerate simulations. AADC delivers 10-100x simulation speedup plus faster AAD (adjoint factor <1 vs 1.2-1.3x for Enzyme). Enzyme requires LLVM expertise and full recompilation, while AADC works with existing toolchains. Enzyme has limited support for complex OO C++ common in finance.
How does AADC compare to ML frameworks like JAX and TensorFlow?
ML frameworks are optimized for neural network training, not financial calculations. AADC is 2-10x faster for typical quant workloads like Monte Carlo simulations. ML frameworks have ecosystem lock-in and numerical precision issues (float32 vs double). AADC works with existing C++ code while JAX/TF require complete rewrites to their paradigm.
How is AADC similar to TensorFlow and PyTorch?
AADC applies the same computational graph techniques that power ML frameworks. Like TensorFlow and PyTorch, AADC records a computational graph once, optimizes it globally, then executes it many times. The key difference is AADC is optimized for scalar operations (common in scientific computing) and includes automatic adjoint differentiation for sensitivity computation.
How does Python Accelerator compare to JAX, PyTorch, and TensorFlow?
Independent benchmarks show MatLogica Python Accelerator is 10x+ faster than JAX, PyTorch, and TensorFlow for quantitative finance workloads. While ML frameworks are optimized for large tensor operations, Python Accelerator is specifically designed for quantitative finance applications with scalar operations and complex computational graphs typical in derivatives pricing and Monte Carlo simulations.
Can AADC replace TensorFlow for all ML applications?
AADC excels for quantitative finance ML applications with moderate model sizes. For very large models (>10K inputs) or when leveraging pre-trained model ecosystems (transfer learning, computer vision), TensorFlow/PyTorch remain better choices. AADC is optimal for the quant finance sweet spot.
How does AADC compare to finite differences for gradient computation?
AADC provides machine-precision gradients (unlike finite differences which have truncation errors), computes all gradients in approximately the same time as one forward evaluation (vs N × forward cost for finite differences), and requires minimal development time (days vs months for manual adjoint implementation).
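The N × forward cost of finite differences is easy to demonstrate by counting function evaluations (a generic illustration, nothing AADC-specific):

```python
def gradient_fd(f, x, h=1e-6):
    """Bump-and-revalue gradient via forward differences: one base
    evaluation plus one extra evaluation per input, so cost grows
    linearly with the number of inputs."""
    base = f(x)
    grad = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += h
        grad.append((f(bumped) - base) / h)
    return grad

calls = 0
def f(x):
    global calls
    calls += 1
    return sum(v * v for v in x)   # f(x) = sum of squares, df/dx_i = 2*x_i

g = gradient_fd(f, [1.0, 2.0, 3.0])
# calls == 4: one base run plus one bump per input. An adjoint pass delivers
# all three components for roughly the cost of a single evaluation, and
# without the truncation error visible in the FD estimates.
```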
How does AADC compare to hand-optimised C++ for Greeks calculation?
AADC is 4.5× faster than hand-optimised C++ for computing Greeks. While C++ may edge out AADC slightly for pricing-only on small trade sets, AADC dramatically outperforms when Greeks are required. Traditional AAD tools slow down code by 4× to compute risks, while AADC accelerates by 4.5× while computing all Greeks simultaneously.
Cloud
How does AADC enable cloud deployment?
AADC supports hybrid cloud architectures where sensitive operations (model development, kernel recording) happen on-premises while computation-heavy execution runs in the cloud. Only binary kernels are deployed to the cloud; source code is not included.
What cloud cost savings can I expect with Python Accelerator?
Organizations using MatLogica Python Accelerator achieve 90% or more cloud computing cost reductions for quantitative workloads. The dramatic performance improvements (1000x faster) mean you need significantly fewer cloud resources to run the same Python simulations, directly translating to reduced infrastructure costs.
How much can AADC reduce my cloud computing costs?
AADC typically reduces cloud costs by 50-99%, depending on your workload. The 6-1000x efficiency improvement means you need significantly fewer compute resources for the same calculations. Many clients see savings of approximately $100K per $1M of cloud spend, with ROI achieved in 2-4 months.
How does AADC's cloud deployment work securely?
Source code stays on-premises: only binary kernels, which contain no source code, are deployed to cloud infrastructure. The architecture supports hybrid on-prem/cloud deployments with elastic scaling.
Can I combine cloud optimization with legacy modernization?
Yes, cloud optimization works with any implementation path. Most clients combine cloud savings with another route for maximum ROI. For example, you can modernize legacy code (Route 1) and immediately benefit from reduced cloud costs, achieving both faster calculations and lower infrastructure spending.
How quickly will I see cloud cost savings after implementing AADC?
Cloud cost savings are typically visible in your next invoice after deployment. Since AADC reduces compute requirements by 6-1000x, you immediately need fewer instances, shorter run times, or both. Most organizations achieve positive ROI on cloud optimization within 2-4 months.
Does AADC work with AWS, Azure, and Google Cloud?
Yes, AADC is cloud-agnostic and works with all major cloud providers including AWS, Azure, and Google Cloud. The binary kernel deployment model is platform-independent, allowing you to deploy across multiple clouds or maintain a hybrid architecture.
Legacy Optimization
How does AADC modernize legacy quant systems without requiring a complete rewrite?
AADC uses a type replacement approach where you substitute your numeric types (like double) with AADC's idouble type. This requires less than 1% code changes while preserving your existing business logic, institutional knowledge, and proven algorithms. The JIT compiler then automatically optimizes your code and adds AAD capabilities without modifying your core logic.
What performance improvements can I expect when modernizing legacy code with AADC?
Legacy systems typically see 6-100x speedup after AADC integration. Greeks calculations become faster than the original pricing, and cloud infrastructure costs can be reduced by 50% or more. The exact improvement depends on your specific workload, but most clients see positive ROI within 6-12 months.
Can AADC help with FRTB and SA-CCR regulatory compliance?
Yes. FRTB and SA-CCR require extensive sensitivity calculations that are computationally expensive with traditional approaches. AADC's automatic differentiation provides all required Greeks and sensitivities in a single pass, making regulatory calculations 6-100x faster. This enables firms to meet regulatory deadlines and run more scenarios within compliance windows.
How long does legacy system integration typically take?
Most legacy integrations follow a phased approach: Assessment (2-4 weeks), Proof of Concept on representative workload (4-6 weeks), Production Integration (2-3 months), and Full Rollout (varies by scope). The minimal code changes required (less than 1%) significantly reduce integration risk and timeline compared to traditional modernization approaches.
Greenfield
Why should I use AADC for a new quant development project instead of building from scratch?
AADC enables 3-4x faster development because you can write clean, readable code without worrying about performance optimization or manual AAD implementation. You get production-grade performance from day one, automatic differentiation built-in, and can focus entirely on your models and business logic rather than low-level optimization.
How does AADC compare to vendor solutions for new quant systems?
AADC offers 60-80% cost reduction compared to traditional vendor solutions. There are no per-core licensing fees, unlimited scalability, and full ownership of your technology stack. You avoid vendor lock-in, can customize without limitations, and maintain complete control over your roadmap and IP.
Can I prototype quickly with AADC and still have production-ready performance?
Yes, this is one of AADC's key advantages. You can write straightforward Python or C++ code during prototyping, and AADC's JIT compiler automatically optimizes it to production-grade performance. There's no need to rewrite or optimize code when moving from prototype to production—the same code runs efficiently in both environments.
What programming languages does AADC support for new development?
AADC natively supports C++ and Python. For C++, you use the idouble type replacement. For Python, the AADC Python Accelerator provides native integration with NumPy and common quant libraries. Both approaches deliver the same performance benefits and automatic differentiation capabilities.
Open Source
How does AADC accelerate QuantLib and ORE?
AADC integrates with QuantLib and ORE through type replacement, substituting numeric types with AADC's optimized types. This provides 6-100x speedup without modifying the open-source library code. You get community-validated algorithms with proprietary-grade performance, plus automatic AAD for all models.
Can I use AADC with QuantLib to replace expensive vendor solutions?
Yes, many organizations use AADC-accelerated QuantLib/ORE to retire expensive vendor solutions. You get the benefit of community-validated, peer-reviewed algorithms with performance that matches or exceeds commercial alternatives, all without vendor licensing fees or lock-in.
Does AADC work with the latest versions of QuantLib?
AADC maintains compatibility with current QuantLib and ORE versions. The integration approach is designed to work with the standard QuantLib API, so upgrades to new QuantLib versions typically require minimal adjustment. MatLogica provides guidance on version compatibility.
What's the advantage of AADC over hand-coded AAD for QuantLib?
Hand-coding AAD for QuantLib is extremely time-consuming and error-prone, requiring deep expertise in both the library internals and adjoint methods. AADC provides automatic differentiation with no manual implementation required, works across all QuantLib models, and is mathematically guaranteed to be correct. Development time is reduced from months to days.
Use Cases
How can AAD help with Monte Carlo model calibration?
AADC enables Monte Carlo-based calibration of complex multi-asset models without requiring inflexible analytical approximations. You can calibrate models directly using simulation while efficiently computing gradients for optimization, avoiding the limitations of closed-form solutions.
What are the benefits for time series analysis in finance?
Automated design of custom neural networks for time series delivers up to 3x better accuracy than cutting-edge methods, with training time several times lower thanks to AADC technology. This is ideal for financial forecasting, trading signal generation, and risk modeling applications.
How does AADC improve P&L attribution?
AADC provides stable, consistent sensitivities via AAD—no finite-difference noise, no approximation errors. The same Greeks serve all downstream uses: hedging, P&L attribution, regulatory reporting. This results in clean variance analysis and eliminates the audit questions that arise when different systems produce different Greeks.