As Antoine Savine puts it, “The main challenge faced by global investment banks today is a computational one”, and the computation of risks is a primary driver of operational costs, whether performed via manual differentiation, automatic differentiation or bump-and-revalue. Organisations have to employ teams of expensive quants to build and support their solution, while the average compute bill for a Tier 2 bank is in excess of $10M per year, so even cutting that by 10% is significant.
AAD is desirable for its numerical stability and good performance, so having a suitable tool is important to a bank’s competitive advantage. There are currently two commonly practised approaches to implementing AAD: tape-based and source code transformation. A third approach, run-time code generation, is presently less well-known. Each methodology comes with advantages and disadvantages.
Types of automatic differentiation tools
In this post, we’ll discuss the main features and differences between these approaches and go over the advantages and challenges of each one.
Tape-based automatic differentiation tools
Operator Overloading (OO) is used to capture the elementary operations while the analytics executes: every mathematical operation (the computational graph) is recorded in a data structure known as the ‘tape’. The tape is then replayed backwards to compute all of the risks using the adjoint differentiation method.
Tape-based AAD tool
Due to the nature of the OO tape methodology, all computations are linearised, i.e. all loops are unrolled, so the size of the tape grows with the number of elementary operations executed by the original code. Accordingly, substantial memory is required to store the tape data structure, which is both slow to access and expensive.
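The mechanism can be sketched in a few lines of Python. The `Var` type and global tape below are hypothetical simplifications; real tools also record unary functions, manage tape memory carefully, and support checkpointing.

```python
# Minimal sketch of tape-based AAD via operator overloading.
tape = []  # each entry: (result, [(input, local_derivative), ...])

class Var:
    def __init__(self, value):
        self.value = value
        self.adjoint = 0.0

    def __add__(self, other):
        out = Var(self.value + other.value)
        tape.append((out, [(self, 1.0), (other, 1.0)]))
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        tape.append((out, [(self, other.value), (other, self.value)]))
        return out

def backward(result):
    result.adjoint = 1.0
    for out, inputs in reversed(tape):       # replay the tape backwards
        for var, d in inputs:
            var.adjoint += out.adjoint * d   # adjoint accumulation

x, y = Var(3.0), Var(4.0)
z = x * y + x                 # every operation lands on the tape
backward(z)
print(x.adjoint, y.adjoint)   # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Note that the tape holds one entry per elementary operation executed, which is exactly why memory consumption becomes the bottleneck for large Monte Carlo runs.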
However, whilst the approach can work well for localised projects, it does not scale out, and therefore does not present a feasible solution for a real-life quant software environment.
Source Code Transformation AAD approach
With source code transformation, a standalone tool parses the original analytics source code at build time and emits new source code that computes the adjoints alongside the original results. Because differentiation happens before the program runs, there is no tape to store; however, the tool must support every language construct the analytics uses, and the adjoint code has to be regenerated and rebuilt whenever the primal code changes.
Run-time Code Generation AAD approach
With run-time code generation, the analytics is executed once and the operations performed are recorded and turned into executable code for both the function and its adjoint. This generated code can process arbitrary inputs (random numbers or market rates) and is reused in all subsequent iterations. It delivers maximised performance as it exploits information available only at runtime and can apply additional optimisations.
A run-time code generation AAD™ tool might look like this:
Code Generation AAD approach
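As an illustration of the idea (not MatLogica's actual mechanism), the Python sketch below traces a toy payoff once, generates source code for the function together with its adjoint, compiles it once, and then reuses the compiled kernel for arbitrary inputs. The `Trace` class and two-operator coverage are deliberately minimal.

```python
# Illustrative sketch of run-time code generation AAD.
ops = []  # recorded at trace time: (output, operator, lhs, rhs)

class Trace:
    counter = 0
    def __init__(self, name):
        self.name = name
    @classmethod
    def _op(cls, op, a, b):
        cls.counter += 1
        out = Trace(f"t{cls.counter}")
        ops.append((out.name, op, a.name, b.name))
        return out
    def __add__(self, other):
        return Trace._op("+", self, other)
    def __mul__(self, other):
        return Trace._op("*", self, other)

# 1. Trace the payoff once with symbolic inputs.
x, y = Trace("x"), Trace("y")
z = x * y + y                      # records t1 = x*y, t2 = t1+y

# 2. Generate source: forward sweep, then reverse (adjoint) sweep.
lines = ["def kernel(x, y):"]
for out, op, a, b in ops:
    lines.append(f"    {out} = {a} {op} {b}")
lines.append(f"    d = {{'{ops[-1][0]}': 1.0}}")   # seed the result
for out, op, a, b in reversed(ops):
    if op == "*":
        contribs = ((a, f"d['{out}'] * {b}"), (b, f"d['{out}'] * {a}"))
    else:  # "+"
        contribs = ((a, f"d['{out}']"), (b, f"d['{out}']"))
    for name, expr in contribs:
        lines.append(f"    d['{name}'] = d.get('{name}', 0.0) + {expr}")
lines.append(f"    return {ops[-1][0]}, d.get('x', 0.0), d.get('y', 0.0)")

# 3. Compile once; reuse for every subsequent iteration.
namespace = {}
exec(compile("\n".join(lines), "<generated>", "exec"), namespace)
kernel = namespace["kernel"]

print(kernel(3.0, 4.0))  # (16.0, 4.0, 4.0): value, dz/dx, dz/dy
```

The expensive step is the one-off generation and compilation; every later call runs straight-line generated code with no tape and no recording overhead.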
The compilation is done once per task (trade, portfolio, etc.) and does not depend on the number of iterations. However, both the function and its adjoint need to be regenerated each time the task configuration changes, such as when pricing a new trade, changing the trading date, or amending the portfolio. Accordingly, for real-life quant and risk systems, the time taken to generate this code is part of the overall execution time and therefore crucial to any overall performance gain.
In contrast to the tape-based approach, code generation AAD™ can speed up both the original function and its sensitivities. Therefore, second-order and scenario risk can be accelerated, unlike tape-based AAD tools, which can only accelerate first-order risk. However, the approach is only beneficial when the number of iterations substantially outweighs the initial compilation time. Minimising the compilation time is thus the key to this solution, and an off-the-shelf compiler, such as LLVM or a standard C++ compiler, simply cannot deliver it.
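The trade-off can be made concrete with a back-of-the-envelope calculation; all timings below are purely hypothetical.

```python
# Illustrative break-even arithmetic for code generation AAD.
compile_time = 2.0    # seconds to generate + compile the kernel (hypothetical)
t_original   = 1e-3   # seconds per iteration, original analytics (hypothetical)
t_kernel     = 1e-4   # seconds per iteration, generated kernel (hypothetical)

# Code generation wins when: compile_time + N * t_kernel < N * t_original
n_breakeven = compile_time / (t_original - t_kernel)
print(round(n_breakeven))  # 2222 iterations to amortise the compile cost
```

With these numbers, a Monte Carlo run of a million paths amortises the compilation many times over, but a quick single-scenario revaluation would not; this is why shrinking the compile time widens the set of workloads that benefit.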
Code generation + operator overloading - the MatLogica AADC way
MatLogica’s AADC combines the two techniques: operator overloading records the computation as it runs, and a specialised just-in-time compiler then turns the recording into optimised binary kernels far faster than a general-purpose compiler could. In addition to the fast compilation, the generated kernels themselves are very quick: they are vectorised for native AVX2 or AVX-512 architectures and therefore process 4 or 8 double-precision samples in parallel, and they are thread-safe. This results in speed-ups of 100x including the compilation time!
AADC Just-In-Time Compilation
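As a rough analogy in NumPy (AADC itself emits native machine code, not Python), a vectorised kernel processes a whole batch of samples per call, much like SIMD lanes. The payoff function and batch width below are hypothetical.

```python
# SIMD-style batching analogy: each kernel call handles 8 "lanes",
# mirroring the 8 doubles an AVX-512 register holds.
import numpy as np

def kernel(spot, vol):
    # Toy payoff evaluated element-wise across all lanes at once.
    return np.maximum(spot * (1.0 + vol) - 100.0, 0.0)

rng = np.random.default_rng(0)
spots = rng.uniform(90.0, 110.0, size=(1000, 8))  # 1000 calls x 8 lanes
vols = np.full(8, 0.1)

payoffs = np.array([kernel(row, vols) for row in spots])
print(payoffs.shape)  # (1000, 8): 8 samples advance per "instruction"
```

Because each kernel invocation is independent and the kernels are thread-safe, batches can also be distributed across cores, multiplying the SIMD gain by the core count.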
Comparison of AAD tools: performance, usability, memory
Thus, with very manageable changes to the quant libraries, speed-ups of 100x can be achieved on existing hardware, yielding impressive savings on compute bills. Even organisations that already use some form of AAD will benefit from the AADC approach, with speed-ups of 5-20x on a single core.
MatLogica AADC is a modern AAD tool that acts as an abstraction layer delivering optimal performance and ease of use. Our clients get initial results in weeks, and once the integration is complete, organisations observe additional benefits such as faster turnaround for new model development and better numerical stability.
The ability to compute risks faster also enables running more what-if scenarios and extra backtesting, enabling better decision-making. The resulting Live Risk capability permits new trading opportunities to be seized before the competition!