Blog Post

Guilt-free Live-Risk in the Cloud: a New AAD-powered Approach

Discover a target architecture for cloud-based Live Risk that uses Code Generation AAD™ to achieve fast and cheap computation of sensitivities, enabling guilt-free Live Risk


Cost, flexibility, and scalability are among the primary factors encouraging banks to transition to the cloud. For banks, the ultimate goal is often to enable 'Live Risk'. However, the question remains whether this can in fact be achieved with a naïve "lift-and-shift" approach to cloud migration.

There are two distinct approaches to genuinely achieving "Live Risk" (discussed in detail in the video):

  • "On-demand scaling": spawning new instances takes significant time and introduces latency at precisely the turbulent moments when speed matters most; banks will also compete for resources during market turbulence, leading to higher prices and/or a shortage of capacity;
  • "Always-on": the portfolio, analytics, and scenarios are pre-loaded, so no time is wasted on provisioning. But this is costly; it is effectively throwing money at the problem!
In contrast, a cost-efficient and future-proof solution is to use a modern Code Generation AAD tool that enables fast generation of efficient binary kernels. These kernels can represent curve-building, portfolio pricing (the original model), and risk (the adjoint), and can be securely deployed to the cloud at the beginning of the trading day and used continuously for tick-level risk throughout the day.

MatLogica’s Automatic Adjoint Differentiation Compiler (AADC) delivers precisely this.

In this article, we present a target architecture for cloud-based Live Risk that uses MatLogica’s AADC, and explain how risks can be computed faster and more cheaply, enabling guilt-free Live Risk.

What is AADC?

Consider the example of a large portfolio containing a variety of trades, priced with several models of varying complexity. The portfolio may also include curve-building functionality. The objective is to obtain tick-level pricing and risk for this complex portfolio.

Achieving this with an always-on “Lift & Shift" approach would be very expensive. It would also bring the additional risks of exposing proprietary models and trade data.

MatLogica AADC uses a different approach. During a single execution of the original analytics, AADC generates binary kernels representing the optimised and compressed raw elementary operations of the original program. Optionally, an extra kernel can also be generated, which computes the backward pass of automatic differentiation to calculate all risks. These kernels typically run 20x-50x faster than the original analytics and compute all risks in constant time, thanks to run-time optimisations and efficient use of native CPU vectorisation (AVX2/AVX512). The kernels are multi-thread safe by design, enabling extra scalability that is not available in the original analytics.
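To make the record-once/replay-many idea concrete, here is a toy tape-based sketch in Python. It is purely illustrative and not MatLogica's API: the real tool records the operations once and emits optimised native machine code, whereas this sketch simply replays a Python list of elementary operations. Note how a single backward sweep over the tape yields sensitivities to all inputs at once, which is why the cost of computing risks stays constant.

```python
# Toy sketch of record-once / replay-many AAD (illustrative names only;
# the real AADC kernels are compiled binary code, not interpreted tapes).
import math

class Tape:
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.ops = []                       # recorded elementary operations

    def record(self, op, a, b=None):
        idx = self.n_inputs + len(self.ops) # slot for this op's result
        self.ops.append((op, a, b))
        return idx

    def forward(self, inputs):
        # Replay the recorded operations on fresh inputs (one market tick).
        vals = list(inputs)
        for op, a, b in self.ops:
            if op == "add":
                vals.append(vals[a] + vals[b])
            elif op == "mul":
                vals.append(vals[a] * vals[b])
            elif op == "exp":
                vals.append(math.exp(vals[a]))
        return vals

    def adjoint(self, vals):
        # One backward sweep gives d(output)/d(every input) simultaneously.
        bar = [0.0] * len(vals)
        bar[-1] = 1.0
        for i in range(len(self.ops) - 1, -1, -1):
            op, a, b = self.ops[i]
            out = self.n_inputs + i
            if op == "add":
                bar[a] += bar[out]
                bar[b] += bar[out]
            elif op == "mul":
                bar[a] += bar[out] * vals[b]
                bar[b] += bar[out] * vals[a]
            elif op == "exp":
                bar[a] += bar[out] * vals[out]  # d/dx exp(x) = exp(x)
        return bar[: self.n_inputs]

# "Record" a toy payoff price(s, r) = s * exp(r) during one execution...
tape = Tape(n_inputs=2)
t0 = tape.record("exp", 1)       # exp(r)
t1 = tape.record("mul", 0, t0)   # s * exp(r)

# ...then replay it on every market tick; all risks cost one extra sweep.
vals = tape.forward([100.0, 0.05])
price = vals[-1]
ds, dr = tape.adjoint(vals)      # sensitivities to s and r
```

In the real system, the tape is compiled down to vectorised, multi-thread-safe machine code, but the separation into a recording step and a cheap replay/adjoint step is the same.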

Remarkably, these kernels are almost Enigma-secure. AADC receives a sequence of elementary operations from the original analytics, with all loops unrolled and with values such as strike, expiry date, or valuation date appearing only as hard-coded constants. With no visibility of the original models, AADC generates optimised and compressed kernels in which all business-critical information is hidden among the ones and zeros. Accordingly, even the same portfolio of trades will have a different binary representation from one trading day to the next.

Importantly for cloud execution, AADC kernels are generated for specific hardware, meaning that steps such as creating docker containers are not required and the kernel can be serialised immediately and sent for execution on the cloud at the start of a trading day.

MatLogica has developed a standardised, semi-automated approach to library instrumentation which delivers first results much faster, and in a far more controlled fashion, than traditional AAD packages. In addition, MatLogica’s breakthrough work on automating the Implicit Function Theorem (IFT) enables fully automated differentiation of exact-fit calibration routines and provides an approximate solution to almost-exact calibration problems (read more on the application of the Automated IFT for Live Risk).

Target Architecture for Cloud-based Live Risk

There are several options available to a solutions architect, depending on the nature of the trading business, and the existing infrastructure and processes. For instance, one can choose to implement the AADC approach for all trades, including instruments that do not benefit from AADC. Alternatively, the valuation of such trades can be retained within the original analytics, where it gets aggregated with the measures computed by AADC kernels on the cloud, providing a complete view of the position and risks.

In the example below, we present a simplified diagram of a generalised solution architecture. In this instance, MatLogica AADC is part of the Quant Library (which is in reality made up of multiple components, APIs, and services) and is used to price all products. When a request to price a trade (or a portfolio) comes in, the Quant Library checks whether a recording (kernel) for that configuration already exists. If it does not, the library instructs AADC to generate such a kernel and send it to the cloud. If it does, the library invokes the kernel and aggregates the results as necessary. This might sound a bit like the "always-on" approach, but thanks to AADC, the computational resources required by the MatLogica kernels are far smaller than in a conventional "lift-and-shift" "always-on" approach. Therefore, expensive cloud resources are not wasted.
The Quant Library needs to be extended to cover functionality required for Live Risk: identifying the objects that benefit from AADC, managing the kernel lifecycle, distributing market data, aggregating the results, etc.
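The "generate if missing, otherwise reuse" flow above can be sketched as follows. All names here are hypothetical illustrations, not MatLogica's API: the portfolio configuration is hashed into a cache key, a kernel is recorded and shipped on the first request, and every subsequent tick reuses the cached kernel.

```python
# Illustrative sketch of kernel caching in the Quant Library
# (hypothetical names; not MatLogica's actual API).
import hashlib
import json

class KernelCache:
    def __init__(self, generate_kernel, deploy_to_cloud):
        self._generate = generate_kernel   # records + compiles a kernel
        self._deploy = deploy_to_cloud     # ships the binary to the grid
        self._cache = {}

    def key(self, portfolio_config):
        # The configuration (trades, models, parameters) identifies a kernel.
        blob = json.dumps(portfolio_config, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def price(self, portfolio_config, market_data):
        k = self.key(portfolio_config)
        if k not in self._cache:           # first request: record and ship
            kernel = self._generate(portfolio_config)
            self._deploy(kernel)
            self._cache[k] = kernel
        return self._cache[k](market_data) # subsequent ticks: reuse

# Toy stand-ins for the generation and deployment steps:
def toy_generate(config):
    return lambda md: sum(md[ccy] * w for ccy, w in config["weights"].items())

cache = KernelCache(toy_generate, deploy_to_cloud=lambda kernel: None)
cfg = {"weights": {"EUR": 2.0, "USD": 3.0}}
pv = cache.price(cfg, {"EUR": 1.1, "USD": 1.0})   # generates, then prices
pv2 = cache.price(cfg, {"EUR": 1.2, "USD": 1.0})  # reuses the cached kernel
```

The design choice worth noting is that the cache key depends only on the portfolio configuration, not on the market data, so a kernel recorded once serves arbitrarily many market-data updates during the day.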

As presented in the figure above, when pricing a new configuration of a trade, the kernel generation time is part of the overall execution time. To reduce this overhead, MatLogica has developed a specialised graph compiler that generates machine code directly during the program execution, without relying on an external compiler.

However, even with this innovation, kernel generation needs to be planned carefully for optimal performance, as it can take seconds or even minutes for more complex structures. We suggest the following two approaches to mitigate the problem:
  • Large but static valuations: extensive calculations that remain predominantly static during the trading day, such as portfolio pricing and risk, where the composition of trades, models, and trade parameters changes little. Recording such a kernel at the beginning of the day and reusing it throughout the trading day is usually sufficient. There is, in fact, no limit on the frequency of re-recordings, which can also be triggered by events such as significant changes to portfolio composition. Note that kernel generation takes longer for larger functions, so the need for regeneration should be evaluated accordingly.
  • Smaller ad-hoc valuations, such as intraday activities: for any new trades, or for pre-trade analysis, additional kernels can be generated on the fly and sent to the cloud for execution. The position and risks can then be aggregated in the Quant Library as required.
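The split between a large static start-of-day kernel and small ad-hoc intraday kernels can be sketched as below. This is an illustrative aggregation layer with hypothetical names (real kernels would return risks alongside present value), showing how results from several kernels combine into one position view.

```python
# Sketch of aggregating a start-of-day portfolio kernel with ad-hoc
# intraday kernels (illustrative names; real kernels return risks too).
class LiveRiskView:
    def __init__(self, base_kernel):
        # Large, static recording made at the start of the trading day.
        self.kernels = [base_kernel]

    def add_trade_kernel(self, kernel):
        # Small kernel recorded on the fly for a new or pre-trade position.
        self.kernels.append(kernel)

    def total_pv(self, market_data):
        # Results from all kernels are aggregated into one position view.
        return sum(k(market_data) for k in self.kernels)

book = LiveRiskView(base_kernel=lambda md: 1_000_000 * md["discount"])
book.add_trade_kernel(lambda md: 50_000 * md["discount"])  # intraday trade
pv = book.total_pv({"discount": 0.98})
```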
In practice, solution architects may want to segregate the recordings according to the nature of the business, the portfolio structure, instruments, update frequency, business requirements, and the existing technology stack in order to deliver optimal performance.

Another design decision concerns market data calibration. This can be done within the pricing/risk kernel, or segregated into a separate recording. A separate recording ensures that all trades are priced and risked on the same market data and models, guaranteeing consistency between otherwise independent calculations and making it easy to reproduce them.

The AADC Kernel Lifecycle

Kernel creation is triggered by the Quant Library on an event that changes the state of the trade portfolio. This can happen automatically, based on business logic, such as when the books are reconciled or at the start of a new trading day. It can also be triggered by a trader who wishes to price a new trade. Once generated, the kernel can be shipped to the cloud (grid). At this point, it is ready to receive market data updates, which the kernel then processes; this can include curve building, solvers/minimisers, etc. The risk/pricing information is then returned for aggregation and manipulation, after which it is ready for display to the trader. The architecture presented above can also be used to process stress tests: the kernels can be reused to process arbitrary inputs.
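The lifecycle just described can be sketched as a small event-driven state machine. Everything here is a hypothetical illustration (not MatLogica's API): a portfolio event triggers re-recording, market ticks are served from the current kernel, and expiry frees resources.

```python
# Illustrative kernel-lifecycle sketch: re-record on portfolio events,
# serve market-data ticks in between (hypothetical names throughout).
class KernelLifecycle:
    def __init__(self, record):
        self._record = record     # records + compiles a kernel for a portfolio
        self._kernel = None
        self.generations = 0      # how many times we have re-recorded

    def on_portfolio_event(self, portfolio):
        # Start of day, book reconciliation, or a significant change in
        # portfolio composition: record a fresh kernel.
        self._kernel = self._record(portfolio)
        self.generations += 1

    def on_tick(self, market_data):
        # Every market-data update reuses the current kernel.
        if self._kernel is None:
            raise RuntimeError("no kernel recorded yet")
        return self._kernel(market_data)

    def expire(self):
        # Remove the kernel so cloud resources are not wasted.
        self._kernel = None

lc = KernelLifecycle(record=lambda pf: (lambda md: sum(pf) * md["fx"]))
lc.on_portfolio_event([1.0, 2.0])         # start of day
pv = lc.on_tick({"fx": 1.5})              # reused for every tick
lc.on_portfolio_event([1.0, 2.0, 4.0])    # re-record after a new trade
pv2 = lc.on_tick({"fx": 1.5})
```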

Again, this is a high-level diagram, and in practice an organisation will have numerous portfolios aligned to their organisational structure, asset classes, books, desks, etc. The “Quant Library” will need to ensure that any expired kernels are removed to avoid wasting resources.


In this brief article, we have presented a high-level overview of a solution architecture that uses MatLogica AADC for cloud execution without exposing any proprietary data or models.

MatLogica’s AADC solution enables:
  • Fast results: the semi-automated integration approach, combined with the Automated IFT, allows fully automatic differentiation of exact-fit calibration routines and provides an approximate solution to nearly-exact calibration problems;
  • Portfolio risk in milliseconds, with cost reductions of ~100x;
  • Re-use of the same approach for scenario analysis;
  • Re-use of the existing grid task distribution layer;
  • Water-tight security, as the binary kernels contain only the sequence of elementary operations required to compute prices and risk, while proprietary analytics and trade data remain on premises.
See the 'Live Risk' demo!

Please contact us if you require more information.