Interested? Contact us and we will arrange a free demo for you and your team.
We developed a computational framework that makes numerically complex code run significantly faster. It includes a cutting-edge variant of Automatic Adjoint Differentiation (AAD) and can serve as a Machine Learning platform in C++ optimized for quant applications.
You can keep coding in traditional object-oriented languages such as C++ or Python while our product takes care of performance.
Your code stays easy to maintain, runs 6-100x faster on modern CPUs, and the AADC library computes sensitivities automatically when required.
We have extensive experience integrating our AADC library into existing analytics projects.
Unlike our competitors, our quant team will complete initial integration within 2 weeks. We will speed up your analytics by 6-100x and compute sensitivities automatically.
Speed up your analytics by 6-100x and compute sensitivities using AAD, automatically and quickly.
MatLogica can be integrated in a few weeks, while alternatives take months or years, and the required code changes are minimal.
Our benchmarks are impressive: together with Intel, we demonstrated a 1770x speedup for XVA. And there’s more!
We developed a just-in-time compiler tailored for complex repetitive calculations and Automatic Adjoint Differentiation. It uses native CPU vectorization (AVX2/AVX-512) and is fully multi-thread safe by design.
Our patent-pending technology is easy to integrate. We combined the two main approaches to AAD, using overloaded operators to auto-generate an adjoint version of the original function at runtime. The resulting code is easy to read and maintain because code changes are minimal. When sensitivities are required, whether for finance, insurance risk, machine learning, or industrial mathematics, they are calculated automatically.
How it works
Our JIT compiler traces the code at runtime to extract all operations relevant to a specific task. It then optimizes the calculations and generates binary kernels that execute directly on the CPU. The first kernel handles the forward calculations; the second (optional) kernel performs the adjoint calculations for the sensitivities.
Instead of calling the original function, repetitive calculations run through the fully vectorized, multi-thread safe, and NUMA-aware kernel, which delivers the groundbreaking speed-up!
Watch this presentation from a Quantitative Finance Conference, where Dmitri Goloubentsev presents AAD integration strategies. You will also see a real-life example of integrating AADC into an open-source library, QuantLib.
A paper in Parallel Universe Magazine No. 40 features a new approach that turns object-oriented, single-threaded, scalar code into AVX2/AVX-512-vectorized, multi-threaded, thread-safe lambda functions with no runtime penalty.
MatLogica’s applications extend far beyond traditional AAD.
Our product can also be used to develop Machine Learning models efficiently.
We also know how to apply algorithmic differentiation to Longstaff-Schwartz regression.