Math-heavy programming as it's supposed to be

We developed a computational framework that transparently makes numerically complex code run much faster. It includes a cutting-edge variant of Automatic Adjoint Differentiation (AAD) and can also be used as a Machine Learning platform in C++ optimized for quant applications.

Learn More
Try it now

Supercharge analytics

You can code in traditional object-oriented languages such as C++ or Python while our product takes care of performance.

Your code stays easy to maintain, runs 6-100x faster on modern CPUs, and the AADC library computes sensitivities automatically when required.

Easy integration

We have extensive experience integrating our AADC library into existing analytics projects.

Unlike our competitors, our quant team will complete the initial integration within 2 weeks. We will speed up your analytics by 6-100x and compute sensitivities automatically.

Speed up and compute derivatives

Speed up your analytics by 6-100x and compute sensitivities using AAD, automatically and quickly.

See our Products

Straightforward integration

MatLogica can be integrated in a few weeks, while alternatives take months or years. The required code changes are minimal.

See Our Services

Proven performance

Our benchmarks are impressive: together with Intel, we demonstrated a 1770x speedup for XVA. And there’s more!

See Our Research

Our product

Extract maximum performance from a modern CPU

We developed a just-in-time compiler tailored for complex repetitive calculations and Automatic Adjoint Differentiation. It utilises native CPU vectorisation (AVX2/AVX-512) and is fully thread-safe by design.

Our patent-pending technology is easy to integrate. It crosses the two main approaches to AAD - operator overloading and code generation - by using overloaded operators to autogenerate an adjoint version of the original function at runtime. Because the required code changes are minimal, the code stays easy to read and maintain. When sensitivities are required - for finance, insurance risk, machine learning, or industrial mathematics - they are calculated automatically.
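To make the operator-overloading idea concrete, here is a minimal, generic reverse-mode AD sketch in C++. It is not the AADC API: the Var type, the global tape, and the backward() helper are simplified stand-ins that show how overloaded operators can record a calculation and how a single backward sweep then yields the sensitivities to every input at once.

// Minimal reverse-mode (adjoint) AD via operator overloading.
// Illustrative sketch only -- not the MatLogica AADC API.
#include <cmath>
#include <cstdio>
#include <vector>

struct Node { int a, b; double da, db; };   // parent indices and local partial derivatives
static std::vector<Node> tape;              // global tape recording every operation

struct Var {
    double val; int idx;
    explicit Var(double v = 0.0) : val(v), idx((int)tape.size()) {
        tape.push_back({idx, idx, 0.0, 0.0});            // leaf (input) node
    }
    Var(double v, int a, double da, int b, double db) : val(v), idx((int)tape.size()) {
        tape.push_back({a, b, da, db});                  // intermediate node
    }
};

// Each overloaded operator records its local derivatives on the tape.
Var operator+(const Var& x, const Var& y) { return Var(x.val + y.val, x.idx, 1.0, y.idx, 1.0); }
Var operator*(const Var& x, const Var& y) { return Var(x.val * y.val, x.idx, y.val, y.idx, x.val); }
Var log(const Var& x)                     { return Var(std::log(x.val), x.idx, 1.0 / x.val, x.idx, 0.0); }

// One backward sweep over the tape propagates adjoints to all inputs.
std::vector<double> backward(const Var& out) {
    std::vector<double> adj(tape.size(), 0.0);
    adj[out.idx] = 1.0;
    for (int i = (int)tape.size() - 1; i >= 0; --i) {
        adj[tape[i].a] += tape[i].da * adj[i];
        adj[tape[i].b] += tape[i].db * adj[i];
    }
    return adj;
}

int main() {
    Var x(2.0), y(3.0);
    Var f = x * y + log(x);                      // f(x, y) = x*y + log(x)
    std::vector<double> adj = backward(f);
    // Expected: df/dx = y + 1/x = 3.5, df/dy = x = 2.0
    std::printf("df/dx = %.3f, df/dy = %.3f\n", adj[x.idx], adj[y.idx]);
    return 0;
}

The key point is cost: one recorded forward pass plus one backward sweep produces the sensitivities to all inputs at once, regardless of how many inputs there are, which is what makes AAD attractive for risk calculations with thousands of market inputs.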

How it works

Breakthrough technology

Our JIT compiler sifts through the code at runtime to extract all operations relevant to a specific task. It then optimizes the calculations and generates binary kernels that execute directly on the CPU. The first kernel handles the forward calculations; the second (optional) kernel performs the adjoint calculations for the sensitivities.

Instead of calling the original function, we perform the repetitive calculations using the fully vectorized, thread-safe, and NUMA-aware kernels that deliver the groundbreaking speed-up!
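As a rough illustration of this record-once, replay-many-times pattern, the C++ sketch below runs a forward kernel and an adjoint kernel over a batch of scenarios split across threads. The kernel types and the hand-written lambdas are hypothetical stand-ins, not the actual AADC interface; only the overall flow is illustrated.

// Record-once / replay-many-times pattern (hypothetical stand-ins, not the AADC API).
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Stand-ins for the JIT-generated binary kernels: one forward, one adjoint.
using ForwardKernel = std::function<double(const std::vector<double>&)>;
using AdjointKernel = std::function<std::vector<double>(const std::vector<double>&)>;

int main() {
    // 1) "Record" the pricing function once. Here we simply wrap a toy payoff;
    //    in the real framework this step would run the instrumented code and
    //    JIT-compile the forward and adjoint kernels.
    ForwardKernel price = [](const std::vector<double>& in) {
        double spot = in[0], strike = in[1];
        return spot > strike ? spot - strike : 0.0;        // toy intrinsic value
    };
    AdjointKernel price_adj = [](const std::vector<double>& in) {
        double d = in[0] > in[1] ? 1.0 : 0.0;
        return std::vector<double>{d, -d};                 // d(price)/d(spot), d(price)/d(strike)
    };

    // 2) Replay the kernels over many scenarios, split across threads.
    std::vector<std::vector<double>> scenarios;
    for (int i = 0; i < 8; ++i) scenarios.push_back({90.0 + 5.0 * i, 100.0});

    std::vector<double> pv(scenarios.size());
    std::vector<std::vector<double>> greeks(scenarios.size());

    auto worker = [&](size_t begin, size_t end) {
        for (size_t s = begin; s < end; ++s) {
            pv[s]     = price(scenarios[s]);       // forward kernel: value
            greeks[s] = price_adj(scenarios[s]);   // adjoint kernel: sensitivities
        }
    };
    std::thread t1(worker, 0, scenarios.size() / 2);
    std::thread t2(worker, scenarios.size() / 2, scenarios.size());
    t1.join(); t2.join();

    for (size_t s = 0; s < scenarios.size(); ++s)
        std::printf("spot=%5.1f  pv=%6.2f  delta=%4.1f\n",
                    scenarios[s][0], pv[s], greeks[s][0]);
    return 0;
}

In the real framework the kernels are JIT-compiled machine code rather than hand-written lambdas, so the same loop structure also benefits from AVX2/AVX-512 vectorisation and NUMA-aware multi-threading.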

Would you like to know more?

MatLogica’s applications extend far beyond traditional AAD.

Our product can be used to develop Machine Learning models effectively.

We also know how to apply algorithmic differentiation to Longstaff-Schwartz regression.