Modern compilers for object-oriented languages are not optimized for calculation-intensive tasks. Extensive use of abstraction and virtual functions makes it possible to write easy-to-read code, but the performance penalty is high.
Writing high-performance vectorized, thread-safe code is tedious and time-consuming, and the result is usually hard to maintain. MatLogica takes care of performance so developers can focus on adding value.
A game-changing innovation
How do MatLogica's products work?
MatLogica’s easy-to-integrate JIT compiler converts user code (C++ or Python) into machine code containing close to the theoretical minimum number of operations needed to complete the task.
This loads far fewer operations onto the CPU. We then add vectorization and multi-threading, extracting the maximum speed-up a modern CPU can theoretically deliver - something other libraries fail to achieve.
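As a loose open-source analogy (not MatLogica's API), the sketch below shows the kind of transformation involved: numba's JIT compiler turns a plain, readable Python loop into vectorized, multi-threaded machine code.

```python
# Illustrative analogy using the open-source numba JIT, not MatLogica's AADC:
# a plain Python loop is compiled to machine code and spread across CPU cores.
import numpy as np
from numba import njit, prange

@njit(parallel=True, fastmath=True)
def portfolio_payoff(spots, strikes):
    total = 0.0
    for i in prange(spots.shape[0]):          # paths run in parallel across threads
        total += max(spots[i] - strikes[i], 0.0)
    return total

spots = np.random.lognormal(0.0, 0.2, 1_000_000)
strikes = np.full(1_000_000, 1.0)
print(portfolio_payoff(spots, strikes))       # first call compiles; later calls run at native speed
```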
MatLogica’s AADC library additionally computes the reverse accumulation equations directly in the machine code (other libraries use a tape), resulting in far better performance than alternative approaches.
Our tests show that AADC calculates both the original function and its derivatives faster than the original code calculates the function alone, often by a factor of 6-100. This is achieved with minimal changes to the original code, since MatLogica’s compiler does virtually all the work.
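To make "reverse accumulation" concrete, here is a minimal hand-written sketch of the technique for a toy function. It is purely illustrative: AADC emits the equivalent adjoint steps directly as machine code rather than requiring them to be written by hand or replayed from a tape.

```python
# Minimal sketch of reverse accumulation (generic AAD, not MatLogica's code):
# one forward sweep records intermediates, one reverse sweep yields all
# derivatives of f(x, y) = y * exp(x) + x at roughly the cost of f itself.
import math

def f_and_gradient(x, y):
    # forward sweep
    a = math.exp(x)
    b = y * a
    z = b + x
    # reverse sweep: seed dz/dz = 1 and propagate adjoints backwards
    z_bar = 1.0
    b_bar = z_bar            # from z = b + x
    x_bar = z_bar
    y_bar = b_bar * a        # from b = y * a
    a_bar = b_bar * y
    x_bar += a_bar * a       # from a = exp(x), da/dx = exp(x)
    return z, x_bar, y_bar   # value and both partial derivatives in one pass

print(f_and_gradient(1.0, 2.0))  # (2e + 1, 2e + 1, e)
```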
From pure acceleration to fast AAD
Our product range
MatLogica’s accelerator utilizes native CPU vectorization and multi-threading, delivering performance comparable to a GPU. For problems such as Monte-Carlo simulations, historical analysis, and ‘what if’ scenarios, speed can be increased 6-100x, depending on the performance of the original code.
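For illustration, a workload of this kind might look like the generic Monte-Carlo pricer below (a textbook Black-Scholes call with hypothetical parameters, not MatLogica code); every array operation vectorizes naturally across the simulated paths.

```python
# Generic Monte-Carlo example of a workload that vectorizes well
# (textbook Black-Scholes call; all parameter values are hypothetical).
import numpy as np

def mc_call_price(s0=100.0, k=105.0, r=0.01, sigma=0.2, t=1.0, n_paths=1_000_000):
    rng = np.random.default_rng(42)
    z = rng.standard_normal(n_paths)             # one normal draw per path
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)             # payoff across all paths at once
    return np.exp(-r * t) * payoff.mean()        # discounted average

print(mc_call_price())
```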
Calculating derivatives is essential in finance, machine learning, and numerous other scientific and engineering fields. Our approach enables AAD calculations in legacy code, where alternatives require extensive effort and changes to the source code.
We are actively researching GPU applications of our technology and expect to introduce the first GPU-compatible release in late 2022/early 2023. It will enable object-oriented programming (in C++ or Python) on a GPU without complicating the infrastructure.