Events and Conferences
December 7, 2021
QuantMinds 2021 - held in Barcelona
"AAD integration strategies for top performance and ease of use" by Dmitri Goloubentsev of MatLogica, as presented at QuantMinds in Barcelona on 7 December 2021.
November 17, 2021
WBS 17th Edition
A recording from the WBS 17th Edition with Dmitri Goloubentsev on using MatLogica's AADC for American Monte Carlo option pricing, presenting an approach to efficiently implement adjoint differentiation for Longstaff-Schwartz.
June 4, 2021
SIAM Conference on Financial Mathematics and Engineering (FM21) “New HPC Paradigm for Object-Oriented Languages”
Based on our results with QuantLib and ORE, we demonstrated how MatLogica’s library works and introduced the idea of integration complexity.
May 18, 2021
‘Coffee chat’ with Pete Baker (Intel).
Leaders from Intel, MatLogica, and Quantifi sat down to discuss how Intel Xeon Scalable processors and Intel software help improve the performance of financial models that analyze financial risk and detect fraud effectively.
March 22, 2021
The Quantitative Finance Conference Spring Edition (online) “AAD Integration Strategies”
In this presentation, we introduced MatLogica’s approach to achieving top performance and demonstrated the integration process on a well-known open-source quant library, QuantLib, yielding a 150x performance improvement for xVA calculations on a single core.
February 12, 2021
C++ London. Supercharging HPC for Object-Oriented Languages
Dmitri Goloubentsev introduced MatLogica’s technique for dramatically boosting the performance of repetitive calculations.
August 11, 2020
23rd European Workshop on Automatic Differentiation
The First Virtual, Worldwide Workshop on Automatic Differentiation.
March 21, 2020
Quant Summit Europe Risk.net Events
December 11, 2019
Intel Software Development Workshop for Enterprise, HPC and AI
November 15, 2019
Presented “Breaking the Primal Barrier” at Quant Insights: AI, Machine Learning and Risk, London.
October 15, 2019
Presented the idea behind the AAD Compiler at the 15th WBS conference (xVA/AAD stream) in Rome.
Adjoint Differentiation for generic matrix functions
No doubt, AAD is amazing. However, implementing it in practice involves many subtleties. For instance, how do you handle operations that require a singular value decomposition (SVD)? Our researchers have found an elegant solution to this problem.
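MatLogica's SVD solution itself is not shown here, but the flavor of the problem (matrix-valued operations need adjoint rules of their own, not element-wise ones) can be illustrated with the textbook reverse-mode rule for matrix inverse. For C = inv(A), the adjoint of A is -Cᵀ C̄ Cᵀ. A minimal pure-Python sketch for the 2x2 case, with all helper names ours rather than MatLogica's, checked against finite differences:

```python
def mat2_inv(A):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv_adjoint(A, Cbar):
    """Reverse-mode rule for C = inv(A):  Abar = -C^T @ Cbar @ C^T."""
    Ct = transpose2(mat2_inv(A))
    M = matmul2(matmul2(Ct, Cbar), Ct)
    return [[-M[i][j] for j in range(2)] for i in range(2)]

# Check dL/dA against finite differences, where L = sum of entries
# of inv(A), i.e. the output adjoint Cbar is all ones.
A = [[3.0, 1.0], [2.0, 4.0]]
Cbar = [[1.0, 1.0], [1.0, 1.0]]
Abar = inv_adjoint(A, Cbar)

eps = 1e-6
L0 = sum(sum(row) for row in mat2_inv(A))
for i in range(2):
    for j in range(2):
        Ap = [row[:] for row in A]
        Ap[i][j] += eps
        fd = (sum(sum(row) for row in mat2_inv(Ap)) - L0) / eps
        assert abs(fd - Abar[i][j]) < 1e-4
```

The same pattern generalizes: each matrix operation (SVD included) contributes one adjoint rule, and the reverse sweep chains them together.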
More Than a Thousand-fold Speedup for xVA Pricing Calculations with Intel® Xeon® Scalable Processors
Intel-led white paper demonstrating up to a 1770x performance increase for xVA pricing (and 830x for xVA risks!) on Intel processors when using MatLogica AADC. The code is open source and available on GitHub.
A New Approach to Parallel Computing Using Automatic Differentiation: Getting Top Performance on Modern Multicore Systems
A paper in Parallel Universe Magazine No. 40 featuring a new approach that turns object-oriented, single-threaded, scalar code into AVX2/AVX512-vectorized, multi-threaded, thread-safe lambda functions with no runtime penalty.
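The record-once, replay-many idea behind such a transformation can be sketched in a few lines. This toy (all names hypothetical; it interprets the tape in Python rather than emitting vectorized machine code as AADC does) records a scalar function once via operator overloading, then replays the recorded kernel over a batch of scenarios:

```python
import operator

class Tape:
    """Holds a recorded sequence of arithmetic operations."""
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.ops = []  # op k produces value slot n_inputs + k

    def inputs(self):
        return [Tracer(self, i) for i in range(self.n_inputs)]

class Tracer:
    """Overloads arithmetic to record operations instead of computing."""
    def __init__(self, tape, slot):
        self.tape, self.slot = tape, slot

    def _record(self, fn, other):
        t = self.tape
        src = ('slot', other.slot) if isinstance(other, Tracer) else ('const', other)
        out = Tracer(t, t.n_inputs + len(t.ops))
        t.ops.append((fn, self.slot, src))
        return out

    def __add__(self, other): return self._record(operator.add, other)
    def __sub__(self, other): return self._record(operator.sub, other)
    def __mul__(self, other): return self._record(operator.mul, other)

def replay(tape, out_slot, scenario):
    """Re-execute the recorded kernel for one input scenario."""
    vals = list(scenario)
    for fn, a, (kind, b) in tape.ops:
        rhs = vals[b] if kind == 'slot' else b
        vals.append(fn(vals[a], rhs))
    return vals[out_slot]

# Record an arbitrary toy function once...
tape = Tape(2)
s, k = tape.inputs()
out = (s - k) * (s - k) + s * 0.5
# ...then replay it over many scenarios; a real system would compile
# the tape to SIMD/multi-threaded code instead of interpreting it.
results = [replay(tape, out.slot, sc) for sc in [(10.0, 8.0), (7.0, 9.0)]]
```

Because the recorded tape is a flat sequence of primitive operations with no branches or virtual calls, it is exactly the form that lends itself to vectorization across scenarios.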
Open Source Benchmark
Open-source benchmark demonstrating a leap in performance for valuation and AAD risk calculations using AADC on Intel Xeon Scalable CPUs.
AAD and calibration
Remarks on stochastic automatic adjoint differentiation and calibration of financial models.
AAD: Breaking the Primal Barrier
Dmitri Goloubentsev and Evgeny Lakshtanov wrote an article for Wilmott Magazine on how merging Code Transformation and Operator Overloading techniques leads to a major performance boost.
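The operator-overloading half of that combination can be illustrated with a minimal reverse-mode sketch (ours, not the article's implementation): overloaded arithmetic builds a computation graph during one forward pass, and a single reverse sweep then yields all partial derivatives at once, which is the key economy behind AAD.

```python
class Var:
    """Scalar with reverse-mode derivative tracking."""
    def __init__(self, value, parents=()):
        self.value = value
        self.adj = 0.0                 # adjoint, filled in by backward()
        self.parents = parents         # list of (parent, local derivative)

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        """Propagate adjoints from this output back to all inputs."""
        order, seen = [], set()
        def topo(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    topo(p)
                order.append(v)
        topo(self)
        self.adj = 1.0
        for v in reversed(order):      # reverse topological order
            for p, d in v.parents:
                p.adj += v.adj * d

# One forward pass computes the value; one reverse sweep yields
# every partial derivative simultaneously.
x, y = Var(3.0), Var(4.0)
f = x * y + x                          # f = xy + x
f.backward()
# df/dx = y + 1 = 5,  df/dy = x = 3
```

The article's point is that naive overloading like this pays a heavy interpretation cost at runtime; combining it with code transformation removes that overhead.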