Abstract
Generative models based on dynamic transport have recently led to significant advances in unsupervised learning. At the mathematical level, these models are designed around the construction of a map between two probability distributions that transforms samples of the former into samples of the latter. Although these methods were initially introduced in the context of image generation, they have since found a wide range of applications, notably in scientific computing, where they offer promising ways to revisit complex problems once deemed intractable due to the curse of dimensionality.
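As a concrete illustration of this construction (a standard formulation, included here as a sketch rather than a statement of the specific methods discussed in the talk), one seeks a map $T$ that pushes a simple base distribution $\rho_0$, for example a Gaussian, onto the target distribution $\rho_1$:
\[
X_0 \sim \rho_0 \;\Longrightarrow\; X_1 = T(X_0) \sim \rho_1, \qquad \text{i.e.}\quad T_\# \rho_0 = \rho_1,
\]
so that generating a new sample from $\rho_1$ amounts to drawing $X_0$ from the base distribution and evaluating $T$.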
In this presentation, I will discuss the mathematical foundations of generative models based on flows and diffusions, and show how a better understanding of their inner workings can help improve their design. These results indicate how to structure the transport so as to best reach complex target distributions while maintaining computational efficiency, at both the learning and sampling stages.
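For orientation (these are standard defining equations for the two model classes, written in the usual conventions rather than as a summary of the talk's specific results), flow-based models transport samples by integrating an ordinary differential equation with a learned velocity field $b_t$, while diffusion-based models inject noise through a stochastic differential equation involving the score $s_t = \nabla \log \rho_t$:
\[
\frac{dX_t}{dt} = b_t(X_t), \qquad dX_t = \big[\,b_t(X_t) + \epsilon_t\, s_t(X_t)\,\big]\,dt + \sqrt{2\epsilon_t}\; dW_t,
\]
where $W_t$ is a Brownian motion and $\epsilon_t \ge 0$ tunes the noise level; both dynamics are built so that the law of $X_t$ follows a prescribed path $\rho_t$ connecting $\rho_0$ to $\rho_1$.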
I will also discuss applications of generative AI in scientific computing, in particular to Monte Carlo sampling, with examples from statistical mechanics and Bayesian inference, and to probabilistic forecasting, with examples from fluid dynamics and the atmospheric and oceanic sciences.
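To make the Monte Carlo connection concrete (a common usage pattern, offered as an illustration and not as a description of the particular applications presented), a learned model with tractable density $\hat\rho$ can act as a proposal for a target density $\rho$ known up to a normalization constant, with any mismatch corrected by self-normalized importance weights:
\[
\mathbb{E}_{\rho}[f] \;\approx\; \frac{\sum_{i=1}^N w_i\, f(x_i)}{\sum_{i=1}^N w_i}, \qquad w_i = \frac{\rho(x_i)}{\hat\rho(x_i)}, \quad x_i \sim \hat\rho \ \text{i.i.d.},
\]
so that imperfections in the learned transport reduce statistical efficiency but do not bias the estimate.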