Salle 5, Site Marcelin Berthelot
Open to all

Abstract

We study adaptive gradient descent algorithms based on Langevin dynamics (SGLD) for solving optimization and inference problems. These algorithms, inspired by stochastic analysis, perform gradient descent with added exogenous Gaussian noise in order to escape local minima and saddle points, in the spirit of simulated annealing. In contrast to the classical Langevin stochastic differential equation, we focus on the case where the exogenous noise is adaptive, i.e. non-constant and dependent on the current position of the procedure.

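To make the update rule concrete, here is a minimal sketch of an adaptive Langevin-type iteration of the kind described above: a gradient step plus Gaussian noise whose amplitude depends on the current position. The function names, the toy double-well objective, and the particular noise scale are illustrative assumptions, not the speaker's actual scheme.

```python
import numpy as np

def adaptive_sgld(grad, sigma, x0, step=1e-2, n_iters=10_000, rng=None):
    """Gradient descent with position-dependent exogenous Gaussian noise.

    grad  : callable returning the (possibly stochastic) gradient at x
    sigma : callable returning the adaptive noise scale at x
    x0    : initial point (NumPy array)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        # Euler-Maruyama-style update: drift term plus exogenous noise
        # whose amplitude sigma(x) depends on the current position.
        x = x - step * grad(x) + np.sqrt(2.0 * step) * sigma(x) * noise
    return x

# Toy usage: a double-well objective where plain gradient descent
# can get stuck in a local minimum depending on the starting point.
if __name__ == "__main__":
    f_grad = lambda x: 4 * x**3 - 4 * x          # gradient of (x^2 - 1)^2
    sigma = lambda x: 0.3 / (1.0 + np.abs(x))    # hypothetical adaptive noise scale
    print(adaptive_sgld(f_grad, sigma, x0=np.array([2.0])))
```
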
In the first part of the talk, we present convergence proofs for these algorithms. In the second part, we give examples of their application to optimization problems arising in machine learning and numerical probability.

Speaker(s)

Pierre Bras

Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne University
