Amphithéâtre Marguerite de Navarre, Site Marcelin Berthelot
Open to all

We now focus on parametric models defined by a Gibbs energy. The aim is to estimate the parameters for which the model best approximates the distribution of the samples in the training dataset.
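To fix notation (a sketch; the symbols U_θ for the energy and Z_θ for the partition function are our choice, not taken from the summary), such a model has density

\[
p_\theta(x) \;=\; \frac{e^{-U_\theta(x)}}{Z_\theta},
\qquad
Z_\theta \;=\; \int e^{-U_\theta(x)}\, dx ,
\]

where Z_θ normalizes the density and is, in general, intractable to compute exactly.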

The distribution to be estimated is approximated by a distribution with a parameterized Gibbs energy. The parameters are selected by minimizing the Kullback-Leibler divergence or, equivalently, by maximizing the log-likelihood, as made explicit below. Gradient descent then requires computing the gradient of the log-likelihood, which involves the partition function. Estimating this term with a Monte Carlo algorithm is time-consuming, because it requires sampling from the parameterized distribution at each step of the optimization. The optimization can also be rewritten as an entropy maximization under constraints given by moment values. We study an exponential model corresponding to the φ⁴ model of statistical physics, which is a continuous approximation of the Ising model.
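The equivalence and the structure of the gradient can be sketched as follows, with p denoting the data distribution (a standard computation, in the notation of the Gibbs model above). Since KL(p‖p_θ) = E_p[log p] − E_p[log p_θ] and the first term does not depend on θ, minimizing the divergence amounts to maximizing the expected log-likelihood E_p[log p_θ]. Writing log p_θ(x) = −U_θ(x) − log Z_θ and using ∇_θ log Z_θ = −E_{p_θ}[∇_θ U_θ(x)] gives

\[
\nabla_\theta\, \mathbb{E}_p[\log p_\theta(x)]
\;=\; -\,\mathbb{E}_p[\nabla_\theta U_\theta(x)]
\;+\; \mathbb{E}_{p_\theta}[\nabla_\theta U_\theta(x)] .
\]

The first expectation is an empirical average over the training samples; the second is an average under the model itself, and it is this term that forces us to sample from p_θ at every gradient step.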
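The maximum-entropy formulation can be stated as follows (a standard form; the moment functions φ_k are our notation). Among all distributions q matching prescribed moments, the differential entropy is maximized by an exponential Gibbs distribution:

\[
\max_q \; -\!\int q(x)\, \log q(x)\, dx
\quad \text{subject to} \quad
\mathbb{E}_q[\phi_k(x)] = \mathbb{E}_p[\phi_k(x)] ,
\]

whose solution is p_θ(x) = Z_θ^{-1} exp(−Σ_k θ_k φ_k(x)), the parameters θ_k arising as Lagrange multipliers of the moment constraints.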
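For reference, one standard lattice form of the φ⁴ energy (the constants a, b, β below are generic placeholders, not values from the lecture) replaces the binary Ising spins by continuous variables x_i ∈ ℝ:

\[
U(x) \;=\; -\,\beta \sum_{\langle i,j \rangle} x_i x_j \;+\; \sum_i \bigl( a\, x_i^4 - b\, x_i^2 \bigr),
\qquad a, b > 0 ,
\]

where the nearest-neighbor coupling plays the role of the Ising interaction and the double-well potential concentrates each x_i near ±√(b/(2a)), so that the Ising model is recovered as the wells sharpen.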
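The sampling bottleneck can be made concrete with a minimal one-dimensional sketch (all names and numerical values here are hypothetical illustrations, not the lecture's algorithm): an exponential model with moments x² and x⁴, i.e. a single-site φ⁴ potential, fitted by gradient ascent on the log-likelihood, with unadjusted Langevin dynamics standing in for the Monte Carlo sampler.

    import numpy as np

    rng = np.random.default_rng(0)

    def phi(x):
        # Moment functions of the exponential model: the energy is
        # U_theta(x) = theta[0]*x**2 + theta[1]*x**4, a single-site
        # version of the phi^4 double-well potential.
        return np.stack([x**2, x**4], axis=-1)

    def grad_U(x, theta):
        # Derivative of U_theta with respect to x, used by the sampler.
        return 2 * theta[0] * x + 4 * theta[1] * x**3

    def langevin_sample(theta, n=5000, steps=300, eps=1e-2):
        # Unadjusted Langevin dynamics: approximate samples from
        # p_theta(x) proportional to exp(-U_theta(x)).  This inner
        # loop is the Monte Carlo cost paid at every gradient step.
        x = rng.standard_normal(n)
        for _ in range(steps):
            x = x - eps * grad_U(x, theta) \
                + np.sqrt(2 * eps) * rng.standard_normal(n)
        return x

    # Synthetic "training samples" drawn from a double-well potential.
    theta_true = np.array([-1.0, 0.5])
    data_moments = phi(langevin_sample(theta_true)).mean(axis=0)

    # Gradient ascent on the log-likelihood: the gradient is the gap
    # between model moments and data moments, so every step needs
    # fresh samples from the current model.
    theta = np.array([0.0, 0.5])
    for _ in range(100):
        model_moments = phi(langevin_sample(theta)).mean(axis=0)
        theta += 0.05 * (model_moments - data_moments)
        theta[1] = max(theta[1], 0.05)  # keep the energy confining

    print("true:", theta_true, "estimated:", theta)

At the fixed point of this ascent the model moments match the data moments, which is exactly the moment-constraint view of the maximum-entropy formulation above.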