19 December 2024, 11:00–12:15
Toulouse
Room Auditorium 3
MAD-Stat. Seminar
Abstract
Gradient descent flows in a space of probability measures can appear either directly in optimisation problems (where one wants to minimise some functional depending on probability measures, as in variational inference), or as an interpretation of some interacting particle models (for which the evolution of the density of particles turns out to be the gradient descent flow of a well-chosen functional). The first part of the talk will be a pedagogical introduction to these gradient descent flows, interacting particle systems and their associated non-linear PDEs, and entropy methods for quantitative long-time convergence of diffusion processes. In the second part, we will present recent joint work with Julien Reygner which establishes convergence rates towards local minimisers using functional inequalities that generalise the classical logarithmic Sobolev inequality. The method will be illustrated on the granular media equation in a double-well potential, which, below a critical temperature, admits several stationary solutions (so that convergence cannot be global).
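
To fix notation, here is a minimal sketch of the standard formulation alluded to above; the symbols (a noise strength \sigma, a confining potential V, an interaction kernel W) are generic choices rather than the speaker's. The granular media equation reads

  \partial_t \rho_t = \nabla \cdot \bigl( \rho_t \, \nabla ( \sigma \log \rho_t + V + W * \rho_t ) \bigr),

and it is the gradient descent flow, for the quadratic Wasserstein metric, of the free energy

  \mathcal{F}(\rho) = \sigma \int \rho \log \rho \, \mathrm{d}x + \int V \, \mathrm{d}\rho + \frac{1}{2} \iint W(x - y) \, \rho(\mathrm{d}x) \, \rho(\mathrm{d}y).

With a double-well confinement such as V(x) = x^4/4 - x^2/2 and, for instance, a quadratic interaction W, this free energy has several local minimisers once \sigma falls below a critical value, each corresponding to a stationary solution of the equation.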