Alain Durmus, Télécom ParisTech and École Normale Supérieure Paris-Saclay

Marcelo Pereyra, Heriot-Watt University, Edinburgh

The complexity and sheer size of modern datasets, to which increasingly demanding questions are posed, give rise to major challenges. Traditional simulation methods often scale poorly with data size and model complexity, and thus fail for the most complex of modern problems.

We consider the problem of sampling from a log-concave distribution. Many problems in machine learning fall into this framework,

such as linear ill-posed inverse problems with sparsity-inducing priors, or large-scale Bayesian binary regression.
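As a concrete illustration of such a framework (the names and data here are hypothetical, not part of the lecture material), the posterior in Bayesian binary regression with a Gaussian prior is log-concave, and its log-density and gradient are the only ingredients a gradient-based sampler needs:

```python
import numpy as np

def log_post(theta, X, y, tau=1.0):
    # Log-posterior (up to an additive constant) for Bayesian logistic
    # regression with an isotropic Gaussian prior N(0, tau^2 I).
    # The log-likelihood is concave and the log-prior is strongly concave,
    # so the log-posterior is log-concave in theta.
    z = X @ theta
    # sum_i [ y_i z_i - log(1 + exp(z_i)) ], computed stably via logaddexp
    loglik = np.sum(y * z - np.logaddexp(0.0, z))
    logprior = -0.5 * np.dot(theta, theta) / tau**2
    return loglik + logprior

def grad_log_post(theta, X, y, tau=1.0):
    # Gradient of the log-posterior: X^T (y - sigmoid(X theta)) - theta / tau^2
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (y - p) - theta / tau**2
```

Any Langevin-type method below can then be driven by `grad_log_post` alone, without evaluating normalizing constants.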

The purpose of this lecture is to explain how we can use ideas that have proven very useful in the machine learning community for solving large-scale optimization problems to design efficient sampling algorithms.

Most of the efficient algorithms known so far may be seen as variants of the gradient descent algorithm,

most often coupled with « partial updates » (coordinate descent algorithms). This, of course, suggests studying methods derived from the Euler discretization of the Langevin diffusion. Partial updates may be interpreted in this context as « Gibbs steps ». This algorithm may be generalized to the non-smooth case by « regularizing » the objective function. The Moreau-Yosida inf-convolution (envelope) is an appropriate candidate in such a case.
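A minimal sketch of both ideas, assuming a target pi with gradient of log pi available (smooth case), or pi proportional to exp(-f - g) with g non-smooth but proximable (regularized case); step sizes, names, and the soft-thresholding example are illustrative choices, not the lecture's specific parameters:

```python
import numpy as np

def ula(grad_log_pi, theta0, step, n_iter, rng):
    # Unadjusted Langevin Algorithm: Euler discretization of the Langevin
    # diffusion  d theta_t = grad log pi(theta_t) dt + sqrt(2) dB_t.
    theta = np.atleast_1d(np.array(theta0, dtype=float))
    samples = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        noise = rng.normal(size=theta.size)
        theta = theta + step * grad_log_pi(theta) + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples

def myula(grad_log_f, prox_g, theta0, step, lam, n_iter, rng):
    # Moreau-Yosida regularized variant for pi ∝ exp(-f - g), g non-smooth:
    # the gradient of g is replaced by the gradient of its Moreau-Yosida
    # envelope, (theta - prox_{lam g}(theta)) / lam, which is 1/lam-Lipschitz.
    theta = np.atleast_1d(np.array(theta0, dtype=float))
    samples = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        grad = grad_log_f(theta) - (theta - prox_g(theta, lam)) / lam
        noise = rng.normal(size=theta.size)
        theta = theta + step * grad + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples
```

For instance, with `grad_log_pi = lambda t: -t` (standard Gaussian target), `ula` produces approximately Gaussian samples, with a bias controlled by the step size; a « Gibbs step » version would update one coordinate block at a time with the same drift.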

We will prove convergence results for these algorithms, with explicit convergence bounds both in Wasserstein distance and in total variation. Numerical illustrations will be presented (on the computation of Bayes factors for model choice, Bayesian analysis of high-dimensional regression, and aggregation of estimators) to illustrate our results.

Location: Seminar Room 1, Newton Institute