BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Langevin MCMC: theory and methods - Eric François Moulines (Tél
 écom ParisTech)
DTSTART:20170707T080000Z
DTEND:20170707T084500Z
UID:TALK73182@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:<span><span>Nicolas Brosse\, Ecole Polytechnique\, Paris<br>Al
 ain Durmus\, Telecom ParisTech and Ecole Normale Sup&eacute\;rieure Paris-
 Saclay<br>Marcelo Pereyra\, Heriot-Watt University\, Edinburgh<br><br><br
 >The complexity and sheer size of modern datasets\, to which ever more dem
 anding questions are posed\, give rise to major challenges. Traditi
 onal simulation methods often scale poorly with data size and model comple
 xity and thus fail for the most complex of modern problems.<br></span>We c
 onsider the problem of sampling from a log-concave distribution. Ma
 ny problems in machine learning fall into this framework\, <br>such as lin
 ear ill-posed inverse problems with sparsity-inducing priors\, or large sc
 ale Bayesian binary regression. <br><br><br><br>The purpose of this lectur
 e is to explain how ideas that have proven very useful in the machine lea
 rning community for solving large-scale optimization problems can be used
  to<br>design efficient sampling algorithms. <br>Most of the efficient al
 gorithms known so far may be seen as variants of gradient descent algorit
 hms\, <b
 r>most often coupled with &laquo\; partial updates &raquo\; (coordinates d
 escent algorithms). This\, of course\, suggests studying methods derived f
 rom Euler discretization of the Langevin diffusion. Partial updates may be
  interpreted in this context as &laquo\; Gibbs steps &raquo\;. This algori
 thm may be generalized to the non-smooth case by &laquo\; regularizing &ra
 quo\; the objective function. The Moreau-Yosida inf-convolution algorithm
  is an appropriate candidate in such a case.<br><span><br>We will prove co
 nvergence results fo
 r these algorithms with explicit convergence bounds both in Wasserstein di
 stance and in total variation. Numerical illustrations will be presented (
 on the computation of Bayes factor for model choice\, Bayesian&nbsp\;analy
 sis of high-dimensional regression\, aggregation of estimators) to illustr
 ate our results.</span></span>
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
