Stochastic gradient with least-squares control variates
- 👤 Speaker: Fabio Nobile (EPFL - Ecole Polytechnique Fédérale de Lausanne)
- 📅 Date & Time: Thursday 03 July 2025, 10:30 - 11:30
- 📍 Venue: Seminar Room 2, Newton Institute
Abstract
The stochastic gradient (SG) method is a widely used approach for solving stochastic optimization problems, but its convergence is typically slow. Existing variance reduction techniques, such as SAGA, improve convergence by leveraging stored gradient information; however, they are restricted to settings where the objective functional is a finite sum, and their performance degrades when the number of terms in the sum is large. In this work, we propose a novel approach that is best suited when the objective is given by an expectation over random variables with a continuous probability distribution. Our method constructs a control variate by fitting a linear model to past gradient evaluations using weighted discrete least-squares, effectively reducing variance while preserving computational efficiency. We establish theoretical sublinear convergence guarantees and demonstrate the method’s effectiveness through numerical experiments on random PDE-constrained optimization problems.
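To make the construction in the abstract concrete, here is a minimal Python sketch of the general idea: at each iteration, a linear model of the gradient as a function of the random input is fitted to stored past gradient samples by weighted least squares, and its exact mean is added back so the corrected estimator stays unbiased. Everything specific here (the toy quadratic objective in `stoch_grad`, the assumption that the mean of `xi` is known, the exponential staleness weights, and the stepsize) is an illustrative assumption, not the speaker's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

d, p = 5, 3                         # parameter dimension, random-input dimension
A = rng.standard_normal((d, p))

def stoch_grad(theta, xi):
    # Toy unbiased gradient sample: gradient in theta of 0.5*||theta - A xi||^2.
    return theta - A @ xi

theta = np.ones(d)
mean_xi = np.zeros(p)               # E[xi] assumed known (here xi ~ N(0, I))
xis, grads, weights = [], [], []    # stored past evaluations and their weights

for k in range(200):
    xi = rng.standard_normal(p)
    g_raw = stoch_grad(theta, xi)
    g = g_raw

    if len(xis) > p:                # enough samples to fit the linear model
        # Weighted discrete least-squares fit of h(xi) = c + B xi to past gradients.
        X = np.hstack([np.ones((len(xis), 1)), np.array(xis)])
        w = np.sqrt(np.array(weights))[:, None]
        coef, *_ = np.linalg.lstsq(w * X, w * np.array(grads), rcond=None)
        c, B = coef[0], coef[1:].T
        # Control-variate correction: subtract h(xi) and add back its exact
        # mean under the law of xi, so the estimator remains unbiased
        # regardless of how well the linear model fits.
        g = g_raw - (c + B @ xi) + (c + B @ mean_xi)

    xis.append(xi)
    grads.append(g_raw)
    weights = [0.95 * w_ for w_ in weights] + [1.0]  # downweight stale samples

    theta -= 1.0 / (10 + k) * g     # Robbins-Monro-type stepsize
```

The key property is that the correction term has zero mean by construction, so unbiasedness is preserved for any fit; variance is reduced to the extent that the linear model is correlated with the true gradient at the current iterate.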
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 2, Newton Institute