BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Optimal sampling: from linear to nonlinear approximation - Anthony
  Nouy (Université de Nantes)
DTSTART:20240716T083000Z
DTEND:20240716T091000Z
UID:TALK219118@talks.cam.ac.uk
DESCRIPTION:Abstract: We consider the approximation of functions from p
 oint evaluations\, using linear or nonlinear approximation tools. For l
 inear approximation\, recent results show that weighted least-squares p
 rojections yield quasi-optimal approximations in $L^2$ (in expectation)
  with a near-optimal sampling budget. This can be achieved by drawing i
 .i.d. samples from suitable distributions (depending on the linear appr
 oximation tool) and subsampling methods [1\,2].\nIn the first part of t
 his talk\, we review different strategies based on i.i.d. sampling and
  present alternative strategies based on repulsive point processes (or v
 olume sampling) that perform the same task with a reduced sampling comp
 lexity [3].\nIn the second part\, we show how these methods can be used
  to approximate functions with nonlinear approximation tools by couplin
 g iterative algorithms on manifolds with optimal sampling methods for t
 he (quasi-)projection onto successive linear spaces [4]. The proposed a
 lgorithm can be interpreted as a stochastic gradient method using optim
 al sampling\, with provable convergence properties under classical conv
 exity and smoothness assumptions. It can also be interpreted as a natur
 al gradient descent on a manifold embedded in $L^2$\, which appears to
  be a Newton-type algorithm when written in terms of the coordinates of
  a parametrized manifold. In the case where we only have access to gener
 ating systems of successive linear spaces\, iterative methods can be us
 ed to obtain an approximation of the optimal distributions [5].\nFinall
 y\, we return to linear approximation and present a new approach for ob
 taining quasi-optimal approximations for functions in reproducing kerne
 l Hilbert spaces\, using a kernel-based projection and volume sampling
  [6].\nThese are joint works with R. Gruhlke\, B. Michel\, and P. Trunsc
 hke.\nReferences:\n[1] M. Sonnleitner and M. Ullrich. On the power of i
 id information for linear approximation. Journal of Applied and Numeric
 al Analysis\, 1(1):88–126\, Dec. 2023.\n[2] C. Haberstich\, A. Nouy\, a
 nd G. Perrin. Boosted optimal weighted least-squares. Mathematics of Co
 mputation\, 91(335):1281–1315\, 2022.\n[3] A. Nouy and B. Michel. Weigh
 ted least-squares approximation with determinantal point processes and
  generalized volume sampling. arXiv:2312.14057.\n[4] R. Gruhlke\, A. Nou
 y\, and P. Trunschke. Optimal sampling for stochastic and natural gradi
 ent descent. arXiv:2402.03113.\n[5] P. Trunschke and A. Nouy. Optimal s
 ampling for least squares approximation with general dictionaries. arXi
 v:2407.07814.\n[6] P. Trunschke and A. Nouy. Almost-sure quasi-optimal
  approximation in reproducing kernel Hilbert spaces. arXiv:2407.06674.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
