BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Towards a non-asymptotic understanding of diffusion-based generati
 ve models - Yuting Wei (University of Pennsylvania)
DTSTART:20240704T093000Z
DTEND:20240704T110000Z
UID:TALK218740@talks.cam.ac.uk
DESCRIPTION:Diffusion models\, which convert noise into new data instances
  by learning to reverse a Markov diffusion process\, have become a corners
 tone in contemporary generative modeling. While their practical power has 
 now been widely recognized\, the theoretical underpinnings remain far from
  mature. In this talk\, I will introduce a suite of non-asymptotic theory 
 towards understanding the data generation process of diffusion models in d
 iscrete time\, assuming access to reliable estimates of the (Stein) score 
 functions. For a popular deterministic sampler (based on the probability f
 low ODE)\, we establish a convergence rate proportional to $1/T$ (with $T$
  the total number of steps)\, improving upon past results\; for another ma
 instream stochastic sampler (i.e.\, a type of denoising diffusion prob
 abilistic model (DDPM))\, we derive a convergence rate proportional to $1/
 \\sqrt{T}$\, matching the state-of-the-art theory. We will also discuss no
 vel training-free algorithms to accelerate these samplers. We design two a
 ccelerated variants\, improving the convergence to $1/T^2$ for the ODE-bas
 ed sampler and $1/T$ for the DDPM-type sampler\, which might be of indepen
 dent theoretical and empirical interest.
LOCATION:External
END:VEVENT
END:VCALENDAR
