Towards a non-asymptotic understanding of diffusion-based generative models
- Speaker: Yuting Wei (University of Pennsylvania)
- Date & Time: Thursday 04 July 2024, 10:30 - 12:00
- Venue: External
Abstract
Diffusion models, which convert noise into new data instances by learning to reverse a Markov diffusion process, have become a cornerstone of contemporary generative modeling. While their practical power is now widely recognized, the theoretical underpinnings remain far from mature. In this talk, I will introduce a suite of non-asymptotic theory towards understanding the data generation process of diffusion models in discrete time, assuming access to reliable estimates of the (Stein) score functions. For a popular deterministic sampler (based on the probability flow ODE), we establish a convergence rate proportional to $1/T$ (with $T$ the total number of steps), improving upon past results; for another mainstream stochastic sampler (i.e., a variant of the denoising diffusion probabilistic model (DDPM)), we derive a convergence rate proportional to $1/\sqrt{T}$, matching the state-of-the-art theory. We will also discuss novel training-free algorithms to accelerate these samplers. We design two accelerated variants, improving the convergence to $1/T^2$ for the ODE-based sampler and $1/T$ for the DDPM-type sampler, which might be of independent theoretical and empirical interest.
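To make the setting concrete, here is a minimal sketch (not the speaker's construction) of a DDPM-type reverse sampler run with the *exact* score function, sidestepping score estimation entirely. For standard normal data $x_0 \sim \mathcal{N}(0,1)$, every forward marginal is also $\mathcal{N}(0,1)$, so the true (Stein) score is simply $s(x,t) = -x$; the step count and noise schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200                                # total number of discretization steps
betas = np.linspace(1e-4, 0.05, T)     # linear noise schedule (assumed, not from the talk)

def score(x, t):
    """Exact score of the N(0,1) marginal at every step t (valid only for N(0,1) data)."""
    return -x

n = 20000
x = rng.standard_normal(n)             # initialize from pure noise, x_T ~ N(0, 1)

# DDPM-type reverse update with the true score plugged in:
#   x_{t-1} = (x_t + beta_t * s(x_t, t)) / sqrt(1 - beta_t) + sqrt(beta_t) * z
for t in reversed(range(T)):
    z = rng.standard_normal(n) if t > 0 else 0.0   # no noise injected at the final step
    x = (x + betas[t] * score(x, t)) / np.sqrt(1.0 - betas[t]) + np.sqrt(betas[t]) * z

print(x.mean(), x.var())               # empirical moments should be close to 0 and 1
```

With the exact score, the generated samples match the target $\mathcal{N}(0,1)$ up to discretization error; the talk's results quantify how this error decays in $T$ (at rate $1/\sqrt{T}$ for the DDPM-type sampler) under score-estimation error.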
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- External
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences