On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates
- Speaker: Ying Zhang (Hong Kong University of Science and Technology (Guangzhou))
- Date & Time: Thursday 18 July 2024, 10:00-11:00
- Venue: External
Abstract
Diffusion-based generative models are a recent class of generative models achieving state-of-the-art performance in many data generation tasks. These models use a forward process to progressively corrupt samples from a target data distribution with noise, and then learn to reverse this process to generate new samples. In this talk, we provide full theoretical guarantees for the convergence behaviour of such models. We demonstrate the power of our approach via a motivating example: sampling from a Gaussian distribution with unknown mean. In this case, explicit estimates are provided for the associated optimization problem, i.e. score approximation, and these are combined with the corresponding sampling estimates. As a result, we obtain the best known estimates for the Wasserstein distance of order two between the data distribution and our sampling algorithm. Beyond the motivating example, we present our results for sampling from strongly log-concave distributions under an $L^2$-accurate score estimation assumption, which is formed under an expectation with respect to a stochastic optimizer and our novel auxiliary process that uses only known information.
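The forward/reverse mechanism described above can be illustrated on the Gaussian motivating example. The sketch below is not the speaker's construction: it assumes an Ornstein-Uhlenbeck forward process $dX_t = -X_t\,dt + \sqrt{2}\,dB_t$ with $X_0 \sim N(\mu, 1)$, so that $X_t \sim N(\mu e^{-t}, 1)$, and it plugs in the exact score $\nabla \log p_t(x) = \mu e^{-t} - x$ rather than a learned one (in the talk the score is approximated via a stochastic optimizer). The reverse SDE is then discretized with Euler-Maruyama.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0                       # mean of the target N(mu, 1); used here only to form the exact score
T, n_steps, n_samples = 5.0, 500, 20000
dt = T / n_steps

def score(x, t):
    # Exact score of the OU marginal: X_t ~ N(mu * exp(-t), 1),
    # so grad log p_t(x) = mu * exp(-t) - x.
    return mu * np.exp(-t) - x

# Reverse-time sampling: start from N(0, 1), which approximates p_T for large T.
y = rng.standard_normal(n_samples)
for k in range(n_steps):
    t = T - k * dt                        # current forward time
    drift = y + 2.0 * score(y, t)         # drift of the time-reversed SDE
    y = y + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)

print(y.mean(), y.std())                  # should be close to mu and 1.0
```

With the exact score, the only sources of error are the finite horizon $T$ and the discretization step, which is what makes explicit Wasserstein-2 estimates tractable in this example.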
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- External
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences