Department of Applied Mathematics and Theoretical Physics

Diffusion-based generative models are a recent class of generative models showing state-of-the-art performance in many data generation tasks. These models use a forward process to progressively corrupt samples from a target data distribution with noise, and then learn to reverse this process to generate new samples. In this talk, we provide full theoretical guarantees for the convergence behaviour of such models. We demonstrate the power of our approach via a motivating example: sampling from a Gaussian distribution with unknown mean. In this case, explicit estimates are provided for the associated optimization problem, i.e. score approximation, and these are combined with the corresponding sampling estimates. As a result, we obtain the best known estimates for the Wasserstein distance of order two between the data distribution and our sampling algorithm. Beyond the motivating example, we present our results for sampling from strongly log-concave distributions under an $L^2$-accurate score estimation assumption, which is formulated as an expectation with respect to a stochastic optimizer and our novel auxiliary process that uses only known information.
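
To illustrate the motivating example: the abstract does not specify the forward process, but a standard choice is the Ornstein–Uhlenbeck dynamics $\mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$. Under that assumption, if $X_0 \sim \mathcal{N}(\mu, I_d)$ then $X_t \sim \mathcal{N}(\mu e^{-t}, I_d)$ for all $t$, so the score of the marginal is linear in $x$:

$$\nabla_x \log p_t(x) = -\bigl(x - \mu e^{-t}\bigr),$$

which is what makes explicit optimization and sampling estimates tractable in this case. The sketch below (illustrative only, not code from the talk; the forward process, time horizon, and step count are all assumptions) plugs this closed-form score into an Euler–Maruyama discretization of the reverse-time SDE, standing in for the learned score network of a general diffusion model:

```python
import numpy as np

# Minimal sketch (illustrative, not the talk's code): sampling from
# N(mu, I) with a score-based reverse SDE. Assumed forward process:
# Ornstein-Uhlenbeck, dX_t = -X_t dt + sqrt(2) dW_t, for which
# p_t = N(mu * exp(-t), I) when p_0 = N(mu, I).

rng = np.random.default_rng(0)

mu = np.array([2.0, -1.0])  # "unknown" mean of the target; fixed here for the demo
d = mu.shape[0]
T = 5.0                     # forward-process time horizon
n_steps = 500
dt = T / n_steps

def score(x, t):
    # Closed-form score of p_t = N(mu * exp(-t), I); in a general
    # diffusion model this would be a learned score network.
    return -(x - mu * np.exp(-t))

# Reverse-time SDE, Euler-Maruyama discretization:
#   dY_s = (Y_s + 2 * score(Y_s, T - s)) ds + sqrt(2) dW_s
n_samples = 10_000
y = rng.standard_normal((n_samples, d))  # start from N(0, I), close to p_T for large T
for k in range(n_steps):
    t = T - k * dt                       # forward time matching reverse step k
    drift = y + 2.0 * score(y, t)
    y = y + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal((n_samples, d))

print("empirical mean:", y.mean(axis=0))  # should be close to mu = [2, -1]
```

Initialising from the stationary law $\mathcal{N}(0, I_d)$ rather than the true marginal $p_T$ is the usual approximation in diffusion sampling; for $T = 5$ the neglected shift $\mu e^{-T}$ is already negligible.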

Further information

Time:

Jul 18th 2024
10:00 to 11:00

Venue:

External

Speaker:

Ying Zhang (Hong Kong University of Science and Technology (Guangzhou))

Series:

Isaac Newton Institute Seminar Series