
O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions

2024-09-27

Gen Li, Yuling Yan


Abstract

Score-based diffusion models, which generate new data by learning to reverse a diffusion process that perturbs data from the target distribution into noise, have achieved remarkable success across various generative tasks. Despite their superior empirical performance, existing theoretical guarantees are often constrained by stringent assumptions or suboptimal convergence rates. In this paper, we establish a fast convergence theory for the denoising diffusion probabilistic model (DDPM), a widely used SDE-based sampler, under minimal assumptions. Our analysis shows that, provided ℓ₂-accurate estimates of the score functions, the total variation distance between the target and generated distributions is upper bounded by O(d/T) (ignoring logarithmic factors), where d is the data dimensionality and T is the number of steps. This result holds for any target distribution with finite first-order moment. Moreover, we show that with careful coefficient design, the convergence rate improves to O(k/T), where k is the intrinsic dimension of the target data distribution. This highlights the ability of DDPM to automatically adapt to unknown low-dimensional structures, a common feature of natural image distributions. These results are achieved through a novel set of analytical tools that provides a fine-grained characterization of how the error propagates at each step of the reverse process.
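To make the object of the analysis concrete, below is a minimal sketch of a score-driven DDPM reverse sampler. It is not the paper's construction: the linear beta schedule, the specific update form, and the standard-Gaussian test target (whose exact score is simply −x at every noise level) are all illustrative assumptions.

```python
import numpy as np

def ddpm_sample(score, d, T, rng):
    """Run T reverse DDPM steps in dimension d, driven by a score estimate."""
    # Illustrative linear beta schedule (not the coefficient design from the paper).
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    x = rng.standard_normal(d)  # initialize from pure noise, x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        z = rng.standard_normal(d) if t > 0 else np.zeros(d)  # no noise at the last step
        # Standard score-based reverse update: drift toward the data using the score,
        # then inject Gaussian noise of variance beta_t.
        x = (x + betas[t] * score(x, t)) / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z
    return x

rng = np.random.default_rng(0)
# Test case: for a standard Gaussian target, the noised marginals are N(0, I) at
# every t, so the exact score is -x and the sampler should return N(0, I) samples.
samples = np.array([ddpm_sample(lambda x, t: -x, 2, 500, rng) for _ in range(2000)])
print("mean:", samples.mean(axis=0), "var:", samples.var(axis=0))
```

With the exact score plugged in, the empirical mean is near 0 and the per-coordinate variance near 1, matching the target; the paper's theory quantifies how the ℓ₂ error of a *learned* score and the step count T translate into total variation error of the generated distribution.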
