Manifold Generalization Provably Precedes Memorization in Diffusion Models
Zebang Shen, Ya-Ping Hsieh, Niao He
Abstract
Diffusion models often generate novel samples even when the learned score is only coarse -- a phenomenon not accounted for by the standard view of diffusion training as density estimation. In this paper, we show that, under the manifold hypothesis, this behavior can instead be explained by coarse scores capturing the geometry of the data while discarding the fine-scale distributional structure of the population measure μ_data. Concretely, whereas estimating the full data distribution μ_data supported on a k-dimensional manifold is known to require the classical minimax rate O(N^{-1/k}), we prove that diffusion models trained with coarse scores can exploit the regularity of the manifold support and attain a near-parametric rate toward a different target distribution. This target distribution has density uniformly comparable to that of μ_data throughout any O(N^{-β/(4k)})-neighborhood of the manifold, where β denotes the manifold regularity. Our guarantees therefore depend only on the smoothness of the underlying support, and are especially favorable when the data density itself is irregular, for instance non-differentiable. In particular, when the manifold is sufficiently smooth, we obtain that generalization -- formalized as the ability to generate novel, high-fidelity samples -- occurs at a statistical rate strictly faster than that required to estimate the full population distribution μ_data.
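To make the rate comparison concrete, the LaTeX sketch below restates the contrast the abstract draws. The choice of Wasserstein-1 distance as the metric, the exponent 1/2 (up to polylogarithmic factors) as the "near-parametric" rate, and the comparability constants c, C are illustrative assumptions for exposition only, not quantities pinned down by the abstract.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative sketch: the metric (Wasserstein-1), the exponent 1/2
% (up to polylogs), and the constants c, C are assumptions used for
% exposition; the abstract does not fix them.
\[
\underbrace{\sup_{\mu_{\mathrm{data}}}
  \mathbb{E}\, W_1\bigl(\hat{\mu}_N, \mu_{\mathrm{data}}\bigr)
  \asymp N^{-1/k}}_{\text{estimating the full distribution}}
\qquad\text{vs.}\qquad
\underbrace{\mathbb{E}\, W_1\bigl(\hat{\mu}_N, \mu^{\star}\bigr)
  \lesssim N^{-1/2}\,\operatorname{polylog}(N)}_{\text{coarse-score target}}
\]
where the target $\mu^{\star}$ satisfies, for some constants $0 < c \le C$,
\[
  c \le \frac{\mathrm{d}\mu^{\star}}{\mathrm{d}\mu_{\mathrm{data}}} \le C
  \quad \text{on any } O\bigl(N^{-\beta/(4k)}\bigr)\text{-neighborhood of the manifold,}
\]
with $\beta$ the manifold regularity and $k$ its intrinsic dimension.
\end{document}

The point of the contrast: the left-hand rate degrades exponentially in the intrinsic dimension k, while the right-hand rate toward the geometry-matching target μ* does not, which is how (under the illustrative assumptions above) generalization can provably precede memorization.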