
Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation

2024-04-05 · Code Available

Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang


Abstract

We introduce Score identity Distillation (SiD), an innovative data-free method that distills the generative capabilities of pretrained diffusion models into a single-step generator. SiD not only facilitates an exponentially fast reduction in Fréchet inception distance (FID) during distillation but also approaches or even exceeds the FID performance of the original teacher diffusion models. By reformulating forward diffusion processes as semi-implicit distributions, we leverage three score-related identities to create an innovative loss mechanism. This mechanism achieves rapid FID reduction by training the generator using its own synthesized images, eliminating the need for real data or reverse-diffusion-based generation, all accomplished within a significantly shortened generation time. Evaluated across four benchmark datasets, the SiD algorithm demonstrates high iteration efficiency during distillation and surpasses competing distillation approaches, whether one-step or few-step, data-free or dependent on training data, in terms of generation quality. This achievement redefines the benchmarks for efficiency and effectiveness not only in diffusion distillation but also in the broader field of diffusion-based generation. The PyTorch implementation is available at https://github.com/mingyuanzhou/SiD.
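
To make the abstract's description concrete, below is a minimal sketch of the alternating, data-free training loop it outlines: a "fake" score network is continually refit to the one-step generator's own samples by denoising regression, and the generator is then updated with a loss built from the difference between the teacher's and the fake network's score estimates. Everything here is an illustrative assumption rather than the authors' code: the toy MLP networks, the EDM-style noise sampler, the unit loss weighting, the hypothetical helper names (`TinyDenoiser`, `OneStepGenerator`, `sample_sigma`, `diffuse`), and the exact form and sign conventions of the generator loss. The released implementation at https://github.com/mingyuanzhou/SiD is the authoritative reference.

```python
# Minimal, illustrative sketch of SiD-style alternating training (not the
# authors' code). Assumptions: denoiser parameterization f(x_t, sigma) -> x_hat,
# a log-normal sigma sampler, unit loss weighting, and toy MLPs standing in
# for the pretrained teacher and the one-step generator.
import torch
import torch.nn.functional as F

DIM, ZDIM, BATCH = 3 * 32 * 32, 64, 128

class TinyDenoiser(torch.nn.Module):
    """Toy denoiser f(x_t, sigma) -> estimate of the clean image."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(DIM + 1, 256), torch.nn.SiLU(),
            torch.nn.Linear(256, DIM))

    def forward(self, x, sigma):
        h = torch.cat([x.flatten(1), sigma.view(-1, 1)], dim=1)
        return self.net(h).view_as(x)

class OneStepGenerator(torch.nn.Module):
    """Toy one-step generator G(z) -> image."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(ZDIM, 256), torch.nn.SiLU(),
            torch.nn.Linear(256, DIM))

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def sample_sigma(n):
    # EDM-style log-normal noise-level sampler (an assumption, not SiD's exact schedule).
    return (torch.randn(n) * 1.2 - 1.2).exp()

def diffuse(x, sigma):
    # Forward diffusion: add sigma-scaled Gaussian noise to a clean image.
    return x + sigma.view(-1, 1, 1, 1) * torch.randn_like(x)

teacher = TinyDenoiser()      # stands in for the frozen pretrained teacher score network
fake_score = TinyDenoiser()   # online score estimate of the generator's output distribution
generator = OneStepGenerator()
teacher.requires_grad_(False)

opt_psi = torch.optim.Adam(fake_score.parameters(), lr=1e-4)
opt_theta = torch.optim.Adam(generator.parameters(), lr=1e-5)
alpha = 1.2  # loss-combination weight; treat this value as an assumption

for step in range(1000):
    # Phase 1: denoising regression fits fake_score to the *generator's* own
    # samples. No real data is touched anywhere in the loop, hence "data-free".
    with torch.no_grad():
        x_g = generator(torch.randn(BATCH, ZDIM))
    sigma = sample_sigma(BATCH)
    x_t = diffuse(x_g, sigma)
    loss_psi = F.mse_loss(fake_score(x_t, sigma), x_g)
    opt_psi.zero_grad(); loss_psi.backward(); opt_psi.step()

    # Phase 2: update the generator with a score-difference loss. Gradients
    # flow through x_t into both denoisers; their weights stay frozen here.
    fake_score.requires_grad_(False)
    x_g = generator(torch.randn(BATCH, ZDIM))
    sigma = sample_sigma(BATCH)
    x_t = diffuse(x_g, sigma)
    f_phi, f_psi = teacher(x_t, sigma), fake_score(x_t, sigma)
    diff = (f_phi - f_psi).flatten(1)
    # One identity-based combination of teacher/fake score terms, in the
    # spirit of the SiD generator loss; the exact weighting and signs should
    # be taken from the paper and released code, not from this sketch.
    loss_theta = (-alpha * diff.pow(2).sum(1)
                  + (diff * (f_psi - x_g).flatten(1)).sum(1)).mean()
    opt_theta.zero_grad(); loss_theta.backward(); opt_theta.step()
    fake_score.requires_grad_(True)
```

The alternating structure is the point of the sketch: the fake score network chases the moving distribution of the generator, and the generator in turn descends a loss that vanishes only when the teacher's and fake network's score estimates agree, which is what drives the rapid FID reduction the abstract describes.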


Benchmark Results

Dataset        | Model | Metric | Claimed | Verified | Status
AFHQ-v2 64x64  | SiD   | FID    | 1.71    | —        | Unverified
CIFAR-10       | SiD   | FID    | 1.71    | —        | Unverified
FFHQ 64x64     | SiD   | FID    | 1.55    | —        | Unverified
ImageNet 64x64 | SiD   | FID    | 1.52    | —        | Unverified

Reproductions