
Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step

2024-10-19 · Code Available

Mingyuan Zhou, Huangjie Zheng, Yi Gu, Zhendong Wang, Hai Huang


Abstract

Score identity Distillation (SiD) is a data-free method that has achieved state-of-the-art performance in image generation by leveraging only a pretrained diffusion model, without requiring any training data. However, its ultimate performance is constrained by how accurately the pretrained model captures the true data scores at different stages of the diffusion process. In this paper, we introduce SiDA (SiD with Adversarial Loss), which not only enhances generation quality but also improves distillation efficiency by incorporating real images and an adversarial loss. SiDA uses the encoder of the generator's score network as a discriminator, allowing it to distinguish real images from those generated by SiD. The adversarial loss is batch-normalized within each GPU and then combined with the original SiD loss. This integration effectively incorporates the average "fakeness" per GPU batch into the pixel-based SiD loss, enabling SiDA to distill a single-step generator. SiDA converges significantly faster than its predecessor when distilled from scratch, and it quickly improves upon the original model's performance when fine-tuned from a pre-distilled SiD generator. This one-step adversarial distillation method establishes new benchmarks in generation performance when distilling EDM diffusion models, achieving an FID score of 1.110 on ImageNet 64x64. When distilling EDM2 models trained on ImageNet 512x512, SiDA surpasses even the largest teacher model, EDM2-XXL, which achieved an FID of 1.81 using classifier-free guidance (CFG) and 63 generation steps. In contrast, SiDA achieves FID scores of 2.156 for size XS, 1.669 for S, 1.488 for M, 1.413 for L, 1.379 for XL, and 1.366 for XXL, all without CFG and in a single generation step. These results represent substantial improvements across all model sizes. Our code is available at https://github.com/mingyuanzhou/SiD/tree/sida.
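The loss combination described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's exact formulation: the function name, the non-saturating form of the generator-side adversarial term, and the choice of normalizing by the batch standard deviation are all assumptions standing in for the per-GPU batch normalization the abstract mentions.

```python
import numpy as np

def sida_loss(sid_pixel_loss, fake_logits, adv_weight=1.0, eps=1e-8):
    """Combine the pixel-based SiD loss with a batch-level adversarial term.

    sid_pixel_loss: array of per-pixel SiD losses for this GPU's batch.
    fake_logits: discriminator outputs on generated images (higher means
    judged more real). Both names are illustrative assumptions.
    """
    # Average "fakeness" over this GPU's batch; the generator wants
    # fake_logits to be high, hence the negative sign (assumed
    # non-saturating generator objective).
    adv = -np.mean(fake_logits)
    # Normalize by the batch standard deviation so the adversarial term
    # has a comparable scale across batches (assumed form of the
    # per-GPU batch normalization described in the abstract).
    adv = adv / (np.std(fake_logits) + eps)
    return np.mean(sid_pixel_loss) + adv_weight * adv
```

With constant logits the normalized adversarial term vanishes, so the loss reduces to the mean SiD loss; as the discriminator scores generated images as less real, the combined loss rises and pushes the generator to improve.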

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| AFHQ-v2 64x64 | SiDA-EDM | FID | 1.28 | — | Unverified |
| CIFAR-10 | SiDA-EDM | FID | 1.4 | — | Unverified |
| FFHQ 64x64 | SiDA-EDM | FID | 1.04 | — | Unverified |
| ImageNet 512x512 | SiDA-EDM2-L (777M) | FID | 1.41 | — | Unverified |
| ImageNet 512x512 | SiDA-EDM2-M (498M) | FID | 1.49 | — | Unverified |
| ImageNet 512x512 | SiDA-EDM2-S (280M) | FID | 1.67 | — | Unverified |
| ImageNet 512x512 | SiD-EDM2-XL (1.1B) | FID | 1.89 | — | Unverified |
| ImageNet 512x512 | SiD-EDM2-XXL (1.5B) | FID | 1.97 | — | Unverified |
| ImageNet 512x512 | SiD-EDM2-M (498M) | FID | 2.06 | — | Unverified |
| ImageNet 512x512 | SiDA-EDM2-XS (125M) | FID | 2.16 | — | Unverified |
| ImageNet 512x512 | SiD-EDM2-S (280M) | FID | 2.71 | — | Unverified |
| ImageNet 512x512 | SiD-EDM2-XS (125M) | FID | 3.35 | — | Unverified |
| ImageNet 512x512 | SiD-EDM2-L (777M) | FID | 1.91 | — | Unverified |
| ImageNet 512x512 | SiDA-EDM2-XXL (1.5B) | FID | 1.37 | — | Unverified |
| ImageNet 512x512 | SiDA-EDM2-XL (1.1B) | FID | 1.38 | — | Unverified |
| ImageNet 64x64 | SiDA-EDM | FID | 1.11 | — | Unverified |

Reproductions