SOTAVerified

Autoregressive Image Generation without Vector Quantization

2024-06-17 · Code Available

Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, Kaiming He


Abstract

Conventional wisdom holds that autoregressive models for image generation are typically accompanied by vector-quantized tokens. We observe that while a discrete-valued space can facilitate representing a categorical distribution, it is not a necessity for autoregressive modeling. In this work, we propose to model the per-token probability distribution using a diffusion procedure, which allows us to apply autoregressive models in a continuous-valued space. Rather than using categorical cross-entropy loss, we define a Diffusion Loss function to model the per-token probability. This approach eliminates the need for discrete-valued tokenizers. We evaluate its effectiveness across a wide range of cases, including standard autoregressive models and generalized masked autoregressive (MAR) variants. By removing vector quantization, our image generator achieves strong results while enjoying the speed advantage of sequence modeling. We hope this work will motivate the use of autoregressive generation in other continuous-valued domains and applications. Code is available at: https://github.com/LTH14/mar.
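The core idea in the abstract can be sketched as follows: instead of a cross-entropy loss over a discrete codebook, each continuous-valued token x is supervised by a denoising objective conditioned on the autoregressive network's output vector z. The snippet below is a minimal NumPy illustration of such a per-token Diffusion Loss; the noise schedule, the toy linear "denoiser", and all dimensions are illustrative assumptions, not the paper's actual implementation (which uses a small MLP trained jointly with the AR backbone).

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_alpha_bar(t, T=1000):
    # Cosine-style cumulative noise schedule (illustrative choice).
    return np.cos((t / T) * np.pi / 2) ** 2

def diffusion_loss(x, z, denoiser, T=1000):
    """Per-token Diffusion Loss: noise the continuous token x at a random
    timestep and regress the noise, conditioned on the AR context vector z."""
    t = rng.integers(1, T)                               # random timestep
    eps = rng.standard_normal(x.shape)                   # ground-truth noise
    a_bar = cosine_alpha_bar(t, T)
    x_t = np.sqrt(a_bar) * x + np.sqrt(1 - a_bar) * eps  # noised token
    eps_hat = denoiser(x_t, t / T, z)                    # predicted noise
    return float(np.mean((eps_hat - eps) ** 2))          # MSE in epsilon space

# Toy stand-in for the denoising MLP: a fixed random linear map over [x_t, t, z].
d = 16  # token dimension (hypothetical)
W = rng.standard_normal((2 * d + 1, d)) * 0.1
def toy_denoiser(x_t, t_frac, z):
    inp = np.concatenate([x_t, [t_frac], z])
    return inp @ W

x = rng.standard_normal(d)   # one continuous token from the (non-VQ) tokenizer
z = rng.standard_normal(d)   # conditioning vector from the AR/MAR backbone
loss = diffusion_loss(x, z, toy_denoiser)
print(loss)
```

At sampling time the same denoiser would be run in reverse to draw a token from p(x | z), which is what lets the autoregressive model operate in a continuous space without a categorical output head.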

Tasks

Benchmark Results

Dataset            Model             Metric  Claimed  Verified  Status
ImageNet 256x256   MAR-H, Diff Loss  FID     1.55     —         Unverified
ImageNet 256x256   MAR-L, Diff Loss  FID     1.78     —         Unverified
ImageNet 256x256   MAR-B, Diff Loss  FID     2.31     —         Unverified
ImageNet 512x512   MAR-L, Diff Loss  FID     1.73     —         Unverified

Reproductions