
How far can we go with ImageNet for Text-to-Image generation?

2025-02-28 · Code Available

L. Degeorge, A. Ghosh, N. Dufour, D. Picard, V. Kalogeiton


Abstract

Recent text-to-image generation models have achieved remarkable results by training on billion-scale datasets, following a "bigger is better" paradigm that prioritizes data quantity over availability (closed vs. open source) and reproducibility (data decay vs. established collections). We challenge this paradigm by demonstrating that one can match or outperform models trained on massive web-scraped collections using only ImageNet, enhanced with well-designed text and image augmentations. With this much simpler setup, we achieve a +1% overall score over SD-XL on GenEval and +0.5% on DPGBench, while using just 1/10th the parameters and 1/1000th the training images. This opens the way for more reproducible research, as ImageNet is a widely available dataset and our standardized training setup does not require massive compute resources.
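
The abstract's key ingredient is pairing ImageNet images with augmented text and image views. As a rough illustration only (not the authors' implementation), the sketch below wraps torchvision's ImageNet dataset to yield (image, caption) pairs; the `CAPTION_TEMPLATES` list, the 256-pixel resolution, and the crop/flip settings are all assumptions standing in for the paper's "well-designed" augmentations.

```python
# A minimal sketch of pairing ImageNet images with template captions plus
# standard image augmentations. This is NOT the authors' pipeline: the
# CAPTION_TEMPLATES list and the crop/flip settings are illustrative
# assumptions standing in for the paper's augmentation design.
import random

import torchvision.transforms as T
from torchvision.datasets import ImageNet

CAPTION_TEMPLATES = [                # hypothetical text augmentation
    "a photo of a {}",
    "a close-up picture of a {}",
    "an image showing a {}",
]

image_aug = T.Compose([              # assumed image augmentation
    T.RandomResizedCrop(256, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])


class CaptionedImageNet(ImageNet):
    """Yields (image, caption) pairs instead of (image, class index)."""

    def __getitem__(self, index):
        image, label = super().__getitem__(index)  # transform already applied
        class_name = self.classes[label][0]        # first human-readable name
        caption = random.choice(CAPTION_TEMPLATES).format(class_name)
        return image, caption


# Expects the standard torchvision ImageNet layout under `root`.
dataset = CaptionedImageNet(root="path/to/imagenet", split="train",
                            transform=image_aug)
```

Randomizing the template per access means each epoch sees a different caption for the same image, which is one simple way text augmentation can add diversity to a fixed-label dataset.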
