SOTAVerified

Image Generation

Image generation (synthesis) is the task of generating new images that resemble those drawn from an existing dataset.

  • Unconditional generation refers to sampling new images from the learned data distribution without any conditioning signal, i.e. modelling $p(x)$.
  • Conditional image generation (subtask) refers to generating samples conditioned on additional information, such as a class label $y$, i.e. modelling $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
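The distinction between the two settings can be sketched with a toy labelled dataset: an unconditional sampler draws from $p(x)$ by first drawing a label from the prior $p(y)$ and then sampling $p(x|y)$, while a conditional sampler fixes the label. In the minimal NumPy sketch below, per-class Gaussians stand in for a real generative model such as a GAN or diffusion model; all names and parameters are illustrative.

```python
import numpy as np

# Toy "generative model": one Gaussian per class, fitted to a labelled dataset.
# Real image generators replace these Gaussians with a learned network.
rng = np.random.default_rng(0)
class_means = {0: np.array([0.0, 0.0]), 1: np.array([5.0, 5.0])}
class_priors = {0: 0.5, 1: 0.5}

def sample_conditional(label, n=1):
    """Sample x ~ p(x | y=label): the label is fixed by the caller."""
    return rng.normal(loc=class_means[label], size=(n, 2))

def sample_unconditional(n=1):
    """Sample x ~ p(x) = sum_y p(y) p(x | y): draw a label, then an image."""
    labels = rng.choice(list(class_priors), size=n, p=list(class_priors.values()))
    return np.stack([sample_conditional(y, 1)[0] for y in labels])

uncond = sample_unconditional(1000)  # mixture of both classes
cond = sample_conditional(1, 1000)   # only class-1 samples, centred near (5, 5)
```

Note that unconditional sampling still uses the conditional sampler internally; "unconditional" only means the caller supplies no label.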

(Image credit: StyleGAN)

Papers

Showing 301–325 of 6689 papers

| Title | Status | Hype |
|-------|--------|------|
| No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves | Code | 2 |
| Generative AI for Character Animation: A Comprehensive Survey of Techniques, Applications, and Future Directions | Code | 2 |
| Enhancing Person-to-Person Virtual Try-On with Multi-Garment Virtual Try-Off | Code | 2 |
| Flux Already Knows -- Activating Subject-Driven Image Generation without Training | Code | 2 |
| OmniCaptioner: One Captioner to Rule Them All | Code | 2 |
| HiFlow: Training-free High-Resolution Image Generation with Flow-Aligned Guidance | Code | 2 |
| Gaussian Mixture Flow Matching Models | Code | 2 |
| UniToken: Harmonizing Multimodal Understanding and Generation through Unified Visual Encoding | Code | 2 |
| ILLUME+: Illuminating Unified MLLM with Dual Visual Tokenization and Diffusion Refinement | Code | 2 |
| TextCrafter: Accurately Rendering Multiple Texts in Complex Visual Scenes | Code | 2 |
| Harmonizing Visual Representations for Unified Multimodal Understanding and Generation | Code | 2 |
| Unified Multimodal Discrete Diffusion | Code | 2 |
| Learning Hazing to Dehazing: Towards Realistic Haze Generation for Real-World Image Dehazing | Code | 2 |
| Scaling Down Text Encoders of Text-to-Image Diffusion Models | Code | 2 |
| Ultra-Resolution Adaptation with Ease | Code | 2 |
| Single Image Iterative Subject-driven Generation and Editing | Code | 2 |
| Tokenize Image as a Set | Code | 2 |
| GenStereo: Towards Open-World Generation of Stereo Images and Unsupervised Matching | Code | 2 |
| Reflect-DiT: Inference-Time Scaling for Text-to-Image Diffusion Transformers via In-Context Reflection | Code | 2 |
| Towards Better Alignment: Training Diffusion Models with Reinforcement Learning Against Sparse Rewards | Code | 2 |
| Autoregressive Image Generation with Randomized Parallel Decoding | Code | 2 |
| Neighboring Autoregressive Modeling for Efficient Visual Generation | Code | 2 |
| LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization | Code | 2 |
| Seedream 2.0: A Native Chinese-English Bilingual Image Generation Foundation Model | Code | 2 |
| Learning Few-Step Diffusion Models by Trajectory Distribution Matching | Code | 2 |
Page 13 of 268

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Improved DDPM | FID | 12.3 | — | Unverified |
| 2 | ADM | FID | 11.84 | — | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | — | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | — | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | — | Unverified |
| 6 | MaskGIT | FID | 6.18 | — | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | — | Unverified |
| 8 | CDM | FID | 4.88 | — | Unverified |
| 9 | ADM-G | FID | 4.59 | — | Unverified |
| 10 | RIN | FID | 4.51 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | PresGAN | FID | 52.2 | — | Unverified |
| 2 | RESFLOW | FID | 48.29 | — | Unverified |
| 3 | Residual Flow | FID | 46.37 | — | Unverified |
| 4 | GLF + perceptual loss (ours) | FID | 44.6 | — | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | — | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | — | Unverified |
| 7 | ACGAN | FID | 35.47 | — | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | — | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | — | Unverified |
| 10 | QSNGAN | FID | 31.97 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GLIDE + CLS | FID | 30.87 | — | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | — | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | — | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | — | Unverified |
| 5 | PGMGAN | FID | 21.73 | — | Unverified |
| 6 | CLR-GAN | FID | 20.27 | — | Unverified |
| 7 | FM | FID | 14.45 | — | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | — | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | — | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | — | Unverified |
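Nearly all entries above report FID (Fréchet Inception Distance), which fits a Gaussian to feature embeddings of real and generated images and measures the Fréchet distance between the two Gaussians: $\|\mu_r - \mu_g\|^2 + \mathrm{Tr}(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2})$. The NumPy-only sketch below assumes the feature arrays are already given; real implementations extract them with a pretrained Inception-v3 network, and the random features here are only a stand-in.

```python
import numpy as np

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets:
    ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2})."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s_r = np.cov(feats_real, rowvar=False)
    s_g = np.cov(feats_gen, rowvar=False)
    # Tr((S_r S_g)^{1/2}) equals the sum of square roots of the eigenvalues
    # of S_r S_g, which are real and non-negative for PSD covariances.
    eigvals = np.linalg.eigvals(s_r @ s_g).real
    trace_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    mean_term = np.sum((mu_r - mu_g) ** 2)
    return float(mean_term + np.trace(s_r) + np.trace(s_g) - 2.0 * trace_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(size=(2000, 8))           # stand-in for Inception features
fake = rng.normal(loc=0.5, size=(2000, 8))  # shifted distribution
score = fid(real, fake)                     # grows as the distributions diverge
```

Lower FID is better, since it measures distance to the real-image distribution; KID (Kernel Inception Distance), used by the last entry, replaces the Gaussian assumption with a polynomial-kernel MMD estimate.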