SOTAVerified

Image Generation

Image generation (synthesis) is the task of producing new images that resemble those in a training dataset.

  • Unconditional generation refers to generating samples from the data distribution without any side information, i.e. modeling $p(x)$, where $x$ is an image.
  • Conditional image generation (a subtask) refers to generating samples given a label or other conditioning signal $y$, i.e. modeling $p(x|y)$.
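The distinction can be made concrete with a toy model: treat "images" as 1-D points, with one Gaussian cluster per class label $y$, so $p(x)$ is a mixture over labels while $p(x|y)$ draws from a single cluster. The cluster means and scale below are made-up values purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a dataset: each class label y owns one 1-D Gaussian cluster.
CLASS_MEANS = {0: -2.0, 1: 2.0}  # hypothetical cluster centres
SCALE = 0.5

def sample_conditional(y, n=1):
    """Sample from p(x | y): draw only from the cluster belonging to label y."""
    return rng.normal(loc=CLASS_MEANS[y], scale=SCALE, size=n)

def sample_unconditional(n=1):
    """Sample from p(x) = sum_y p(y) p(x | y), here with uniform p(y)."""
    labels = rng.integers(0, len(CLASS_MEANS), size=n)
    means = np.array([CLASS_MEANS[int(y)] for y in labels])
    return means + rng.normal(scale=SCALE, size=n)
```

Conditional samples cluster tightly around the chosen class mean, while unconditional samples cover both modes, which is why the unconditional sample spread is much wider than a single cluster's.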

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Papers

Showing 51–100 of 6,689 papers

| Title | Status | Hype |
| --- | --- | --- |
| FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification | Code | 5 |
| Magic Clothing: Controllable Garment-Driven Image Synthesis | Code | 5 |
| ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment | Code | 5 |
| CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models | Code | 5 |
| Bridging Different Language Models and Generative Vision Models for Text-to-Image Generation | Code | 5 |
| BLIP3-o: A Family of Fully Open Unified Multimodal Models-Architecture, Training and Dataset | Code | 5 |
| Learning Flow Fields in Attention for Controllable Person Image Generation | Code | 5 |
| MV-Adapter: Multi-view Consistent Image Generation Made Easy | Code | 5 |
| DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation | Code | 5 |
| Less-to-More Generalization: Unlocking More Controllability by In-Context Generation | Code | 5 |
| VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks | Code | 5 |
| Unified Multimodal Understanding and Generation Models: Advances, Challenges, and Opportunities | Code | 5 |
| SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers | Code | 5 |
| VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models | Code | 5 |
| Autoregressive Image Generation without Vector Quantization | Code | 5 |
| Scalable Diffusion Models with Transformers | Code | 5 |
| IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models | Code | 5 |
| Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation | Code | 5 |
| IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation | Code | 5 |
| Show-o: One Single Transformer to Unify Multimodal Understanding and Generation | Code | 5 |
| EMMA: Your Text-to-Image Diffusion Model Can Secretly Accept Multi-Modal Prompts | Code | 5 |
| Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think | Code | 5 |
| Improved Distribution Matching Distillation for Fast Image Synthesis | Code | 5 |
| Randomized Autoregressive Visual Generation | Code | 5 |
| InstantCharacter: Personalize Any Characters with a Scalable Diffusion Transformer Framework | Code | 5 |
| Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models | Code | 5 |
| IMAGDressing-v1: Customizable Virtual Dressing | Code | 5 |
| Consistency Models | Code | 5 |
| Image Vectorization: a Review | Code | 5 |
| Diffusion for World Modeling: Visual Details Matter in Atari | Code | 5 |
| PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation | Code | 5 |
| An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion | Code | 5 |
| GLIGEN: Open-Set Grounded Text-to-Image Generation | Code | 4 |
| One Diffusion to Generate Them All | Code | 4 |
| Null-text Inversion for Editing Real Images using Guided Diffusion Models | Code | 4 |
| Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion | Code | 4 |
| OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models | Code | 4 |
| MIGC++: Advanced Multi-Instance Generation Controller for Image Synthesis | Code | 4 |
| MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis | Code | 4 |
| Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step | Code | 4 |
| Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation | Code | 4 |
| Ming-Lite-Uni: Advancements in Unified Architecture for Natural Multimodal Interaction | Code | 4 |
| Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think | Code | 4 |
| Elucidating the Design Space of Diffusion-Based Generative Models | Code | 4 |
| Guiding a Diffusion Model with a Bad Version of Itself | Code | 4 |
| Ming-Omni: A Unified Multimodal Model for Perception and Generation | Code | 4 |
| Open-MAGVIT2: An Open-Source Project Toward Democratizing Auto-regressive Visual Generation | Code | 4 |
| LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing | Code | 4 |
| Long-CLIP: Unlocking the Long-Text Capability of CLIP | Code | 4 |
| Diffusion Model-Based Image Editing: A Survey | Code | 4 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Improved DDPM | FID | 12.3 | | Unverified |
| 2 | ADM | FID | 11.84 | | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | | Unverified |
| 6 | MaskGIT | FID | 6.18 | | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | | Unverified |
| 8 | CDM | FID | 4.88 | | Unverified |
| 9 | ADM-G | FID | 4.59 | | Unverified |
| 10 | RIN | FID | 4.51 | | Unverified |
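The leaderboards above rank models by FID (Fréchet Inception Distance), where lower is better. FID is the Fréchet distance between two Gaussians fitted to feature statistics of real and generated images; in practice those features come from a pretrained Inception-v3 network, but given the means and covariances the distance itself is a short computation. The sketch below assumes the statistics $(\mu, \Sigma)$ have already been extracted, and computes the trace term via the eigenvalues of $\Sigma_1 \Sigma_2$ rather than a full matrix square root:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID-style Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    # Tr((S1 S2)^{1/2}) equals the sum of square roots of the eigenvalues of
    # S1 @ S2; for PSD covariances these are real and non-negative, so we
    # clip tiny negative values caused by floating-point noise.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_covmean = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_covmean)

# Usage with made-up feature statistics (real pipelines fit these to
# Inception features of tens of thousands of images):
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(5000, 8))
feats_fake = rng.normal(loc=0.1, size=(5000, 8))
fid = frechet_distance(
    feats_real.mean(axis=0), np.cov(feats_real, rowvar=False),
    feats_fake.mean(axis=0), np.cov(feats_fake, rowvar=False),
)
```

Identical statistics give a distance of zero, and the score grows as the generated distribution drifts from the real one, which is why the rows above are sorted by descending FID.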
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PresGAN | FID | 52.2 | | Unverified |
| 2 | RESFLOW | FID | 48.29 | | Unverified |
| 3 | Residual Flow | FID | 46.37 | | Unverified |
| 4 | GLF+perceptual loss (ours) | FID | 44.6 | | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | | Unverified |
| 7 | ACGAN | FID | 35.47 | | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | | Unverified |
| 10 | QSNGAN | FID | 31.97 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GLIDE + CLS | FID | 30.87 | | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | | Unverified |
| 5 | PGMGAN | FID | 21.73 | | Unverified |
| 6 | CLR-GAN | FID | 20.27 | | Unverified |
| 7 | FM | FID | 14.45 | | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | | Unverified |