SOTAVerified

Image Generation

Image generation (synthesis) is the task of producing new images, typically by learning the distribution of an existing image dataset and sampling from it.

  • Unconditional generation samples images directly from the learned data distribution, i.e. $p(y)$, with no conditioning signal.
  • Conditional image generation (a subtask) samples images conditioned on auxiliary information such as a class label $x$, i.e. $p(y|x)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
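The distinction between sampling $p(y)$ and $p(y|x)$ can be sketched with a toy mixture model. Everything below is illustrative (the label prior, the Gaussian class-conditionals, and the 4-dimensional "images" are stand-ins, not any particular generative model): unconditional sampling first draws a label from the prior and then an image, which is exactly marginalizing $p(y) = \sum_x p(x)\,p(y|x)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained model: each class label x has its own Gaussian
# over flattened 4-pixel "images" y.
MEANS = {0: np.zeros(4), 1: np.full(4, 5.0)}
LABEL_PRIOR = {0: 0.5, 1: 0.5}

def sample_conditional(x, n=1):
    """Sample y ~ p(y|x): images conditioned on the label x."""
    return rng.normal(loc=MEANS[x], scale=1.0, size=(n, 4))

def sample_unconditional(n=1):
    """Sample y ~ p(y) = sum_x p(x) p(y|x): no conditioning signal."""
    labels = rng.choice(list(LABEL_PRIOR), p=list(LABEL_PRIOR.values()), size=n)
    return np.stack([sample_conditional(x, 1)[0] for x in labels])

cond = sample_conditional(1, n=100)    # all samples cluster around MEANS[1]
uncond = sample_unconditional(n=100)   # samples cover both modes
```

In a real model the class-conditionals would be a trained network rather than fixed Gaussians, but the sampling structure is the same.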

(Image credit: StyleGAN)

Papers

Showing 651–700 of 6689 papers

Title | Status | Hype
Generating Images with Multimodal Language Models | Code | 2
Generative AI for Character Animation: A Comprehensive Survey of Techniques, Applications, and Future Directions | Code | 2
aMUSEd: An Open MUSE Reproduction | Code | 2
Generative Enhancement for 3D Medical Images | Code | 2
GenStereo: Towards Open-World Generation of Stereo Images and Unsupervised Matching | Code | 2
Harmonizing Visual Representations for Unified Multimodal Understanding and Generation | Code | 2
GANSpace: Discovering Interpretable GAN Controls | Code | 2
GAN Prior Embedded Network for Blind Face Restoration in the Wild | Code | 2
GAUDI: A Neural Architect for Immersive 3D Scene Generation | Code | 2
Denoising Diffusion Models for Plug-and-Play Image Restoration | Code | 2
GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis | Code | 2
Denoising Diffusion Bridge Models | Code | 2
GAN Compression: Efficient Architectures for Interactive Conditional GANs | Code | 2
Gaussian Mixture Flow Matching Models | Code | 2
Denoising Diffusion Probabilistic Models | Code | 2
Bayesian Flow Networks | Code | 2
FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition | Code | 2
From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation | Code | 2
Muddit: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model | Code | 2
Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks | Code | 2
Fréchet Video Motion Distance: A Metric for Evaluating Motion Consistency in Videos | Code | 2
From Text to Pose to Image: Improving Diffusion Model Control and Quality | Code | 2
Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition | Code | 2
Flux Already Knows -- Activating Subject-Driven Image Generation without Training | Code | 2
Detecting, Explaining, and Mitigating Memorization in Diffusion Models | Code | 2
DetailFlow: 1D Coarse-to-Fine Autoregressive Image Generation via Next-Detail Prediction | Code | 2
Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens | Code | 2
MVControl: Adding Conditional Control to Multi-view Diffusion for Controllable Text-to-3D Generation | Code | 2
Character-Aware Models Improve Visual Text Rendering | Code | 2
Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2
Analyzing and Improving the Training Dynamics of Diffusion Models | Code | 2
CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation | Code | 2
Hybrid Fourier Score Distillation for Efficient One Image to 3D Object Generation | Code | 2
CharaConsist: Fine-Grained Consistent Character Generation | Code | 2
Flow Matching in Latent Space | Code | 2
Character-Adapter: Prompt-Guided Region Control for High-Fidelity Character Customization | Code | 2
Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching | Code | 2
FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner | Code | 2
FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis | Code | 2
Differentiable Image Parameterizations | Code | 2
GenAI Arena: An Open Evaluation Platform for Generative Models | Code | 2
Flow-Anchored Consistency Models | Code | 2
Causal Diffusion Transformers for Generative Modeling | Code | 2
FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching | Code | 2
A Style-Based Generator Architecture for Generative Adversarial Networks | Code | 2
FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction | Code | 2
Flow-Guided Diffusion for Video Inpainting | Code | 2
Fixed Point Diffusion Models | Code | 2
CapHuman: Capture Your Moments in Parallel Universes | Code | 2
Financial Fine-tuning a Large Time Series Model | Code | 2
Page 14 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | – | Unverified
2 | ADM | FID | 11.84 | – | Unverified
3 | BigGAN-deep | FID | 8.1 | – | Unverified
4 | Polarity-BigGAN | FID | 6.82 | – | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | – | Unverified
6 | MaskGIT | FID | 6.18 | – | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | – | Unverified
8 | CDM | FID | 4.88 | – | Unverified
9 | ADM-G | FID | 4.59 | – | Unverified
10 | RIN | FID | 4.51 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | – | Unverified
2 | RESFLOW | FID | 48.29 | – | Unverified
3 | Residual Flow | FID | 46.37 | – | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | – | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | – | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | – | Unverified
7 | ACGAN | FID | 35.47 | – | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | – | Unverified
9 | NVAE w/ flow | FID | 32.53 | – | Unverified
10 | QSNGAN | FID | 31.97 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | – | Unverified
2 | GLIDE + CLIP | FID | 30.46 | – | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | – | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | – | Unverified
5 | PGMGAN | FID | 21.73 | – | Unverified
6 | CLR-GAN | FID | 20.27 | – | Unverified
7 | FM | FID | 14.45 | – | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | – | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | – | Unverified
10 | GLIDE + CLS | KID | 7.95 | – | Unverified
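Most entries above report FID (Fréchet Inception Distance): Gaussians are fitted to feature statistics of real and generated images, and the Fréchet distance between them is $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$. A minimal numpy sketch of that formula follows; the random feature vectors stand in for Inception-v3 pool-layer activations, which real FID evaluations use, and the eigenvalue route to the matrix-square-root trace is one of several equivalent implementations.

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1) + Tr(S2) - 2 * Tr((S1 S2)^(1/2))."""
    diff = mu1 - mu2
    # Tr((S1 S2)^(1/2)) equals the sum of sqrt of eigenvalues of S1 @ S2,
    # which are real and non-negative for PSD covariance matrices.
    eigs = np.linalg.eigvals(sigma1 @ sigma2).real
    tr_covmean = np.sqrt(np.clip(eigs, 0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_covmean)

def feature_stats(feats):
    """Mean and covariance of an (n_samples, n_features) feature matrix."""
    return feats.mean(axis=0), np.cov(feats, rowvar=False)

# Random vectors stand in for activations of real vs. generated images.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(2000, 8))
fake_feats = rng.normal(loc=0.5, size=(2000, 8))

fid = frechet_distance(*feature_stats(real_feats), *feature_stats(fake_feats))
```

Lower is better: identical feature distributions give a distance of zero, and the mean shift between `real_feats` and `fake_feats` above produces a clearly positive score. KID (the last row) instead uses a polynomial-kernel MMD over the same features.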