SOTAVerified

Image Generation

Image generation (synthesis) is the task of generating new images by learning the distribution of an existing dataset.

  • Unconditional generation refers to sampling from the learned data distribution without any side information, i.e. $p(y)$.
  • Conditional image generation (subtask) refers to sampling conditioned on additional input such as a class label or text prompt, i.e. $p(y|x)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
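The distinction between the two settings can be made concrete with a toy model. The sketch below uses a two-component 1-D Gaussian mixture as a stand-in for an image distribution; all means, variances, and priors are illustrative assumptions, not values from any benchmark here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a 2-class "image" distribution (all values are assumptions):
# class 0 ~ N(-2, 0.5^2), class 1 ~ N(3, 0.5^2), equal class prior.
MEANS = np.array([-2.0, 3.0])
STD = 0.5
PRIOR = np.array([0.5, 0.5])

def sample_unconditional(n):
    """Sample y ~ p(y): draw the (unobserved) class first, then the value."""
    classes = rng.choice(2, size=n, p=PRIOR)
    return rng.normal(MEANS[classes], STD)

def sample_conditional(n, label):
    """Sample y ~ p(y|x): the conditioning label x fixes the mixture component."""
    return rng.normal(MEANS[label], STD, size=n)
```

Unconditional samples mix both modes, while conditioning on a label collapses sampling to a single mode; real conditional generators replace the toy label with class embeddings or text encodings, but the factorization is the same.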

(Image credit: StyleGAN)

Papers

Showing 701–750 of 6689 papers

Title | Status | Hype
Pix2NeRF: Unsupervised Conditional p-GAN for Single Image to Neural Radiance Fields Translation | Code | 2
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models | Code | 2
Compositional Transformers for Scene Generation | Code | 2
Attention Mechanisms in Computer Vision: A Survey | Code | 2
SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations | Code | 2
CogView: Mastering Text-to-Image Generation via Transformers | Code | 2
GAN Prior Embedded Network for Blind Face Restoration in the Wild | Code | 2
Diffusion Models Beat GANs on Image Synthesis | Code | 2
Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks | Code | 2
On Aliased Resizing and Surprising Subtleties in GAN Evaluation | Code | 2
ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement | Code | 2
Generative Adversarial Transformers | Code | 2
Monster Mash: A Single-View Approach to Casual 3D Modeling and Animation | Code | 2
StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation | Code | 2
GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields | Code | 2
Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis | Code | 2
Denoising Diffusion Implicit Models | Code | 2
Rethinking Attention with Performers | Code | 2
Contrastive Learning for Unpaired Image-to-Image Translation | Code | 2
Closed-Form Factorization of Latent Semantics in GANs | Code | 2
Denoising Diffusion Probabilistic Models | Code | 2
Differentiable Augmentation for Data-Efficient GAN Training | Code | 2
Improved Techniques for Training Score-Based Generative Models | Code | 2
Training Generative Adversarial Networks with Limited Data | Code | 2
Adversarial Latent Autoencoders | Code | 2
GANSpace: Discovering Interpretable GAN Controls | Code | 2
Learning Implicit Surface Light Fields | Code | 2
GAN Compression: Efficient Architectures for Interactive Conditional GANs | Code | 2
Reformer: The Efficient Transformer | Code | 2
Interpreting the Latent Space of GANs for Semantic Face Editing | Code | 2
Generative Modeling by Estimating Gradients of the Data Distribution | Code | 2
Joint Discriminative and Generative Learning for Person Re-identification | Code | 2
A Style-Based Generator Architecture for Generative Adversarial Networks | Code | 2
Differentiable Image Parameterizations | Code | 2
Pose-Normalized Image Generation for Person Re-identification | Code | 2
Progressive Growing of GANs for Improved Quality, Stability, and Variation | Code | 2
Unsupervised Cross-Domain Image Generation | Code | 2
NeoBabel: A Multilingual Open Tower for Visual Generation | Code | 1
SV-DRR: High-Fidelity Novel View X-Ray Synthesis Using Diffusion Model | Code | 1
CycleVAR: Repurposing Autoregressive Model for Unsupervised One-Step Image Translation | Code | 1
Morse: Dual-Sampling for Lossless Acceleration of Diffusion Models | Code | 1
Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model | Code | 1
Noise Conditional Variational Score Distillation | Code | 1
Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces | Code | 1
Rethinking Machine Unlearning in Image Generation Models | Code | 1
Unleashing High-Quality Image Generation in Diffusion Sampling Using Second-Order Levenberg-Marquardt-Langevin | Code | 1
Draw ALL Your Imagine: A Holistic Benchmark and Agent Framework for Complex Instruction-based Image Generation | Code | 1
Hierarchical Masked Autoregressive Models with Low-Resolution Token Pivots | Code | 1
Multimodal LLM-Guided Semantic Correction in Text-to-Image Diffusion | Code | 1
STRICT: Stress Test of Rendering Images Containing Text | Code | 1
Page 15 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | - | Unverified
2 | ADM | FID | 11.84 | - | Unverified
3 | BigGAN-deep | FID | 8.1 | - | Unverified
4 | Polarity-BigGAN | FID | 6.82 | - | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | - | Unverified
6 | MaskGIT | FID | 6.18 | - | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | - | Unverified
8 | CDM | FID | 4.88 | - | Unverified
9 | ADM-G | FID | 4.59 | - | Unverified
10 | RIN | FID | 4.51 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | - | Unverified
2 | RESFLOW | FID | 48.29 | - | Unverified
3 | Residual Flow | FID | 46.37 | - | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | - | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | - | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | - | Unverified
7 | ACGAN | FID | 35.47 | - | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | - | Unverified
9 | NVAE w/ flow | FID | 32.53 | - | Unverified
10 | QSNGAN | FID | 31.97 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | - | Unverified
2 | GLIDE + CLIP | FID | 30.46 | - | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | - | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | - | Unverified
5 | PGMGAN | FID | 21.73 | - | Unverified
6 | CLR-GAN | FID | 20.27 | - | Unverified
7 | FM | FID | 14.45 | - | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | - | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | - | Unverified
10 | GLIDE + CLS | KID | 7.95 | - | Unverified
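Nearly every score above is a Fréchet Inception Distance (FID), which fits a Gaussian to the feature statistics of real and generated images and measures the Fréchet distance between the two fits (lower is better). Below is a minimal sketch of the distance itself, assuming features have already been extracted (in practice, Inception-v3 activations); the function name and toy arrays are illustrative, not part of any leaderboard's evaluation code.

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits to two feature sets.

    feats_a, feats_b: (n_samples, n_dims) arrays of extracted features.
    FID = ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2 (Sa Sb)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((Sa Sb)^{1/2}) via eigenvalues of the product: for PSD covariances
    # the product's eigenvalues are real and non-negative, so the trace of
    # the matrix square root is the sum of their square roots.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)
```

Note that KID (the last row of the final table) is a different metric, a polynomial-kernel MMD, so its value is not comparable to the FID rows above it.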