SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images that resemble the distribution of an existing dataset.

  • Unconditional generation refers to sampling images from the dataset's distribution without any side information, i.e. modeling $p(x)$.
  • Conditional image generation (subtask) refers to sampling images given a conditioning signal such as a class label $y$, i.e. modeling $p(x|y)$.
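The distinction between $p(x)$ and $p(x|y)$ can be shown with a toy sketch (illustrative NumPy code, not from any model on this page; each class is a hypothetical 2-D Gaussian standing in for a learned generator):

```python
import numpy as np

# Toy "dataset": two classes, each a 2-D Gaussian with unit variance.
# Real image generators learn these distributions with deep networks.
rng = np.random.default_rng(0)
means = {0: np.array([-2.0, 0.0]), 1: np.array([2.0, 0.0])}

def sample_unconditional(n):
    """Sample from p(x) = sum_y p(y) p(x|y), with p(y) uniform over classes."""
    labels = rng.integers(0, 2, size=n)
    return np.stack([rng.normal(means[y], 1.0) for y in labels])

def sample_conditional(y, n):
    """Sample from p(x|y): the class label is fixed before sampling."""
    return np.stack([rng.normal(means[y], 1.0) for _ in range(n)])

x_uncond = sample_unconditional(1000)  # draws mix both modes
x_cond = sample_conditional(1, 1000)   # draws cluster around the y=1 mode
```

The unconditional sampler covers every mode of the data; the conditional sampler restricts itself to the mode selected by the label.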

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Papers

Showing papers 4251–4300 of 6689

  • Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers
  • LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model
  • Layered Diffusion Model for One-Shot High Resolution Text-to-Image Synthesis
  • LayerFusion: Harmonized Multi-Layer Text-to-Image Generation with Generative Priors
  • LayeringDiff: Layered Image Synthesis via Generation, then Disassembly with Generative Knowledge
  • Layer Separation: Adjustable Joint Space Width Images Synthesis in Conventional Radiography
  • A Generic Shared Attention Mechanism for Various Backbone Neural Networks
  • Layout-and-Retouch: A Dual-stage Framework for Improving Diversity in Personalized Image Generation
  • Layout-Bridging Text-to-Image Synthesis
  • Layout Control and Semantic Guidance with Attention Loss Backward for T2I Diffusion Model
  • LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation
  • Assessing a Single Image in Reference-Guided Image Synthesis
  • Towards Understanding and Quantifying Uncertainty for Text-to-Image Generation
  • Layout-to-Image Generation with Localized Descriptions using ControlNet with Cross-Attention Control
  • LayoutTransformer: Relation-Aware Scene Layout Generation
  • Lay-Your-Scene: Natural Scene Layout Generation with Diffusion Transformers
  • A spatiotemporal style transfer algorithm for dynamic visual stimulus generation
  • LDC-VAE: A Latent Distribution Consistency Approach to Variational AutoEncoders
  • LDEdit: Towards Generalized Text Guided Image Manipulation via Latent Diffusion Models
  • LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation
  • LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights
  • Lean Attention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers
  • Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing
  • Learning 3D-aware Image Synthesis with Unknown Pose Distribution
  • Learning 3D Robotics Perception using Inductive Priors
  • Accurate generation of stochastic dynamics based on multi-model Generative Adversarial Networks
  • Learning AND-OR Templates for Professional Photograph Parsing and Guidance
  • One-to-one Mapping for Unpaired Image-to-image Translation
  • Towards Understanding the Generative Capability of Adversarially Robust Classifiers
  • Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
  • Learning Compositional Visual Concepts with Mutual Consistency
  • Control3Diff: Learning Controllable 3D Diffusion Models from Single-view Images
  • Learning Coupled Dictionaries from Unpaired Data for Image Super-Resolution
  • Learning Detailed Radiance Manifolds for High-Fidelity and 3D-Consistent Portrait Synthesis from Monocular Image
  • Private Gradient Estimation is Useful for Generative Modeling
  • Learning Diffusion Texture Priors for Image Restoration
  • Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation
  • Towards Understanding the Mechanisms of Classifier-Free Guidance
  • Learning Disentangled Representations with Reference-Based Variational Autoencoders
  • Learning Dynamic Style Kernels for Artistic Style Transfer
  • Diffusion Models for Accurate Channel Distribution Generation
  • Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling
  • Learning Energy-based Model via Dual-MCMC Teaching
  • Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality
  • Wavelets Are All You Need for Autoregressive Image Generation
  • A Simple Background Augmentation Method for Object Detection with Diffusion Model
  • Learning from THEODORE: A Synthetic Omnidirectional Top-View Indoor Dataset for Deep Transfer Learning
  • Learning Generative Models with Goal-conditioned Reinforcement Learning
  • Learning geometry-image representation for 3D point cloud generation
  • A Simple Approach to Unifying Diffusion-based Conditional Generation
Page 86 of 134

Benchmark Results

#  | Model                                        | Metric | Claimed | Verified | Status
1  | Improved DDPM                                | FID    | 12.3    | —        | Unverified
2  | ADM                                          | FID    | 11.84   | —        | Unverified
3  | BigGAN-deep                                  | FID    | 8.1     | —        | Unverified
4  | Polarity-BigGAN                              | FID    | 6.82    | —        | Unverified
5  | VQGAN+Transformer (k=mixed, p=1.0, a=0.005)  | FID    | 6.59    | —        | Unverified
6  | MaskGIT                                      | FID    | 6.18    | —        | Unverified
7  | VQGAN+Transformer (k=600, p=1.0, a=0.05)     | FID    | 5.2     | —        | Unverified
8  | CDM                                          | FID    | 4.88    | —        | Unverified
9  | ADM-G                                        | FID    | 4.59    | —        | Unverified
10 | RIN                                          | FID    | 4.51    | —        | Unverified
#  | Model                             | Metric | Claimed | Verified | Status
1  | PresGAN                           | FID    | 52.2    | —        | Unverified
2  | RESFLOW                           | FID    | 48.29   | —        | Unverified
3  | Residual Flow                     | FID    | 46.37   | —        | Unverified
4  | GLF+perceptual loss (ours)        | FID    | 44.6    | —        | Unverified
5  | ProdPoly no activation functions  | FID    | 40.45   | —        | Unverified
6  | ProdPoly no activation functions  | FID    | 36.77   | —        | Unverified
7  | ACGAN                             | FID    | 35.47   | —        | Unverified
8  | DenseFlow-74-10                   | FID    | 34.9    | —        | Unverified
9  | NVAE w/ flow                      | FID    | 32.53   | —        | Unverified
10 | QSNGAN                            | FID    | 31.97   | —        | Unverified
#  | Model                           | Metric | Claimed | Verified | Status
1  | GLIDE + CLS                     | FID    | 30.87   | —        | Unverified
2  | GLIDE + CLIP                    | FID    | 30.46   | —        | Unverified
3  | GLIDE + CLS-FREE                | FID    | 29.22   | —        | Unverified
4  | GLIDE + CLIP + CLS + CLS-FREE   | FID    | 29.18   | —        | Unverified
5  | PGMGAN                          | FID    | 21.73   | —        | Unverified
6  | CLR-GAN                         | FID    | 20.27   | —        | Unverified
7  | FM                              | FID    | 14.45   | —        | Unverified
8  | CT (Direct Generation, NFE=1)   | FID    | 13      | —        | Unverified
9  | CT (Direct Generation, NFE=2)   | FID    | 11.1    | —        | Unverified
10 | GLIDE + CLS                     | KID    | 7.95    | —        | Unverified
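Nearly all entries above are ranked by FID (Fréchet Inception Distance), where lower is better. FID fits a Gaussian to feature embeddings of real and generated images and measures the Fréchet distance between the two Gaussians: $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$. A minimal sketch of that formula follows; in practice the features come from an Inception-v3 network, while here random vectors stand in for them purely to exercise the computation:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fit to two (n, d) feature arrays.
    Real FID pipelines use Inception-v3 pool activations as the features."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Stand-in "features": matching distributions give a small FID,
# a shifted distribution gives a large one.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2000, 8))
fake_good = rng.normal(0.0, 1.0, size=(2000, 8))  # same distribution
fake_bad = rng.normal(3.0, 1.0, size=(2000, 8))   # shifted mean
```

KID (Kernel Inception Distance, the metric of the final entry) replaces the Gaussian assumption with an unbiased polynomial-kernel MMD estimate over the same features.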