SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images that resemble the distribution of an existing dataset.

  • Unconditional generation refers to sampling images from the learned data distribution without any conditioning signal, i.e. $p(x)$.
  • Conditional image generation (subtask) refers to sampling images conditioned on additional information such as a class label $y$, i.e. $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
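The distinction between the two settings can be sketched with a toy sampler. This is a hypothetical illustration (the dataset and function names are invented, not taken from any model on this page): the "images" are stand-in 2-vectors stored under their class labels.

```python
import random

# Hypothetical toy dataset: each "image" is a 2-vector stored under its
# class label y. A real generative model learns these distributions
# rather than sampling stored examples.
DATASET = {
    "cat": [[0.1, 0.2], [0.15, 0.25]],
    "dog": [[0.8, 0.9], [0.85, 0.95]],
}

def sample_unconditional():
    """Draw from p(x): any image, regardless of its label."""
    all_images = [x for images in DATASET.values() for x in images]
    return random.choice(all_images)

def sample_conditional(label):
    """Draw from p(x | y): only images whose label matches y."""
    return random.choice(DATASET[label])
```

An unconditional model only needs `sample_unconditional`; a conditional model additionally takes the label (or a text prompt, layout, etc.) as input.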

(Image credit: StyleGAN)

Papers

Showing 3801–3850 of 6689 papers

Title
  • An Ordinary Differential Equation Sampler with Stochastic Start for Diffusion Bridge Models
  • UMFuse: Unified Multi View Fusion for Human Editing applications
  • Unbiased General Annotated Dataset Generation
  • When Worse is Better: Navigating the compression-generation tradeoff in visual tokenization
  • Uncertainty Quantification in Deep Learning for Safer Neuroimage Enhancement
  • not-so-BigGAN: Generating High-Fidelity Images on Small Compute with Wavelet-based Super-Resolution
  • Novel Deep Learning Approach to Derive Cytokeratin Expression and Epithelium Segmentation from DAPI
  • 3D-free meets 3D priors: Novel View Synthesis from a Single Image with Pretrained Diffusion Guidance
  • An Ordinal Diffusion Model for Generating Medical Images with Different Severity Levels
  • NTIRE 2025 challenge on Text to Image Generation Model Quality Assessment
  • NukesFormers: Unpaired Hyperspectral Image Generation with Non-Uniform Domain Alignment
  • Anonymization Prompt Learning for Facial Privacy-Preserving Text-to-Image Generation
  • Numerical Pruning for Efficient Autoregressive Models
  • Uncertainty Quantification Using Variational Inference for Biomedical Image Segmentation
  • Unconditional Scene Graph Generation
  • NVS-MonoDepth: Improving Monocular Depth Prediction with Novel View Synthesis
  • OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes
  • ObjBlur: A Curriculum Learning Approach With Progressive Object-Level Blurring for Improved Layout-to-Image Generation
  • Object-Attribute Binding in Text-to-Image Generation: Evaluation and Control
  • Object-Centric Image Generation from Layouts
  • Object-Centric Image Generation with Factored Depths, Locations, and Appearances
  • Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck
  • Object-Driven One-Shot Fine-tuning of Text-to-Image Diffusion with Prototypical Embedding
  • Object-level Visual Prompts for Compositional Image Generation
  • Obj-GloVe: Scene-Based Contextual Object Embedding
  • Uncovering Regional Defaults from Photorealistic Forests in Text-to-Image Generation with DALL-E 2
  • Not Just Text: Uncovering Vision Modality Typographic Threats in Image Generation Models
  • OG-VLA: 3D-Aware Vision Language Action Model via Orthographic Image Generation
  • A Noise is Worth Diffusion Guidance
  • Omni^2: Unifying Omnidirectional Image Generation and Editing in an Omni Model
  • OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal Instruction
  • OmniControlNet: Dual-stage Integration for Conditional Image Generation
  • Where is the disease? Semi-supervised pseudo-normality synthesis from an abnormal image
  • An Introduction to Image Synthesis with Generative Adversarial Nets
  • An Interpretable Generative Model for Handwritten Digit Image Synthesis
  • OmniPrism: Learning Disentangled Visual Concept for Image Generation
  • OmniSSR: Zero-shot Omnidirectional Image Super-Resolution using Stable Diffusion Model
  • OMR-Diffusion: Optimizing Multi-Round Enhanced Training in Diffusion Models for Improved Intent Understanding
  • An Intermediate Fusion ViT Enables Efficient Text-Image Alignment in Diffusion Models
  • On Computational Limits and Provably Efficient Criteria of Visual Autoregressive Models: A Fine-Grained Complexity Analysis
  • On Conditioning GANs to Hierarchical Ontologies
  • On Conditioning the Input Noise for Controlled Image Generation with Diffusion Models
  • An Initial Exploration of Default Images in Text-to-Image Generation
  • One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion
  • OneActor: Consistent Character Generation via Cluster-Conditioned Guidance
  • One Communication Round is All It Needs for Federated Fine-Tuning Foundation Models
  • One-dimensional Adapter to Rule Them All: Concepts Diffusion Models and Erasing Applications
  • OneGAN: Simultaneous Unsupervised Learning of Conditional Image Generation, Foreground Segmentation, and Fine-Grained Clustering
  • An Improved Method for Personalizing Diffusion Models

Benchmark Results

 #   Model                                          Metric   Claimed   Verified   Status
 1   Improved DDPM                                  FID      12.3      -          Unverified
 2   ADM                                            FID      11.84     -          Unverified
 3   BigGAN-deep                                    FID      8.1       -          Unverified
 4   Polarity-BigGAN                                FID      6.82      -          Unverified
 5   VQGAN+Transformer (k=mixed, p=1.0, a=0.005)    FID      6.59      -          Unverified
 6   MaskGIT                                        FID      6.18      -          Unverified
 7   VQGAN+Transformer (k=600, p=1.0, a=0.05)       FID      5.2       -          Unverified
 8   CDM                                            FID      4.88      -          Unverified
 9   ADM-G                                          FID      4.59      -          Unverified
10   RIN                                            FID      4.51      -          Unverified

 #   Model                                          Metric   Claimed   Verified   Status
 1   PresGAN                                        FID      52.2      -          Unverified
 2   RESFLOW                                        FID      48.29     -          Unverified
 3   Residual Flow                                  FID      46.37     -          Unverified
 4   GLF+perceptual loss (ours)                     FID      44.6      -          Unverified
 5   ProdPoly no activation functions               FID      40.45     -          Unverified
 6   ProdPoly no activation functions               FID      36.77     -          Unverified
 7   ACGAN                                          FID      35.47     -          Unverified
 8   DenseFlow-74-10                                FID      34.9      -          Unverified
 9   NVAE w/ flow                                   FID      32.53     -          Unverified
10   QSNGAN                                         FID      31.97     -          Unverified

 #   Model                                          Metric   Claimed   Verified   Status
 1   GLIDE + CLS                                    FID      30.87     -          Unverified
 2   GLIDE + CLIP                                   FID      30.46     -          Unverified
 3   GLIDE + CLS-FREE                               FID      29.22     -          Unverified
 4   GLIDE + CLIP + CLS + CLS-FREE                  FID      29.18     -          Unverified
 5   PGMGAN                                         FID      21.73     -          Unverified
 6   CLR-GAN                                        FID      20.27     -          Unverified
 7   FM                                             FID      14.45     -          Unverified
 8   CT (Direct Generation, NFE=1)                  FID      13        -          Unverified
 9   CT (Direct Generation, NFE=2)                  FID      11.1      -          Unverified
10   GLIDE +CLS                                     KID      7.95      -          Unverified
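Most rows above report FID (Fréchet Inception Distance), which is the Fréchet distance between two Gaussians fitted to Inception-v3 features of real and generated images; lower is better. As a minimal sketch of the underlying formula (reduced to one dimension for illustration; the leaderboard values use the multivariate form on 2048-dimensional features):

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two 1-D Gaussians:
    d^2 = (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2).
    Identical distributions give 0; any mismatch in mean or
    variance increases the score."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

For example, two standard Gaussians give a distance of 0, while shifting one mean by 1 gives a distance of 1; the multivariate version replaces `sqrt(var1*var2)` with a matrix square root of the covariance product.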