SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images that resemble samples from an existing dataset.

  • Unconditional generation refers to sampling images directly from the learned data distribution, i.e. $p(x)$.
  • Conditional image generation (subtask) refers to sampling images conditioned on side information such as a class label $y$, i.e. $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
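The distinction between the two settings can be sketched with a toy one-dimensional example, where a two-class mixture of Gaussian blobs stands in for a labelled image dataset (all function names here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unconditional(n):
    """Draw from p(x): ignore labels and sample from the full data mixture."""
    labels = rng.integers(0, 2, size=n)          # latent class, marginalized out
    return rng.normal(loc=labels * 4.0, scale=1.0, size=n)

def sample_conditional(n, label):
    """Draw from p(x | y): sample only from the component for the given label."""
    return rng.normal(loc=label * 4.0, scale=1.0, size=n)

uncond = sample_unconditional(10_000)  # covers both modes (around 0.0 and 4.0)
cond = sample_conditional(10_000, 1)   # concentrated on a single mode near 4.0
```

The unconditional sampler marginalizes over the label, so its output spreads across all modes of the data; the conditional sampler collapses onto the mode selected by the label.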

(Image credit: StyleGAN)

Papers

Showing 3851–3875 of 6689 papers

| Title | Status | Hype |
|---|---|---|
| One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation | | 0 |
| An Improved Composite Functional Gradient Learning by Wasserstein Regularization for Generative adversarial networks | | 0 |
| Understanding Attention for Vision-and-Language Tasks | | 0 |
| One-Shot Generalization in Deep Generative Models | | 0 |
| Understanding Diffusion Models: A Unified Perspective | | 0 |
| Understanding Diffusion Models by Feynman's Path Integral | | 0 |
| One-step Diffusion Models with f-Divergence Distribution Matching | | 0 |
| An Impartial Transformer for Story Visualization | | 0 |
| 3D MedDiffusion: A 3D Medical Diffusion Model for Controllable and High-quality Medical Image Generation | | 0 |
| One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models | | 0 |
| On Fairness of Unified Multimodal Large Language Model for Image Generation | | 0 |
| Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation | | 0 |
| On Geometrical Properties of Text Token Embeddings for Strong Semantic Binding in Text-to-Image Generation | | 0 |
| On Improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models | | 0 |
| Understanding Subjectivity through the Lens of Motivational Context in Model-Generated Image Satisfaction | | 0 |
| Only-Style: Stylistic Consistency in Image Generation without Content Leakage | | 0 |
| Understanding the Limitations of Diffusion Concept Algebra Through Food | | 0 |
| Anime Style Space Exploration Using Metric Learning and Generative Adversarial Networks | | 0 |
| On Suppressing Range of Adaptive Stepsizes of Adam to Improve Generalisation Performance | | 0 |
| On Synthetic Texture Datasets: Challenges, Creation, and Curation | | 0 |
| On the Adversarial Robustness of Generative Autoencoders in the Latent Space | | 0 |
| On the Design of Diffusion-based Neural Speech Codecs | | 0 |
| On The Distribution of Penultimate Activations of Classification Networks | | 0 |
| Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP | | 0 |
| AniMer: Animal Pose and Shape Estimation Using Family Aware Transformer | | 0 |
Page 155 of 268

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Improved DDPM | FID | 12.3 | | Unverified |
| 2 | ADM | FID | 11.84 | | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | | Unverified |
| 6 | MaskGIT | FID | 6.18 | | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | | Unverified |
| 8 | CDM | FID | 4.88 | | Unverified |
| 9 | ADM-G | FID | 4.59 | | Unverified |
| 10 | RIN | FID | 4.51 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PresGAN | FID | 52.2 | | Unverified |
| 2 | RESFLOW | FID | 48.29 | | Unverified |
| 3 | Residual Flow | FID | 46.37 | | Unverified |
| 4 | GLF+perceptual loss (ours) | FID | 44.6 | | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | | Unverified |
| 7 | ACGAN | FID | 35.47 | | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | | Unverified |
| 10 | QSNGAN | FID | 31.97 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GLIDE + CLS | FID | 30.87 | | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | | Unverified |
| 5 | PGMGAN | FID | 21.73 | | Unverified |
| 6 | CLR-GAN | FID | 20.27 | | Unverified |
| 7 | FM | FID | 14.45 | | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | | Unverified |
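The leaderboards above rank models almost exclusively by FID (Fréchet Inception Distance), which fits Gaussians to Inception-feature statistics of real and generated images and measures the Fréchet distance between them: $\|\mu_1 - \mu_2\|^2 + \mathrm{Tr}\big(\Sigma_1 + \Sigma_2 - 2(\Sigma_1\Sigma_2)^{1/2}\big)$. A minimal NumPy sketch of that formula, with the Inception feature-extraction step omitted:

```python
import numpy as np

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1), N(mu2, sigma2).

    For PSD covariances the eigenvalues of sigma1 @ sigma2 are real and
    non-negative, so Tr((sigma1 sigma2)^{1/2}) is the sum of their roots.
    """
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sum(np.sqrt(np.clip(eigvals.real, 0.0, None)))
    return float(diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * tr_sqrt)
```

In practice the means and covariances are estimated from Inception-v3 activations over tens of thousands of images; lower is better, and identical statistics give a distance of 0, which is why the small differences between leaderboard entries matter.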