SOTAVerified

Image Generation

Image generation (synthesis) is the task of creating new images that resemble those in a training dataset.

  • Unconditional generation samples images $y$ directly from the learned data distribution, i.e. $p(y)$.
  • Conditional image generation (a subtask) samples images given a conditioning input $x$, such as a class label, i.e. $p(y|x)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
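The distinction above can be illustrated with a toy sketch (the tiny random "dataset" and the two sampling helpers here are purely illustrative, not part of any benchmark): unconditional sampling draws from the pool of all images, while conditional sampling restricts the pool to images with a given label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 100 tiny 4x4 grayscale images with binary class labels.
images = rng.random((100, 4, 4))
labels = rng.integers(0, 2, size=100)

def sample_unconditional(n):
    """Approximate sampling from p(y): draw any image, labels ignored."""
    idx = rng.integers(0, len(images), size=n)
    return images[idx] + 0.01 * rng.standard_normal((n, 4, 4))

def sample_conditional(n, x):
    """Approximate sampling from p(y|x): draw only from images with label x."""
    pool = images[labels == x]
    idx = rng.integers(0, len(pool), size=n)
    return pool[idx] + 0.01 * rng.standard_normal((n, 4, 4))
```

A real generative model replaces the lookup-plus-noise with a learned sampler, but the conditioning structure is the same: the conditional sampler receives $x$ as an extra input.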

(Image credit: StyleGAN)

Papers

Showing 2751–2775 of 6689 papers

| Title | Status | Hype |
|---|---|---|
| Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models | Code | 3 |
| ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation | Code | 1 |
| ChatScratch: An AI-Augmented System Toward Autonomous Visual Programming Learning for Children Aged 6-12 | — | 0 |
| Noise Map Guidance: Inversion with Spatial Context for Real Image Editing | Code | 1 |
| Text2Street: Controllable Text-to-image Generation for Street Views | — | 0 |
| Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! | Code | 0 |
| FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution | — | 0 |
| QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning | Code | 2 |
| Training-Free Consistent Text-to-Image Generation | Code | 2 |
| Do Diffusion Models Learn Semantically Meaningful and Efficient Representations? | — | 0 |
| InstanceDiffusion: Instance-level Control for Image Generation | Code | 4 |
| IGUANe: a 3D generalizable CycleGAN for multicenter harmonization of brain MR images | Code | 1 |
| M^3Face: A Unified Multi-Modal Multilingual Framework for Human Face Generation and Editing | — | 0 |
| DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing | Code | 4 |
| Separable Multi-Concept Erasure from Diffusion Models | — | 0 |
| Diffusion Cross-domain Recommendation | Code | 1 |
| Risk-Sensitive Diffusion: Robustly Optimizing Diffusion Models with Noisy Samples | — | 0 |
| Variational Quantum Circuits Enhanced Generative Adversarial Network | — | 0 |
| Mobile Fitting Room: On-device Virtual Try-on via Diffusion Models | — | 0 |
| Neural Language of Thought Models | — | 0 |
| On the Multi-modal Vulnerability of Diffusion Models | Code | 1 |
| Can Shape-Infused Joint Embeddings Improve Image-Conditioned 3D Diffusion? | — | 0 |
| Can MLLMs Perform Text-to-Image In-Context Learning? | Code | 1 |
| Cross-view Masked Diffusion Transformers for Person Image Synthesis | Code | 2 |
| Unconditional Latent Diffusion Models Memorize Patient Imaging Data: Implications for Openly Sharing Synthetic Data | Code | 0 |
Page 111 of 268

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Improved DDPM | FID | 12.3 | — | Unverified |
| 2 | ADM | FID | 11.84 | — | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | — | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | — | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | — | Unverified |
| 6 | MaskGIT | FID | 6.18 | — | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | — | Unverified |
| 8 | CDM | FID | 4.88 | — | Unverified |
| 9 | ADM-G | FID | 4.59 | — | Unverified |
| 10 | RIN | FID | 4.51 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PresGAN | FID | 52.2 | — | Unverified |
| 2 | RESFLOW | FID | 48.29 | — | Unverified |
| 3 | Residual Flow | FID | 46.37 | — | Unverified |
| 4 | GLF+perceptual loss (ours) | FID | 44.6 | — | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | — | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | — | Unverified |
| 7 | ACGAN | FID | 35.47 | — | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | — | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | — | Unverified |
| 10 | QSNGAN | FID | 31.97 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GLIDE + CLS | FID | 30.87 | — | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | — | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | — | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | — | Unverified |
| 5 | PGMGAN | FID | 21.73 | — | Unverified |
| 6 | CLR-GAN | FID | 20.27 | — | Unverified |
| 7 | FM | FID | 14.45 | — | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | — | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | — | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | — | Unverified |
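Most leaderboards above rank models by FID (Fréchet Inception Distance, lower is better), which fits a Gaussian to Inception features of real and generated images and measures the Fréchet distance $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1 + \Sigma_2 - 2(\Sigma_1\Sigma_2)^{1/2})$ between the two fits. A minimal sketch on precomputed feature arrays (the `fid` helper below is illustrative, not the reference implementation, which uses Inception-v3 activations):

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussians fit to two (n_samples, dim) feature sets."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    # Tr((S1 S2)^{1/2}) equals the sum of square roots of the eigenvalues
    # of S1 @ S2, which are real and non-negative for PSD covariances.
    eig = np.linalg.eigvals(s1 @ s2)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0, None)).sum()
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2) - 2 * tr_sqrt)
```

Identical feature sets give an FID near zero, and the score grows as the two feature distributions drift apart; reported numbers additionally depend on the feature extractor and the number of samples, so scores are only comparable under matched evaluation protocols.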