
Image Generation

Image Generation (synthesis) is the task of generating new images that resemble the distribution of an existing dataset.

  • Unconditional generation refers to sampling images from the learned data distribution alone, i.e. $p(x)$.
  • Conditional image generation (subtask) refers to generating samples conditioned on side information such as a class label $y$, i.e. $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation, and other types of image generation, refer to the subtasks.
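The distinction above can be illustrated with a toy sketch: treat a 1-D "image" $x$ as drawn from a two-component Gaussian mixture whose component is selected by the class label $y$. All names and parameters below are illustrative, not taken from any benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASS_MEANS = {0: -2.0, 1: 2.0}   # per-class means (assumed for the demo)
CLASS_PRIOR = [0.5, 0.5]          # p(y) (assumed for the demo)

def sample_conditional(y, n):
    """Sample x ~ p(x | y): draw from the component for class y."""
    return rng.normal(CLASS_MEANS[y], 1.0, size=n)

def sample_unconditional(n):
    """Sample x ~ p(x) = sum_y p(y) p(x | y): pick y, then x given y."""
    ys = rng.choice([0, 1], size=n, p=CLASS_PRIOR)
    return np.array([rng.normal(CLASS_MEANS[y], 1.0) for y in ys])

cond = sample_conditional(1, 10_000)    # concentrates near the class-1 mode
uncond = sample_unconditional(10_000)   # mixes both modes
```

Real image generators replace the Gaussian components with a learned model (GAN, VAE, diffusion, etc.), but the same factorization of $p(x)$ into $p(y)\,p(x|y)$ underlies conditional sampling and classifier-free guidance.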

(Image credit: StyleGAN)

Papers

Showing 601–650 of 6689 papers

Title | Status | Hype
TaleCrafter: Interactive Story Visualization with Multiple Characters | Code | 2
Conditional Diffusion Models for Semantic 3D Brain MRI Synthesis | Code | 2
Accelerating Text-to-Image Editing via Cache-Enabled Sparse Diffusion Inference | Code | 2
Generating Images with Multimodal Language Models | Code | 2
Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models | Code | 2
LayoutGPT: Compositional Visual Planning and Generation with Large Language Models | Code | 2
LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models | Code | 2
Control-A-Video: Controllable Text-to-Video Diffusion Models with Motion Prior and Reward Feedback Learning | Code | 2
Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach | Code | 2
ControlVideo: Training-free Controllable Text-to-Video Generation | Code | 2
UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild | Code | 2
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model | Code | 2
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding | Code | 2
FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention | Code | 2
Denoising Diffusion Models for Plug-and-Play Image Restoration | Code | 2
Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion | Code | 2
Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation | Code | 2
MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing | Code | 2
Expressive Text-to-Image Generation with Rich Text | Code | 2
Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction | Code | 2
Diffusion Recommender Model | Code | 2
HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation | Code | 2
Slideflow: Deep Learning for Digital Histopathology with Real-Time Whole-Slide Visualization | Code | 2
GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2
LayoutDiffusion: Controllable Diffusion Model for Layout-to-image Generation | Code | 2
Your Diffusion Model is Secretly a Zero-Shot Classifier | Code | 2
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis | Code | 2
MDTv2: Masked Diffusion Transformer is a Strong Image Synthesizer | Code | 2
SVDiff: Compact Parameter Space for Diffusion Fine-Tuning | Code | 2
DiffIR: Efficient Diffusion Model for Image Restoration | Code | 2
3DGen: Triplane Latent Diffusion for Textured Mesh Generation | Code | 2
Video-P2P: Video Editing with Cross-attention Control | Code | 2
Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes | Code | 2
ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation | Code | 2
UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models | Code | 2
Q-Diffusion: Quantizing Diffusion Models | Code | 2
PFGM++: Unlocking the Potential of Physics-Inspired Generative Models | Code | 2
Geometry-Complete Diffusion for 3D Molecule Generation and Optimization | Code | 2
Generative Diffusion Models on Graphs: Methods and Applications | Code | 2
Mixture of Diffusers for scene composition and high resolution image generation | Code | 2
TEXTure: Text-Guided Texturing of 3D Shapes | Code | 2
GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis | Code | 2
Image Restoration with Mean-Reverting Stochastic Differential Equations | Code | 2
Simple diffusion: End-to-end diffusion for high resolution images | Code | 2
normflows: A PyTorch Package for Normalizing Flows | Code | 2
Muse: Text-To-Image Generation via Masked Generative Transformers | Code | 2
eVAE: Evolutionary Variational Autoencoder | Code | 2
Character-Aware Models Improve Visual Text Rendering | Code | 2
Diffusion Probabilistic Models beat GANs on Medical Images | Code | 2
Page 13 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | – | Unverified
2 | ADM | FID | 11.84 | – | Unverified
3 | BigGAN-deep | FID | 8.1 | – | Unverified
4 | Polarity-BigGAN | FID | 6.82 | – | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | – | Unverified
6 | MaskGIT | FID | 6.18 | – | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | – | Unverified
8 | CDM | FID | 4.88 | – | Unverified
9 | ADM-G | FID | 4.59 | – | Unverified
10 | RIN | FID | 4.51 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | – | Unverified
2 | RESFLOW | FID | 48.29 | – | Unverified
3 | Residual Flow | FID | 46.37 | – | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | – | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | – | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | – | Unverified
7 | ACGAN | FID | 35.47 | – | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | – | Unverified
9 | NVAE w/ flow | FID | 32.53 | – | Unverified
10 | QSNGAN | FID | 31.97 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | – | Unverified
2 | GLIDE + CLIP | FID | 30.46 | – | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | – | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | – | Unverified
5 | PGMGAN | FID | 21.73 | – | Unverified
6 | CLR-GAN | FID | 20.27 | – | Unverified
7 | FM | FID | 14.45 | – | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | – | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | – | Unverified
10 | GLIDE + CLS | KID | 7.95 | – | Unverified
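Nearly all entries above report FID (Fréchet Inception Distance). In practice FID embeds images with an Inception-v3 network and measures the Fréchet distance between Gaussian fits of the real and generated feature sets. The sketch below computes only that distance, on synthetic stand-in features; the arrays and the `matrix_sqrt` helper are illustrative, not a benchmark implementation.

```python
import numpy as np

def matrix_sqrt(m):
    """Matrix square root via eigendecomposition.

    Valid here because a product of two PSD covariance matrices is
    diagonalizable with a nonnegative spectrum; tiny imaginary parts
    from numerical noise are discarded by the caller.
    """
    vals, vecs = np.linalg.eig(m)
    return vecs @ np.diag(np.sqrt(vals.astype(complex))) @ np.linalg.inv(vecs)

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussian fits of two feature sets:
    FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_g = np.cov(feats_gen, rowvar=False)
    tr_sqrt = np.trace(matrix_sqrt(c_r @ c_g)).real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(c_r) + np.trace(c_g) - 2.0 * tr_sqrt)

# Stand-in features; real FID uses Inception-v3 activations instead.
rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, size=(5000, 8))
fake_feats = rng.normal(0.5, 1.0, size=(5000, 8))
score = fid(real_feats, fake_feats)  # grows with the distribution gap
```

Lower is better for both FID and KID; note that scores are only comparable when computed with the same feature extractor, sample count, and dataset, which is one reason the claimed numbers above are grouped per benchmark.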