SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images that match the distribution of an existing dataset.

  • Unconditional generation draws samples from the data distribution alone, with no conditioning signal, i.e. $p(x)$.
  • Conditional image generation (a subtask) draws samples conditioned on additional information such as a class label $y$, i.e. $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
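The distinction between sampling from $p(x)$ and from $p(x|y)$ can be made concrete with a toy generative model. This is a minimal sketch, not an actual image model: it uses a 1-D two-class Gaussian mixture as a stand-in "dataset", where unconditional sampling first draws a class from the prior and conditional sampling fixes the class label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset" distribution: a mixture of two Gaussian classes.
# Class y=0 is centered at -2, class y=1 at +2 (1-D "images" for brevity).
MEANS = {0: -2.0, 1: 2.0}
PRIORS = {0: 0.5, 1: 0.5}

def sample_unconditional(n):
    """Draw x ~ p(x): first draw a class y from the prior, then sample x|y."""
    ys = rng.choice(list(PRIORS), size=n, p=list(PRIORS.values()))
    return rng.normal([MEANS[y] for y in ys], 1.0)

def sample_conditional(n, y):
    """Draw x ~ p(x | y): sample only from the class-y component."""
    return rng.normal(MEANS[y], 1.0, size=n)

uncond = sample_unconditional(10_000)   # bimodal: both classes mixed
cond = sample_conditional(10_000, y=1)  # unimodal: centered near +2
```

Real unconditional models (e.g. the DDPM and BigGAN variants in the leaderboards below) play the role of `sample_unconditional`, while label- or text-conditioned models correspond to `sample_conditional`.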

(Image credit: StyleGAN)

Papers

Showing 1901–1925 of 6689 papers

Title | Status | Hype
GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation | Code | 0
∞-Brush: Controllable Large Image Synthesis with Diffusion Models in Infinite Dimensions | Code | 0
CoCoG-2: Controllable generation of visual stimuli for understanding human concept representation | Code | 0
Are handcrafted filters helpful for attributing AI-generated images? | – | 0
Time Series Generative Learning with Application to Brain Imaging Analysis | – | 0
Panoptic Segmentation of Mammograms with Text-To-Image Diffusion Model | – | 0
Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations | Code | 1
Controllable and Efficient Multi-Class Pathology Nuclei Data Augmentation using Text-Conditioned Diffusion Models | – | 0
SUSTechGAN: Image Generation for Object Detection in Adverse Conditions of Autonomous Driving | Code | 0
Safe-SD: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking | – | 0
URCDM: Ultra-Resolution Image Synthesis in Histopathology | Code | 0
Image Inpainting Models are Effective Tools for Instruction-guided Image Editing | – | 0
Training-free Composite Scene Generation for Layout-to-Image Synthesis | Code | 1
Latent Diffusion for Medical Image Segmentation: End to end learning for fast sampling and accuracy | Code | 1
Promptable Counterfactual Diffusion Model for Unified Brain Tumor Segmentation and Generation with MRIs | Code | 0
From Principles to Practices: Lessons Learned from Applying Partnership on AI's (PAI) Synthetic Media Framework to 11 Use Cases | – | 0
GeoGuide: Geometric guidance of diffusion models | Code | 0
I2AM: Interpreting Image-to-Image Latent Diffusion Models via Attribution Maps | – | 0
Voltage-Controlled Magnetoelectric Devices for Neuromorphic Diffusion Process | – | 0
The Fabrication of Reality and Fantasy: Scene Generation with LLM-Assisted Prompt Interpretation | – | 0
Zero-shot Text-guided Infinite Image Synthesis with LLM guidance | – | 0
IMAGDressing-v1: Customizable Virtual Dressing | Code | 5
Towards Understanding Unsafe Video Generation | Code | 0
How Control Information Influences Multilingual Text Image Generation and Editing? | Code | 0
Scaling Diffusion Transformers to 16 Billion Parameters | Code | 3
Page 77 of 268

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | – | Unverified
2 | ADM | FID | 11.84 | – | Unverified
3 | BigGAN-deep | FID | 8.1 | – | Unverified
4 | Polarity-BigGAN | FID | 6.82 | – | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | – | Unverified
6 | MaskGIT | FID | 6.18 | – | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | – | Unverified
8 | CDM | FID | 4.88 | – | Unverified
9 | ADM-G | FID | 4.59 | – | Unverified
10 | RIN | FID | 4.51 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | – | Unverified
2 | RESFLOW | FID | 48.29 | – | Unverified
3 | Residual Flow | FID | 46.37 | – | Unverified
4 | GLF + perceptual loss (ours) | FID | 44.6 | – | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | – | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | – | Unverified
7 | ACGAN | FID | 35.47 | – | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | – | Unverified
9 | NVAE w/ flow | FID | 32.53 | – | Unverified
10 | QSNGAN | FID | 31.97 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | – | Unverified
2 | GLIDE + CLIP | FID | 30.46 | – | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | – | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | – | Unverified
5 | PGMGAN | FID | 21.73 | – | Unverified
6 | CLR-GAN | FID | 20.27 | – | Unverified
7 | FM | FID | 14.45 | – | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | – | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | – | Unverified
10 | GLIDE + CLS | KID | 7.95 | – | Unverified
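The leaderboards above rank models almost entirely by FID (Fréchet Inception Distance), which compares the mean and covariance of feature embeddings of real and generated images: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^½). A minimal sketch of that formula follows; it assumes the Inception-v3 features have already been extracted and simply takes two (n_samples, dim) arrays, with random Gaussian features standing in for real embeddings.

```python
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    FID = ||mu_a - mu_b||^2 + Tr(Sig_a + Sig_b - 2 (Sig_a Sig_b)^(1/2)).
    In practice feats_* are Inception-v3 pool features; here they are
    arbitrary (n_samples, dim) arrays.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sig_a = np.cov(feats_a, rowvar=False)
    sig_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(sig_a @ sig_b)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_a - mu_b
    return diff @ diff + np.trace(sig_a + sig_b - 2.0 * covmean)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(5_000, 8))
b = rng.normal(0.0, 1.0, size=(5_000, 8))  # same distribution -> FID near 0
c = rng.normal(3.0, 1.0, size=(5_000, 8))  # shifted distribution -> large FID
print(fid(a, b) < fid(a, c))  # → True
```

Lower is better, which is why the first table's top entries (FID ≈ 4–5) outrank the ~12-FID entries above them; KID (kernel inception distance), used by the last row, is a related kernel-based metric on the same features.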