SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images, typically by learning the distribution of an existing dataset.

  • Unconditional generation refers to sampling images from the learned data distribution without any conditioning signal, i.e. from $p(x)$.
  • Conditional image generation (a subtask) refers to sampling images conditioned on additional information such as a class label $y$, i.e. from $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
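As a toy illustration of the distinction above, the sketch below stands in a one-dimensional Gaussian mixture for an image model: each class label has its own component, sampling from $p(x|y)$ draws from the component for label $y$, and sampling from $p(x)$ first draws a label and then a sample. All names and numbers here are hypothetical; no real generator is involved.

```python
import numpy as np

# Toy stand-in for a generative model: each class label y owns one
# Gaussian component, and an "image" is just a scalar sample.
rng = np.random.default_rng(0)
means = {0: -2.0, 1: 2.0}  # per-class component means (hypothetical)

def sample_conditional(y, n):
    """Sample from p(x | y): draw from the component for label y."""
    return rng.normal(means[y], 1.0, size=n)

def sample_unconditional(n):
    """Sample from p(x) = sum_y p(y) p(x | y): draw a label, then a sample."""
    labels = rng.integers(0, 2, size=n)
    return np.array([rng.normal(means[int(y)], 1.0) for y in labels])

cond = sample_conditional(1, 10_000)    # mean near the class-1 mean, 2.0
uncond = sample_unconditional(10_000)   # mean near 0.0: mix of both classes
```

The same structure carries over to real conditional image models: conditioning restricts sampling to one mode of the data distribution, while unconditional sampling covers all of them.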

(Image credit: StyleGAN)

Papers

Showing 1101–1150 of 6689 papers

| Title | Status | Hype |
| --- | --- | --- |
| EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models | Code | 1 |
| Plug-in Diffusion Model for Sequential Recommendation | Code | 1 |
| New Job, New Gender? Measuring the Social Bias in Image Generation Models | Code | 1 |
| DiG-IN: Diffusion Guidance for Investigating Networks - Uncovering Classifier Differences, Neuron Visualisations and Visual Counterfactual Explanations | Code | 1 |
| Generating Handwritten Mathematical Expressions From Symbol Graphs: An End-to-End Pipeline | Code | 1 |
| ZONE: Zero-Shot Instruction-Guided Local Editing | Code | 1 |
| Bellman Optimal Stepsize Straightening of Flow-Matching Models | Code | 1 |
| Cross Initialization for Personalized Text-to-Image Generation | Code | 1 |
| One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications | Code | 1 |
| SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation | Code | 1 |
| Fréchet Wavelet Distance: A Domain-Agnostic Metric for Image Generation | Code | 1 |
| VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation | Code | 1 |
| Fast Diffusion-Based Counterfactuals for Shortcut Removal and Generation | Code | 1 |
| Diffusion Models With Learned Adaptive Noise | Code | 1 |
| Decoupled Textual Embeddings for Customized Image Generation | Code | 1 |
| Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model | Code | 1 |
| Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model | Code | 1 |
| Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models | Code | 1 |
| DeepCalliFont: Few-shot Chinese Calligraphy Font Synthesis by Integrating Dual-modality Generative Models | Code | 1 |
| Topic-VQ-VAE: Leveraging Latent Codebooks for Flexible Topic-Guided Document Generation | Code | 1 |
| Rich Human Feedback for Text-to-Image Generation | Code | 1 |
| VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation | Code | 1 |
| Clockwork Diffusion: Efficient Generation With Model-Step Distillation | Code | 1 |
| The Lottery Ticket Hypothesis in Denoising: Towards Semantic-Driven Initialization | Code | 1 |
| SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models | Code | 1 |
| Diffusion-based Blind Text Image Super-Resolution | Code | 1 |
| Harnessing LLM to Attack LLM-Guarded Text-to-Image Models | Code | 1 |
| How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation | Code | 1 |
| Diffusion Cocktail: Mixing Domain-Specific Diffusion Models for Diversified Image Generations | Code | 1 |
| Learned representation-guided diffusion models for large-image generation | Code | 1 |
| UIEDP: Underwater Image Enhancement with Diffusion Prior | Code | 1 |
| Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale | Code | 1 |
| Correcting Diffusion Generation through Resampling | Code | 1 |
| Investigating the Design Space of Diffusion Models for Speech Enhancement | Code | 1 |
| TokenCompose: Text-to-Image Diffusion with Token-level Supervision | Code | 1 |
| Diversified in-domain synthesis with efficient fine-tuning for few-shot classification | Code | 1 |
| ViscoNet: Bridging and Harmonizing Visual and Textual Conditioning for ControlNet | Code | 1 |
| BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models | Code | 1 |
| GeNIe: Generative Hard Negative Images Through Diffusion | Code | 1 |
| GIVT: Generative Infinite-Vocabulary Transformers | Code | 1 |
| Fully Spiking Denoising Diffusion Implicit Models | Code | 1 |
| Meta ControlNet: Enhancing Task Adaptation via Meta Learning | Code | 1 |
| Rethinking FID: Towards a Better Evaluation Metric for Image Generation | Code | 1 |
| CAT-DM: Controllable Accelerated Virtual Try-on with Diffusion Model | Code | 1 |
| ElasticDiffusion: Training-free Arbitrary Size Image Generation through Global-Local Content Separation | Code | 1 |
| M^2Chat: Empowering VLM for Multimodal LLM Interleaved Text-Image Generation | Code | 1 |
| When StyleGAN Meets Stable Diffusion: a W_+ Adapter for Personalized Image Generation | Code | 1 |
| SODA: Bottleneck Diffusion Models for Representation Learning | Code | 1 |
| Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models | Code | 1 |
Page 23 of 134

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Improved DDPM | FID | 12.3 | | Unverified |
| 2 | ADM | FID | 11.84 | | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | | Unverified |
| 6 | MaskGIT | FID | 6.18 | | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | | Unverified |
| 8 | CDM | FID | 4.88 | | Unverified |
| 9 | ADM-G | FID | 4.59 | | Unverified |
| 10 | RIN | FID | 4.51 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PresGAN | FID | 52.2 | | Unverified |
| 2 | RESFLOW | FID | 48.29 | | Unverified |
| 3 | Residual Flow | FID | 46.37 | | Unverified |
| 4 | GLF+perceptual loss (ours) | FID | 44.6 | | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | | Unverified |
| 7 | ACGAN | FID | 35.47 | | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | | Unverified |
| 10 | QSNGAN | FID | 31.97 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GLIDE + CLS | FID | 30.87 | | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | | Unverified |
| 5 | PGMGAN | FID | 21.73 | | Unverified |
| 6 | CLR-GAN | FID | 20.27 | | Unverified |
| 7 | FM | FID | 14.45 | | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | | Unverified |
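The leaderboards above report FID (Fréchet Inception Distance), which compares the feature distributions of real and generated images: $\mathrm{FID} = \lVert\mu_r - \mu_g\rVert^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big)$. The sketch below computes this formula with NumPy only, assuming the Inception feature vectors are already extracted (running a pretrained Inception network is out of scope here); it is an illustration, not a drop-in replacement for the reference implementations used by these leaderboards.

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_* are (n_samples, dim) arrays of precomputed features
    (assumed given; real FID uses Inception-v3 pool features).
    """
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    s_r = np.cov(feats_real, rowvar=False)
    s_g = np.cov(feats_gen, rowvar=False)
    # Tr((S_r S_g)^{1/2}) equals the sum of square roots of the
    # eigenvalues of S_r @ S_g, which are real and non-negative for
    # positive semi-definite inputs (clip guards numerical noise).
    eigvals = np.linalg.eigvals(s_r @ s_g)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(s_r) + np.trace(s_g) - 2 * covmean_trace)

# Sanity checks on synthetic "features": identical distributions give
# FID near 0; shifting every dimension by 3 gives FID near 8 * 3^2 = 72.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(5000, 8))
b = rng.normal(0.0, 1.0, size=(5000, 8))
c = rng.normal(3.0, 1.0, size=(5000, 8))
```

Note that reported FID values are only comparable when the feature extractor, sample count, and evaluation protocol match, which is part of what the "Rethinking FID" paper listed above examines.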