SOTAVerified

Image Generation

Image generation (synthesis) is the task of generating new images that match the distribution of an existing dataset.

  • Unconditional generation refers to sampling images directly from the learned data distribution, i.e. $p(y)$, where $y$ denotes an image.
  • Conditional image generation (subtask) refers to sampling images conditioned on side information such as a class label $x$, i.e. $p(y|x)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
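The distinction above can be made concrete with a toy model. The sketch below is purely illustrative (the 1-D "dataset", the two-mode structure, and the function names are my own, not from any leaderboard entry): an unconditional sampler draws from $p(y)$ fitted over the whole dataset, while a conditional sampler draws from $p(y|x)$ for a requested label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 1-D samples y with binary labels x (purely illustrative).
x = rng.integers(0, 2, size=1000)                # labels
y = rng.normal(loc=np.where(x == 0, -2.0, 2.0))  # one mode per label

def sample_unconditional(n):
    """Model p(y): a mixture fitted per label, mixed by empirical label frequency."""
    pi = np.bincount(x) / len(x)
    labels = rng.choice(2, size=n, p=pi)
    means = np.array([y[x == 0].mean(), y[x == 1].mean()])
    return rng.normal(loc=means[labels], scale=1.0)

def sample_conditional(n, label):
    """Model p(y | x): sample only from the mode of the requested label."""
    return rng.normal(loc=y[x == label].mean(), scale=1.0, size=n)

uncond = sample_unconditional(5000)       # spreads over both modes
cond = sample_conditional(5000, label=1)  # concentrates near the label-1 mode
```

The unconditional draws cover both modes of the data, so their spread is much larger than the conditional draws, which stay near the mean of the requested class.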

(Image credit: StyleGAN)

Papers

Showing 651–700 of 6689 papers

| Title | Status | Hype |
| --- | --- | --- |
| DiffIR: Efficient Diffusion Model for Image Restoration | Code | 2 |
| Marrying Autoregressive Transformer and Diffusion with Multi-Reference Autoregression | Code | 2 |
| aMUSEd: An Open MUSE Reproduction | Code | 2 |
| MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing | Code | 2 |
| From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation | Code | 2 |
| GAN Compression: Efficient Architectures for Interactive Conditional GANs | Code | 2 |
| Conditional Image Synthesis with Diffusion Models: A Survey | Code | 2 |
| MasterWeaver: Taming Editability and Face Identity for Personalized Text-to-Image Generation | Code | 2 |
| FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis | Code | 2 |
| DiffiT: Diffusion Vision Transformers for Image Generation | Code | 2 |
| MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning | Code | 2 |
| Boosting Latent Diffusion with Flow Matching | Code | 2 |
| Hybrid Fourier Score Distillation for Efficient One Image to 3D Object Generation | Code | 2 |
| MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing | Code | 2 |
| Fréchet Video Motion Distance: A Metric for Evaluating Motion Consistency in Videos | Code | 2 |
| Bayesian Flow Networks | Code | 2 |
| Mixture of Diffusers for scene composition and high resolution image generation | Code | 2 |
| GAN Prior Embedded Network for Blind Face Restoration in the Wild | Code | 2 |
| Differential Diffusion: Giving Each Pixel Its Strength | Code | 2 |
| Blended Latent Diffusion | Code | 2 |
| Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens | Code | 2 |
| Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2 |
| FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner | Code | 2 |
| Flux Already Knows -- Activating Subject-Driven Image Generation without Training | Code | 2 |
| Flow Matching in Latent Space | Code | 2 |
| Differentiable Augmentation for Data-Efficient GAN Training | Code | 2 |
| Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching | Code | 2 |
| Differentially Private Synthetic Data via APIs 3: Using Simulators Instead of Foundation Model | Code | 2 |
| Differentiable Image Parameterizations | Code | 2 |
| DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing | Code | 2 |
| GANSpace: Discovering Interpretable GAN Controls | Code | 2 |
| Muse: Text-To-Image Generation via Masked Generative Transformers | Code | 2 |
| DreamDiffusion: Generating High-Quality Images from Brain EEG Signals | Code | 2 |
| MVControl: Adding Conditional Control to Multi-view Diffusion for Controllable Text-to-3D Generation | Code | 2 |
| Flow-Anchored Consistency Models | Code | 2 |
| DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation | Code | 2 |
| FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching | Code | 2 |
| NoiseCollage: A Layout-Aware Text-to-Image Diffusion Model Based on Noise Cropping and Merging | Code | 2 |
| A Style-Based Generator Architecture for Generative Adversarial Networks | Code | 2 |
| No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves | Code | 2 |
| FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction | Code | 2 |
| Flow-Guided Diffusion for Video Inpainting | Code | 2 |
| Fixed Point Diffusion Models | Code | 2 |
| Accelerating Text-to-Image Editing via Cache-Enabled Sparse Diffusion Inference | Code | 2 |
| A Novel Sampling Scheme for Text- and Image-Conditional Image Synthesis in Quantized Latent Spaces | Code | 2 |
| Control-A-Video: Controllable Text-to-Video Diffusion Models with Motion Prior and Reward Feedback Learning | Code | 2 |
| Fast ODE-based Sampling for Diffusion Models in Around 5 Steps | Code | 2 |
| Financial Fine-tuning a Large Time Series Model | Code | 2 |
| Causal Diffusion Transformers for Generative Modeling | Code | 2 |
| BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities | Code | 2 |
Page 14 of 134

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Improved DDPM | FID | 12.3 | | Unverified |
| 2 | ADM | FID | 11.84 | | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | | Unverified |
| 6 | MaskGIT | FID | 6.18 | | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | | Unverified |
| 8 | CDM | FID | 4.88 | | Unverified |
| 9 | ADM-G | FID | 4.59 | | Unverified |
| 10 | RIN | FID | 4.51 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PresGAN | FID | 52.2 | | Unverified |
| 2 | RESFLOW | FID | 48.29 | | Unverified |
| 3 | Residual Flow | FID | 46.37 | | Unverified |
| 4 | GLF+perceptual loss (ours) | FID | 44.6 | | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | | Unverified |
| 7 | ACGAN | FID | 35.47 | | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | | Unverified |
| 10 | QSNGAN | FID | 31.97 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GLIDE + CLS | FID | 30.87 | | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | | Unverified |
| 5 | PGMGAN | FID | 21.73 | | Unverified |
| 6 | CLR-GAN | FID | 20.27 | | Unverified |
| 7 | FM | FID | 14.45 | | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | | Unverified |
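The leaderboards above rank models almost entirely by Fréchet Inception Distance (FID), where lower is better. FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. As a minimal sketch (pure NumPy; the helper names are my own, and real FID pipelines extract the feature statistics with an Inception network first), the closed-form distance between Gaussians $(\mu_1, \Sigma_1)$ and $(\mu_2, \Sigma_2)$ is $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$:

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # clamp tiny negative eigenvalues from round-off
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians (the Gaussian part of FID).

    Tr((S1 S2)^{1/2}) is computed as Tr((S2^{1/2} S1 S2^{1/2})^{1/2}),
    which keeps the argument symmetric PSD so eigh applies.
    """
    s2_half = sqrtm_psd(sigma2)
    covmean = sqrtm_psd(s2_half @ sigma1 @ s2_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Equal covariances: the distance reduces to the squared mean gap (3^2 + 4^2).
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.array([3.0, 4.0]), np.eye(2)
d = frechet_distance(mu_a, cov_a, mu_b, cov_b)
```

With identical covariances the trace term vanishes, which makes the worked example easy to check by hand; reported FID numbers additionally depend on the feature extractor and sample count, so scores are only comparable under matching evaluation protocols.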