SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images, typically by learning the distribution of an existing dataset.

  • Unconditional generation refers to sampling images from the learned data distribution without any side information, i.e. from $p(x)$.
  • Conditional image generation (a subtask) refers to sampling images conditioned on side information such as a class label $y$, i.e. from $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation, and other types of image generation, refer to the subtasks.
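The distinction between the two settings can be sketched with a toy one-pixel "image" model (the class means, prior, and function names below are hypothetical, chosen only for illustration): conditional generation draws from $p(x|y)$ for a requested label, while unconditional generation first draws a label from the prior $p(y)$ and then a sample, i.e. it samples the mixture $p(x) = \sum_y p(y)\,p(x|y)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: each class y produces pixels from its own Gaussian.
CLASS_MEANS = {0: -2.0, 1: 2.0}
CLASS_PRIOR = {0: 0.5, 1: 0.5}

def sample_conditional(y, n=1):
    """Draw from p(x | y): generate samples of a requested class."""
    return rng.normal(CLASS_MEANS[y], 1.0, size=n)

def sample_unconditional(n=1):
    """Draw from p(x) = sum_y p(y) p(x | y): no label is supplied,
    so a label is first drawn from the prior."""
    labels = rng.choice(list(CLASS_PRIOR), size=n, p=list(CLASS_PRIOR.values()))
    return np.array([sample_conditional(y, 1)[0] for y in labels])

# Conditional samples cluster around the requested class mean (≈ 2 here),
# while unconditional samples mix both modes (mean ≈ 0).
print(sample_conditional(1, 5))
print(sample_unconditional(5))
```

The same structure appears in practice: class-conditional GANs or diffusion models take the label as an extra input, whereas unconditional models only see images.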

(Image credit: StyleGAN)

Papers

Showing 1001–1050 of 6689 papers

Title | Status | Hype
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis | Code | 1
Improving Generation and Evaluation of Visual Stories via Semantic Consistency | Code | 1
Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model | Code | 1
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors | Code | 1
From Face to Natural Image: Learning Real Degradation for Blind Image Super-Resolution | Code | 1
Improving the Speed and Quality of GAN by Adversarial Training | Code | 1
Causal Inference via Style Transfer for Out-of-distribution Generalisation | Code | 1
Continual Learning of Diffusion Models with Generative Distillation | Code | 1
A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers | Code | 1
ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation | Code | 1
Freestyle Layout-to-Image Synthesis | Code | 1
FreeGraftor: Training-Free Cross-Image Feature Grafting for Subject-Driven Text-to-Image Generation | Code | 1
Incorporating Visual Correspondence into Diffusion Model for Virtual Try-On | Code | 1
Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs | Code | 1
Bridging the Gap Between f-GANs and Wasserstein GANs | Code | 1
Frame Interpolation with Consecutive Brownian Bridge Diffusion | Code | 1
Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis | Code | 1
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets | Code | 1
Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis | Code | 1
Continuous Language Generative Flow | Code | 1
InsertDiffusion: Identity Preserving Visualization of Objects through a Training-Free Diffusion Architecture | Code | 1
FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data | Code | 1
FreCaS: Efficient Higher-Resolution Image Generation via Frequency-aware Cascaded Sampling | Code | 1
Frequency Domain Image Translation: More Photo-realistic, Better Identity-preserving | Code | 1
ForkGAN: Seeing into the Rainy Night | Code | 1
Forward-only Diffusion Probabilistic Models | Code | 1
Contextual Convolutional Neural Networks | Code | 1
Advancing Pose-Guided Image Synthesis with Progressive Conditional Diffusion Models | Code | 1
Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models | Code | 1
Continuous Speculative Decoding for Autoregressive Image Generation | Code | 1
Foreground-Background Separation through Concept Distillation from Generative Image Foundation Models | Code | 1
An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis | Code | 1
ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax | Code | 1
Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes | Code | 1
Frido: Feature Pyramid Diffusion for Complex Scene Image Synthesis | Code | 1
GANalyzer: Analysis and Manipulation of GANs Latent Space for Controllable Face Synthesis | Code | 1
Flow Contrastive Estimation of Energy-Based Models | Code | 1
BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding | Code | 1
Context-Aware Layout to Image Generation with Enhanced Object Appearance | Code | 1
Focal Frequency Loss for Image Reconstruction and Synthesis | Code | 1
FlexiFilm: Long Video Generation with Flexible Conditions | Code | 1
FlexIT: Towards Flexible Semantic Image Translation | Code | 1
FlexDiT: Dynamic Token Density Control for Diffusion Transformer | Code | 1
First Creating Backgrounds Then Rendering Texts: A New Paradigm for Visual Text Blending | Code | 1
BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models | Code | 1
FLAME Diffuser: Wildfire Image Synthesis using Mask Guided Diffusion | Code | 1
Finetuning CLIP to Reason about Pairwise Differences | Code | 1
Diversity is Definitely Needed: Improving Model-Agnostic Zero-shot Classification via Stable Diffusion | Code | 1
Content-Aware GAN Compression | Code | 1
Page 21 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | | Unverified
2 | ADM | FID | 11.84 | | Unverified
3 | BigGAN-deep | FID | 8.1 | | Unverified
4 | Polarity-BigGAN | FID | 6.82 | | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | | Unverified
6 | MaskGIT | FID | 6.18 | | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | | Unverified
8 | CDM | FID | 4.88 | | Unverified
9 | ADM-G | FID | 4.59 | | Unverified
10 | RIN | FID | 4.51 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | | Unverified
2 | RESFLOW | FID | 48.29 | | Unverified
3 | Residual Flow | FID | 46.37 | | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | | Unverified
7 | ACGAN | FID | 35.47 | | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | | Unverified
9 | NVAE w/ flow | FID | 32.53 | | Unverified
10 | QSNGAN | FID | 31.97 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | | Unverified
2 | GLIDE + CLIP | FID | 30.46 | | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | | Unverified
5 | PGMGAN | FID | 21.73 | | Unverified
6 | CLR-GAN | FID | 20.27 | | Unverified
7 | FM | FID | 14.45 | | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | | Unverified
10 | GLIDE + CLS | KID | 7.95 | | Unverified
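The leaderboards above report FID (Fréchet Inception Distance, lower is better), which fits a Gaussian to feature embeddings of real and of generated images and measures the Fréchet distance between the two fits: $\mathrm{FID} = \lVert\mu_1-\mu_2\rVert^2 + \mathrm{Tr}(C_1 + C_2 - 2(C_1 C_2)^{1/2})$. In practice the features are Inception-v3 pool activations computed over tens of thousands of images; the sketch below is a minimal NumPy version applied to synthetic feature arrays, purely to show the formula (the matrix square root is taken via eigenvalues, which are real and non-negative for products of covariance matrices up to numerical noise).

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Fréchet distance between Gaussian fits to two (n_samples, dim)
    feature arrays. Real pipelines would pass Inception-v3 activations."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    # Tr((C1 C2)^(1/2)) via eigenvalues of C1 C2; clip tiny negative
    # values caused by floating-point noise.
    eig = np.linalg.eigvals(c1 @ c2).real
    tr_sqrt = np.sqrt(np.clip(eig, 0, None)).sum()
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1) + np.trace(c2) - 2 * tr_sqrt)

rng = np.random.default_rng(0)
a = rng.normal(0, 1, (5000, 8))      # stand-in "real" features
b = rng.normal(1, 1, (5000, 8))      # stand-in "generated" features
print(fid(a, a))  # identical feature sets: FID ≈ 0
print(fid(a, b))  # means shifted by 1 in 8 dims: FID ≈ 8 (mean term dominates)
```

Note that FID values are only comparable under a fixed evaluation protocol (feature extractor, image count, resolution), which is one reason the claimed numbers above are listed as unverified.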