SOTAVerified

Image Generation

Image Generation (synthesis) is the task of generating new images that resemble the distribution of an existing dataset.

  • Unconditional generation refers to generating samples from the data distribution without any conditioning signal, i.e. sampling from $p(x)$.
  • Conditional image generation (subtask) refers to generating samples conditioned on additional information, such as a class label $y$, i.e. sampling from $p(x|y)$.
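
The distinction above can be illustrated with a toy one-dimensional model. This is only a sketch: the class means, the uniform prior, and the helper names (`sample_unconditional`, `sample_conditional`) are hypothetical choices for illustration, not part of any benchmark tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 1-D samples x drawn from two labelled Gaussian modes.
means = {0: -2.0, 1: 3.0}   # hypothetical class-conditional means
priors = {0: 0.5, 1: 0.5}   # p(y): uniform over the two labels

def sample_unconditional(n):
    """Draw x ~ p(x) = sum_y p(y) p(x|y): pick a label, then sample."""
    ys = rng.choice(list(priors), size=n, p=list(priors.values()))
    return np.array([rng.normal(means[y], 1.0) for y in ys])

def sample_conditional(y, n):
    """Draw x ~ p(x|y): sample only from the mode for label y."""
    return rng.normal(means[y], 1.0, size=n)
```

Unconditional samples mix both modes according to $p(y)$, while conditional samples stay on the mode selected by the label.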

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Papers

Showing papers 2801–2850 of 6689

Title | Status | Hype
Deconstructing Denoising Diffusion Models for Self-Supervised Learning | Code | 2
CreativeSynth: Cross-Art-Attention for Artistic Image Synthesis with Multimodal Diffusion | Code | 1
UrbanGenAI: Reconstructing Urban Landscapes using Panoptic Segmentation and Diffusion Models | — | 0
Explicitly Representing Syntax Improves Sentence-to-layout Prediction of Unexpected Situations | Code | 0
Image Synthesis with Graph Conditioning: CLIP-Guided Diffusion Models for Scene Graphs | — | 0
No Longer Trending on Artstation: Prompt Analysis of Generative AI Art | — | 0
UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion | — | 0
Faster Projected GAN: Towards Faster Few-Shot Image Generation | — | 0
CIMGEN: Controlled Image Manipulation by Finetuning Pretrained Generative Models on Limited Data | — | 0
DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | Code | 1
Codebook-enabled Generative End-to-end Semantic Communication Powered by Transformer | — | 0
Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs | Code | 5
Text-to-Image Cross-Modal Generation: A Systematic Review | — | 0
Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers | Code | 7
Large-scale Reinforcement Learning for Diffusion Models | — | 0
Diffusion Model Conditioning on Gaussian Mixture Model and Negative Gaussian Mixture Gradient | — | 0
CLIP Model for Images to Textual Prompts Based on Top-k Neighbors | — | 0
DiffusionGPT: LLM-Driven Text-to-Image Generation System | — | 0
Efficient generative adversarial networks using linear additive-attention Transformers | Code | 0
MITS-GAN: Safeguarding Medical Imaging from Tampering with Generative Adversarial Networks | Code | 1
Compose and Conquer: Diffusion-Based 3D Depth Aware Composable Image Synthesis | Code | 2
Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive | Code | 2
SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers | Code | 5
Instilling Multi-round Thinking to Text-guided Image Generation | — | 0
Fixed Point Diffusion Models | Code | 2
Efficient4D: Fast Dynamic 3D Object Generation from a Single-view Video | Code | 2
Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data | Code | 1
Revealing Vulnerabilities in Stable Diffusion via Targeted Attacks | Code | 1
Key-point Guided Deformable Image Manipulation Using Diffusion Model | — | 0
Deep Linear Array Pushbroom Image Restoration: A Degradation Pipeline and Jitter-Aware Restoration Network | Code | 1
SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation | — | 0
InstantID: Zero-shot Identity-Preserving Generation in Seconds | Code | 11
HieraFashDiff: Hierarchical Fashion Design with Multi-stage Diffusion Models | Code | 1
Generation of Synthetic Images for Pedestrian Detection Using a Sequence of GANs | — | 0
Quantum Denoising Diffusion Models | Code | 1
ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation | Code | 0
Seek for Incantations: Towards Accurate Text-to-Image Diffusion Synthesis through Prompt Engineering | — | 0
Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications | Code | 4
Scissorhands: Scrub Data Influence via Connection Sensitivity in Networks | Code | 0
Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation | — | 0
Frequency-Time Diffusion with Neural Cellular Automata | — | 0
AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks | — | 0
Erasing Undesirable Influence in Diffusion Models | Code | 1
PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models | Code | 7
Score Distillation Sampling with Learned Manifold Corrective | — | 0
Let's Go Shopping (LGS) -- Web-Scale Image-Text Dataset for Visual Concept Understanding | — | 0
Vision Reimagined: AI-Powered Breakthroughs in WiFi Indoor Imaging | — | 0
Content-Conditioned Generation of Stylized Free hand Sketches | — | 0
EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models | Code | 1
EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models | — | 0
Page 57 of 134

Benchmark Results
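
The leaderboards below rank models by FID (Fréchet Inception Distance), which compares the Gaussian statistics (mean and covariance) of Inception features extracted from real and generated images; lower is better. A minimal sketch of the underlying formula, assuming feature vectors have already been extracted (the `fid` helper name is illustrative, not part of any leaderboard tooling):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Fréchet distance between Gaussians fit to two feature sets:
    FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 * sqrt(C_r C_g))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts
        covmean = covmean.real        # introduced by numerical error
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1 + c2 - 2.0 * covmean))
```

Identical feature sets give an FID of zero; shifting every feature by a constant shifts only the mean term, which is one way to sanity-check an implementation.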

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | — | Unverified
2 | ADM | FID | 11.84 | — | Unverified
3 | BigGAN-deep | FID | 8.1 | — | Unverified
4 | Polarity-BigGAN | FID | 6.82 | — | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | — | Unverified
6 | MaskGIT | FID | 6.18 | — | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | — | Unverified
8 | CDM | FID | 4.88 | — | Unverified
9 | ADM-G | FID | 4.59 | — | Unverified
10 | RIN | FID | 4.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | — | Unverified
2 | RESFLOW | FID | 48.29 | — | Unverified
3 | Residual Flow | FID | 46.37 | — | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | — | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | — | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | — | Unverified
7 | ACGAN | FID | 35.47 | — | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | — | Unverified
9 | NVAE w/ flow | FID | 32.53 | — | Unverified
10 | QSNGAN | FID | 31.97 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | — | Unverified
2 | GLIDE + CLIP | FID | 30.46 | — | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | — | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | — | Unverified
5 | PGMGAN | FID | 21.73 | — | Unverified
6 | CLR-GAN | FID | 20.27 | — | Unverified
7 | FM | FID | 14.45 | — | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | — | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | — | Unverified
10 | GLIDE + CLS | KID | 7.95 | — | Unverified