SOTAVerified

Image Generation

Image generation (synthesis) is the task of creating new images that follow the distribution of an existing dataset.

  • Unconditional generation refers to sampling images directly from the learned data distribution, i.e. $p(x)$.
  • Conditional image generation (subtask) refers to sampling images conditioned on side information such as a class label $y$, i.e. $p(x|y)$.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation, and other types of image generation, refer to the subtasks.
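As a toy illustration of the two settings (not tied to any model on this page), consider fitting one Gaussian per class to a labeled dataset: unconditional generation draws a label from its prior and then samples $x$, which marginalizes to $p(x)$, while conditional generation samples $x$ for a fixed label $y$, i.e. $p(x|y)$. A minimal NumPy sketch, with all names and numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled "dataset": per-class 1-D Gaussians standing in for images.
# (Hypothetical parameters, chosen only to make the two classes separable.)
class_means = {0: -5.0, 1: 5.0}   # p(x|y) is N(mean_y, 1)
class_prior = {0: 0.5, 1: 0.5}    # p(y)

def sample_conditional(y, n):
    """Sample from p(x|y) for a fixed label y."""
    return rng.normal(class_means[y], 1.0, size=n)

def sample_unconditional(n):
    """Sample from p(x) = sum_y p(y) p(x|y): first draw y, then x."""
    labels = rng.choice(list(class_prior), size=n,
                        p=list(class_prior.values()))
    return np.array([rng.normal(class_means[y], 1.0) for y in labels])

cond = sample_conditional(1, 1000)    # all samples cluster near +5
uncond = sample_unconditional(1000)   # a mix of both modes
```

The same ancestral-sampling structure (draw the condition, then the sample) is what conditional generative models implement with a learned network in place of the per-class Gaussian.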

(Image credit: StyleGAN)

Papers

Showing 1651–1700 of 6689 papers

Title | Status | Hype
Advancing Video Quality Assessment for AIGC | – | 0
DepthART: Monocular Depth Estimation as Autoregressive Refinement Task | – | 0
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | – | 0
VLEU: a Method for Automatic Evaluation for Generalizability of Text-to-Image Models | Code | 1
EDGE-Rec: Efficient and Data-Guided Edge Diffusion For Recommender Systems Graphs | – | 0
DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation | – | 0
Implicit Dynamical Flow Fusion (IDFF) for Generative Modeling | Code | 0
LatentQGAN: A Hybrid QGAN with Classical Convolutional Autoencoder | – | 0
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation | Code | 0
Recovering Global Data Distribution Locally in Federated Learning | – | 0
BrainDreamer: Reasoning-Coherent and Controllable Image Generation from EEG Brain Signals via Language Guidance | – | 0
Imagine yourself: Tuning-Free Personalized Image Generation | – | 0
Efficient Visualization of Neural Networks with Generative Models and Adversarial Perturbations | – | 0
HSIGene: A Foundation Model For Hyperspectral Image Generation | Code | 2
StoryMaker: Towards Holistic Consistent Characters in Text-to-image Generation | Code | 4
Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings | – | 0
Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1
Recommendation with Generative Models | – | 0
ChefFusion: Multimodal Foundation Model Integrating Recipe and Food Image Generation | Code | 0
RaggeDi: Diffusion-based State Estimation of Disordered Rags, Sheets, Towels and Blankets | – | 0
Tracking Any Point with Frame-Event Fusion Network at High Frame Rate | Code | 0
GUNet: A Graph Convolutional Network United Diffusion Model for Stable and Diversity Pose Generation | – | 0
Finding the Subjective Truth: Collecting 2 Million Votes for Comprehensive Gen-AI Model Evaluation | – | 0
Agglomerative Token Clustering | – | 0
Guess What I Think: Streamlined EEG-to-Image Generation with Latent Diffusion Models | Code | 2
Using Physics Informed Generative Adversarial Networks to Model 3D porous media | – | 0
Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think | Code | 4
MM2Latent: Text-to-facial image generation and editing in GANs with multimodal assistance | Code | 1
OmniGen: Unified Image Generation | Code | 7
Improving the Efficiency of Visually Augmented Language Models | Code | 0
2S-ODIS: Two-Stage Omni-Directional Image Synthesis by Geometric Distortion Correction | Code | 0
On Synthetic Texture Datasets: Challenges, Creation, and Curation | – | 0
Robust image representations with counterfactual contrastive learning | Code | 1
SimInversion: A Simple Framework for Inversion-Based Text-to-Image Editing | – | 0
VAE-QWGAN: Addressing Mode Collapse in Quantum GANs via Autoencoding Priors | – | 0
Cross-modality image synthesis from TOF-MRA to CTA using diffusion-based models | – | 0
MotionCom: Automatic and Motion-Aware Image Composition with LLM and Video Diffusion Prior | Code | 0
One-Shot Learning for Pose-Guided Person Image Synthesis in the Wild | Code | 1
E-Commerce Inpainting with Mask Guidance in Controlnet for Reducing Overcompletion | – | 0
Generalizing Alignment Paradigm of Text-to-Image Generation with Preferences through f-divergence Minimization | – | 0
GRIN: Zero-Shot Metric Depth with Pixel-Level Diffusion | – | 0
Finetuning CLIP to Reason about Pairwise Differences | Code | 1
Beta-Sigma VAE: Separating beta and decoder variance in Gaussian variational autoencoder | Code | 0
Enhancing Privacy in ControlNet and Stable Diffusion via Split Learning | – | 0
GroundingBooth: Grounding Text-to-Image Customization | – | 0
InstantDrag: Improving Interactivity in Drag-based Image Editing | – | 0
High-Frequency Anti-DreamBooth: Robust Defense against Personalized Image Synthesis | Code | 0
Scribble-Guided Diffusion for Training-free Text-to-Image Generation | Code | 1
Click2Mask: Local Editing with Dynamic Mask Generation | Code | 1
Improving Virtual Try-On with Garment-focused Diffusion Models | Code | 1
Page 34 of 134

Benchmark Results
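The leaderboards below rank models by Fréchet Inception Distance (FID; lower is better), with one entry using Kernel Inception Distance (KID). FID compares Gaussian fits to Inception-network features of real and generated images: $\mathrm{FID} = \lVert\mu_1-\mu_2\rVert^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$. A minimal sketch of that formula on precomputed feature statistics (the Inception feature extraction itself is omitted, and the function name is our own):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians fitted to feature activations.

    mu*: mean vectors and sigma*: covariance matrices of Inception
    features for real vs. generated images (assumed precomputed).
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product; sqrtm can pick up a
    # tiny imaginary component from numerical error, so keep the real part.
    covmean = sqrtm(sigma1 @ sigma2).real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

With identical statistics the distance is 0; shifting one mean by a vector $d$ with equal covariances adds exactly $\lVert d\rVert^2$, which is a quick sanity check for any implementation.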

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | – | Unverified
2 | ADM | FID | 11.84 | – | Unverified
3 | BigGAN-deep | FID | 8.1 | – | Unverified
4 | Polarity-BigGAN | FID | 6.82 | – | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | – | Unverified
6 | MaskGIT | FID | 6.18 | – | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | – | Unverified
8 | CDM | FID | 4.88 | – | Unverified
9 | ADM-G | FID | 4.59 | – | Unverified
10 | RIN | FID | 4.51 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | – | Unverified
2 | RESFLOW | FID | 48.29 | – | Unverified
3 | Residual Flow | FID | 46.37 | – | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | – | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | – | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | – | Unverified
7 | ACGAN | FID | 35.47 | – | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | – | Unverified
9 | NVAE w/ flow | FID | 32.53 | – | Unverified
10 | QSNGAN | FID | 31.97 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | – | Unverified
2 | GLIDE + CLIP | FID | 30.46 | – | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | – | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | – | Unverified
5 | PGMGAN | FID | 21.73 | – | Unverified
6 | CLR-GAN | FID | 20.27 | – | Unverified
7 | FM | FID | 14.45 | – | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | – | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | – | Unverified
10 | GLIDE + CLS | KID | 7.95 | – | Unverified