SOTAVerified

Image Generation

Image generation (synthesis) is the task of producing new images that resemble samples from a target data distribution, typically one defined by a training dataset.

  • Unconditional generation refers to sampling images from the learned data distribution without any side information, i.e. from $p(x)$.
  • Conditional image generation (a subtask) refers to sampling images conditioned on additional information such as a class label $y$, i.e. from $p(x|y)$.
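The distinction between the two settings can be made concrete with a toy example. The sketch below is illustrative only (a two-class 1-D Gaussian mixture standing in for an image distribution; all names are hypothetical): unconditional generation draws from the marginal $p(x) = \sum_y p(y)\,p(x|y)$, while conditional generation fixes the label $y$ and draws from $p(x|y)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset" distribution: a two-class Gaussian mixture over 1-D "images".
CLASS_MEANS = {0: -2.0, 1: 2.0}   # p(x | y) = N(CLASS_MEANS[y], 1)
CLASS_PRIOR = {0: 0.5, 1: 0.5}    # p(y)

def sample_conditional(y, n=1):
    """Conditional generation: draw n samples from p(x | y) for a fixed label y."""
    return rng.normal(CLASS_MEANS[y], 1.0, size=n)

def sample_unconditional(n=1):
    """Unconditional generation: draw n samples from p(x) = sum_y p(y) p(x | y)."""
    labels = rng.choice(list(CLASS_PRIOR), p=list(CLASS_PRIOR.values()), size=n)
    return np.array([sample_conditional(y, 1)[0] for y in labels])
```

A real generative model replaces the closed-form mixture with a learned sampler, but the interface is the same: a conditional model takes a label (or text prompt) as input, an unconditional one takes none.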

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Papers

Showing 4101–4150 of 6689 papers

| Title | Status | Hype |
|---|---|---|
| Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems | | 0 |
| How to Construct Energy for Images? Denoising Autoencoder Can Be Energy Based Model | | 0 |
| Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes | Code | 2 |
| Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation | Code | 1 |
| Diffusion Models Generate Images Like Painters: an Analytical Theory of Outline First, Details Later | | 0 |
| Bi-parametric prostate MR image synthesis using pathology and sequence-conditioned stable diffusion | | 0 |
| Dense Pixel-to-Pixel Harmonization via Continuous Image Representation | Code | 1 |
| A Complete Recipe for Diffusion Generative Models | Code | 1 |
| ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax | Code | 1 |
| Interactive Text Generation | | 0 |
| Consistency Models | Code | 5 |
| X&Fuse: Fusing Visual Information in Text-to-Image Generation | | 0 |
| Continuous-Time Functional Diffusion Processes | Code | 1 |
| Unlimited-Size Diffusion Restoration | Code | 3 |
| Single Image Backdoor Inversion via Robust Smoothed Classifiers | Code | 1 |
| Collage Diffusion | | 0 |
| StraIT: Non-autoregressive Generation with Stratified Image Transformer | | 0 |
| Dissolving Is Amplifying: Towards Fine-Grained Anomaly Detection | Code | 1 |
| DEff-GAN: Diverse Attribute Transfer for Few-Shot Image Synthesis | Code | 0 |
| Semantically Consistent Person Image Generation | | 0 |
| Can We Use Diffusion Probabilistic Models for 3D Motion Prediction? | | 0 |
| Monocular Depth Estimation using Diffusion Models | | 0 |
| ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation | Code | 2 |
| Differentially Private Diffusion Models Generate Useful Synthetic Images | | 0 |
| BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding | Code | 1 |
| Modulating Pretrained Diffusion Models for Multimodal Image Synthesis | | 0 |
| "An Adapt-or-Die Type of Situation": Perception, Adoption, and Use of Text-To-Image-Generation AI by Game Industry Professionals | | 0 |
| Text Semantics to Image Generation: A method of building facades design base on Stable Diffusion model | | 0 |
| Improved Training of Mixture-of-Experts Language GANs | | 0 |
| Controlled and Conditional Text to Image Generation with Diffusion Prior | | 0 |
| ArtiFact: A Large-Scale Dataset with Artificial and Factual Images for Generalizable and Robust Synthetic Image Detection | Code | 1 |
| Teaching CLIP to Count to Ten | Code | 1 |
| Aligning Text-to-Image Models using Human Feedback | | 0 |
| Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC | Code | 1 |
| Gradient Adjusting Networks for Domain Inversion | Code | 0 |
| TherapyView: Visualizing Therapy Sessions with Temporal Topic Modeling and AI-Generated Arts | | 0 |
| Unpaired Translation from Semantic Label Maps to Images by Leveraging Domain-Specific Simulations | | 0 |
| Prompt Stealing Attacks Against Text-to-Image Generation Models | Code | 1 |
| Simple U-net Based Synthetic Polyp Image Generation: Polyp to Negative and Negative to Polyp | | 0 |
| Affect-Conditioned Image Generation | Code | 0 |
| Composer: Creative and Controllable Image Synthesis with Composable Conditions | Code | 3 |
| RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions | | 0 |
| Redes Generativas Adversarias (GAN) Fundamentos Teóricos y Aplicaciones | | 0 |
| Combining Generative Artificial Intelligence (AI) and the Internet: Heading towards Evolution or Degradation? | | 0 |
| Transformer-based Generative Adversarial Networks in Computer Vision: A Comprehensive Survey | | 0 |
| Fine-grained Cross-modal Fusion based Refinement for Text-to-Image Synthesis | Code | 0 |
| Paint it Black: Generating paintings from text descriptions | | 0 |
| Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales | | 0 |
| LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation | | 0 |
| TcGAN: Semantic-Aware and Structure-Preserved GANs with Individual Vision Transformer for Fast Arbitrary One-Shot Image Generation | | 0 |
Page 83 of 134

Benchmark Results
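
The tables below rank models by FID (Fréchet Inception Distance; one entry uses KID), where lower is better. FID compares real and generated images by fitting a Gaussian to the Inception feature statistics of each set and computing the Fréchet distance between them. As a minimal sketch (not the site's verification code), given the feature means and covariances of the two sets:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In practice the means and covariances are estimated from Inception-v3 pool features of tens of thousands of images, so reported FID values also depend on the sample count and feature extractor used.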

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Improved DDPM | FID | 12.3 | | Unverified |
| 2 | ADM | FID | 11.84 | | Unverified |
| 3 | BigGAN-deep | FID | 8.1 | | Unverified |
| 4 | Polarity-BigGAN | FID | 6.82 | | Unverified |
| 5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | | Unverified |
| 6 | MaskGIT | FID | 6.18 | | Unverified |
| 7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | | Unverified |
| 8 | CDM | FID | 4.88 | | Unverified |
| 9 | ADM-G | FID | 4.59 | | Unverified |
| 10 | RIN | FID | 4.51 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PresGAN | FID | 52.2 | | Unverified |
| 2 | RESFLOW | FID | 48.29 | | Unverified |
| 3 | Residual Flow | FID | 46.37 | | Unverified |
| 4 | GLF + perceptual loss (ours) | FID | 44.6 | | Unverified |
| 5 | ProdPoly no activation functions | FID | 40.45 | | Unverified |
| 6 | ProdPoly no activation functions | FID | 36.77 | | Unverified |
| 7 | ACGAN | FID | 35.47 | | Unverified |
| 8 | DenseFlow-74-10 | FID | 34.9 | | Unverified |
| 9 | NVAE w/ flow | FID | 32.53 | | Unverified |
| 10 | QSNGAN | FID | 31.97 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GLIDE + CLS | FID | 30.87 | | Unverified |
| 2 | GLIDE + CLIP | FID | 30.46 | | Unverified |
| 3 | GLIDE + CLS-FREE | FID | 29.22 | | Unverified |
| 4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | | Unverified |
| 5 | PGMGAN | FID | 21.73 | | Unverified |
| 6 | CLR-GAN | FID | 20.27 | | Unverified |
| 7 | FM | FID | 14.45 | | Unverified |
| 8 | CT (Direct Generation, NFE=1) | FID | 13 | | Unverified |
| 9 | CT (Direct Generation, NFE=2) | FID | 11.1 | | Unverified |
| 10 | GLIDE + CLS | KID | 7.95 | | Unverified |