SOTAVerified

Image Generation

Image generation (synthesis) is the task of generating new images that follow the distribution of an existing dataset.

  • Unconditional generation samples from the learned data distribution without any conditioning signal, i.e. $p(y)$.
  • Conditional image generation (a subtask) samples conditioned on additional information such as a class label, i.e. $p(y|x)$.
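
The distinction can be illustrated with a toy two-class "dataset" (a hypothetical Gaussian mixture, not any model from this page): unconditional sampling first draws a class at random and then samples from it, while conditional sampling fixes the class label up front.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": two classes with different intensity distributions.
CLASS_MEANS = {0: -2.0, 1: 2.0}

def sample_unconditional(n):
    """Draw from p(y): pick a class at random, then sample from that class."""
    labels = rng.integers(0, 2, size=n)
    return rng.normal(loc=[CLASS_MEANS[c] for c in labels], scale=1.0)

def sample_conditional(n, label):
    """Draw from p(y|x): the class label x is fixed by the caller."""
    return rng.normal(loc=CLASS_MEANS[label], scale=1.0, size=n)
```

Conditional samples cluster around the chosen class mean, while unconditional samples cover the whole mixture; real generators (GANs, diffusion models) make the same distinction, just with images instead of scalars.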

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Papers

Showing 4001–4050 of 6689 papers

Title | Code | Hype
WordStylist: Styled Verbatim Handwritten Text Generation with Latent Diffusion Models | Code | 1
SC-VAE: Sparse Coding-based Variational Autoencoder with Learned ISTA | Code | 0
MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path | Code | 1
SynthRAD2023 Grand Challenge dataset: generating synthetic CT for radiotherapy | Code | 1
DDMM-Synth: A Denoising Diffusion Model for Cross-modal Medical Image Synthesis with Sparse-view Measurement Embedding | — | 0
Your Diffusion Model is Secretly a Zero-Shot Classifier | Code | 2
Fully Hyperbolic Convolutional Neural Networks for Computer Vision | Code | 1
Variational Distribution Learning for Unsupervised Text-to-Image Generation | — | 0
Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder | Code | 0
Memory-Efficient 3D Denoising Diffusion Models for Medical Image Processing | Code | 1
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis | Code | 2
Object-Centric Relational Representations for Image Generation | Code | 0
Learning Versatile 3D Shape Generation with Improved AR Models | — | 0
Joint fMRI Decoding and Encoding with Latent Embedding Alignment | — | 0
Learning Generative Models with Goal-conditioned Reinforcement Learning | — | 0
BlobGAN-3D: A Spatially-Disentangled 3D-Aware Generative Model for Indoor Scenes | — | 0
MDTv2: Masked Diffusion Transformer is a Strong Image Synthesizer | Code | 2
Spatial Latent Representations in Generative Adversarial Networks for Image Generation | — | 0
Freestyle Layout-to-Image Synthesis | Code | 1
Indonesian Text-to-Image Synthesis with Sentence-BERT and FastGAN | Code | 0
Causal Image Synthesis of Brain MR in 3D | — | 0
Efficient Scale-Invariant Generator with Column-Row Entangled Pixel Synthesis | Code | 1
UrbanGIRAFFE: Representing Urban Scenes as Compositional Generative Neural Feature Fields | Code | 1
CoLa-Diff: Conditional Latent Diffusion Model for Multi-Modal MRI Synthesis | Code | 1
Factor Decomposed Generative Adversarial Networks for Text-to-Image Synthesis | — | 0
High Fidelity Image Synthesis With Deep VAEs In Latent Space | Code | 1
End-to-End Diffusion Latent Optimization Improves Classifier Guidance | Code | 1
Medical diffusion on a budget: Textual Inversion for medical image generation | Code | 1
CoBIT: A Contrastive Bi-directional Image-Text Generation Model | — | 0
Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators | Code | 4
PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360° | Code | 3
Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes | Code | 1
Explore the Power of Synthetic Data on Few-shot Object Detection | — | 0
MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models | — | 0
VecFontSDF: Learning to Reconstruct and Synthesize High-quality Vector Fonts via Signed Distance Functions | Code | 0
NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions | Code | 1
Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis | Code | 1
MAGVLT: Masked Generative Vision-and-Language Transformer | Code | 1
Affordance Diffusion: Synthesizing Hand-Object Interactions | — | 0
TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering | Code | 1
CoopInit: Initializing Generative Adversarial Networks via Cooperative Learning | — | 0
DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models | Code | 0
Object-Centric Slot Diffusion | Code | 1
Polynomial Implicit Neural Representations For Large Diverse Datasets | Code | 1
NASDM: Nuclei-Aware Semantic Histopathology Image Generation Using Diffusion Models | — | 0
Discovering Interpretable Directions in the Semantic Latent Space of Diffusion Models | Code | 1
SVDiff: Compact Parameter Space for Diffusion Fine-Tuning | Code | 2
Localizing Object-level Shape Variations with Text-to-Image Diffusion Models | Code | 1
Picture that Sketch: Photorealistic Image Generation from Abstract Sketches | — | 0
Less is More: Unsupervised Mask-guided Annotated CT Image Synthesis with Minimum Manual Segmentations | — | 0
Page 81 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Improved DDPM | FID | 12.3 | — | Unverified
2 | ADM | FID | 11.84 | — | Unverified
3 | BigGAN-deep | FID | 8.1 | — | Unverified
4 | Polarity-BigGAN | FID | 6.82 | — | Unverified
5 | VQGAN+Transformer (k=mixed, p=1.0, a=0.005) | FID | 6.59 | — | Unverified
6 | MaskGIT | FID | 6.18 | — | Unverified
7 | VQGAN+Transformer (k=600, p=1.0, a=0.05) | FID | 5.2 | — | Unverified
8 | CDM | FID | 4.88 | — | Unverified
9 | ADM-G | FID | 4.59 | — | Unverified
10 | RIN | FID | 4.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PresGAN | FID | 52.2 | — | Unverified
2 | RESFLOW | FID | 48.29 | — | Unverified
3 | Residual Flow | FID | 46.37 | — | Unverified
4 | GLF+perceptual loss (ours) | FID | 44.6 | — | Unverified
5 | ProdPoly no activation functions | FID | 40.45 | — | Unverified
6 | ProdPoly no activation functions | FID | 36.77 | — | Unverified
7 | ACGAN | FID | 35.47 | — | Unverified
8 | DenseFlow-74-10 | FID | 34.9 | — | Unverified
9 | NVAE w/ flow | FID | 32.53 | — | Unverified
10 | QSNGAN | FID | 31.97 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GLIDE + CLS | FID | 30.87 | — | Unverified
2 | GLIDE + CLIP | FID | 30.46 | — | Unverified
3 | GLIDE + CLS-FREE | FID | 29.22 | — | Unverified
4 | GLIDE + CLIP + CLS + CLS-FREE | FID | 29.18 | — | Unverified
5 | PGMGAN | FID | 21.73 | — | Unverified
6 | CLR-GAN | FID | 20.27 | — | Unverified
7 | FM | FID | 14.45 | — | Unverified
8 | CT (Direct Generation, NFE=1) | FID | 13 | — | Unverified
9 | CT (Direct Generation, NFE=2) | FID | 11.1 | — | Unverified
10 | GLIDE + CLS | KID | 7.95 | — | Unverified
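
Nearly every entry above reports FID (Fréchet Inception Distance), which fits a Gaussian to feature activations of real and generated images and measures the Fréchet distance $\|\mu_a - \mu_b\|^2 + \mathrm{Tr}(C_a + C_b - 2(C_a C_b)^{1/2})$ between them (lower is better). A minimal sketch of that computation, assuming feature vectors (e.g. Inception-v3 activations) have already been extracted; `fid` is an illustrative helper name, not an API from any of the listed papers:

```python
import numpy as np
from scipy import linalg

def fid(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fit to two feature sets.

    feat_a, feat_b: arrays of shape (n_samples, n_features), e.g.
    Inception activations for real and generated images.
    """
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # Matrix square root of the covariance product.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary numerical noise
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give an FID of (numerically) zero, and the score grows as the two feature distributions drift apart in mean or covariance, which is why the rankings above treat lower FID as better. KID (the last entry) is a related kernel-based metric computed without the Gaussian assumption.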