SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of the information it contains, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with n-gram metrics such as BLEU or CIDEr.
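As an illustration of the n-gram metrics mentioned above, the following is a minimal pure-Python sketch of a sentence-level BLEU score (single reference, no smoothing; official evaluations use corpus-level BLEU, which differs in detail). The example sentences are invented for demonstration.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty.
    Single reference, no smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

cand = "a man rides a horse on the beach".split()
ref = "a man is riding a horse on the beach".split()
score = bleu(cand, ref)
```

A perfect match scores 1.0; any missing 4-gram overlap drives the unsmoothed score toward 0, which is why corpus-level aggregation or smoothing is used in practice.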

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 1151–1200 of 1878 papers (page 24 of 38)

Title | Status | Hype
Macroscopic Control of Text Generation for Image Captioning | — | 0
Diagnostic Captioning: A Survey | — | 0
Dual-Level Collaborative Transformer for Image Captioning | Code | 1
Self-Distillation for Few-Shot Image Captioning | Code | 1
VinVL: Revisiting Visual Representations in Vision-Language Models | Code | 2
Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning | — | 0
Partial Off-Policy Learning: Balance Accuracy and Diversity for Human-Oriented Image Captioning | — | 0
Discovering Autoregressive Orderings with Variational Inference | Code | 1
CANVASEMB: Learning Layout Representation with Large-scale Pre-training for Graphic Design | — | 0
UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning | Code | 0
Text-Free Image-to-Speech Synthesis Using Learned Segmental Units | Code | 1
Detecting Hate Speech in Multi-modal Memes | Code | 1
Neural Text Generation with Artificial Negative Examples | — | 0
SubICap: Towards Subword-informed Image Captioning | — | 0
WEmbSim: A Simple yet Effective Metric for Image Captioning | — | 0
Image to Bengali Caption Generation Using Deep CNN and Bidirectional Gated Recurrent Unit | — | 0
Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge | Code | 0
Alleviating Noisy Data in Image Captioning with Cooperative Distillation | — | 0
Efficient CNN-LSTM based Image Captioning using Neural Network Compression | Code | 0
AutoCaption: Image Captioning with Neural Architecture Search | — | 0
Robots Understanding Contextual Information in Human-Centered Environments using Weakly Supervised Mask Data Distillation | — | 0
Intrinsic Image Captioning Evaluation | — | 0
Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network | Code | 1
Image Captioning with Context-Aware Auxiliary Guidance | — | 0
Towards Annotation-Free Evaluation of Cross-Lingual Image Captioning | — | 0
Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps | — | 0
Confidence-aware Non-repetitive Multimodal Transformers for TextCaps | Code | 1
Robust Image Captioning | — | 0
Understanding Guided Image Captioning Performance across Domains | Code | 0
A Framework and Dataset for Abstract Art Generation via CalligraphyGAN | — | 0
ODIANLP’s Participation in WAT2020 | — | 0
From “Before” to “After”: Generating Natural Language Instructions from Image Pairs in a Simple Visual Domain | — | 0
Bridge the Gap: High-level Semantic Planning for Image Captioning | — | 0
Geo-Aware Image Caption Generation | — | 0
Image Caption Generation for News Articles | Code | 0
Prophet Attention: Predicting Attention with Future Attention | — | 0
Language-Driven Region Pointer Advancement for Controllable Image Captioning | Code | 0
Multimodal Learning for Hateful Memes Detection | Code | 0
SuperOCR: A Conversion from Optical Character Recognition to Image Captioning | — | 0
Structural and Functional Decomposition for Personality Image Captioning in a Communication Game | — | 0
CapWAP: Captioning with a Purpose | — | 0
Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze | Code | 0
The ApposCorpus: A new multilingual, multi-domain dataset for factual appositive generation | — | 0
Attention Beam: An Image Captioning Approach | — | 0
Dual Attention on Pyramid Feature Maps for Image Captioning | — | 0
Boost Image Captioning with Knowledge Reasoning | — | 0
Diverse Image Captioning with Context-Object Split Latent Spaces | Code | 1
ViLBERTScore: Evaluating Image Caption Using Vision-and-Language BERT | Code | 1
CapWAP: Image Captioning with a Purpose | — | 0
Fusion Models for Improved Visual Captioning | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified
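Most of the benchmark entries above report CIDEr, which scores a caption by the TF-IDF-weighted n-gram similarity between candidate and references. The following is a simplified sketch of the basic CIDEr idea; the official CIDEr-D implementation additionally clips counts and applies a length penalty, so scores will differ. The toy corpus and sentences are invented for demonstration.

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    """Counter of contiguous n-grams."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def tfidf_vec(tokens, n, doc_freq, num_docs):
    """TF-IDF vector over n-grams; IDF is taken from a reference corpus."""
    counts = ngram_counts(tokens, n)
    total = max(sum(counts.values()), 1)
    return {g: (c / total) * math.log(num_docs / max(doc_freq.get(g, 0), 1))
            for g, c in counts.items()}

def cosine(u, v):
    dot = sum(u[g] * v.get(g, 0.0) for g in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cider(candidate, references, corpus, max_n=4):
    """Simplified CIDEr: for n = 1..4, cosine similarity between TF-IDF
    n-gram vectors of candidate and each reference, averaged, x10."""
    num_docs = len(corpus)
    score = 0.0
    for n in range(1, max_n + 1):
        doc_freq = Counter(g for doc in corpus for g in ngram_counts(doc, n))
        cv = tfidf_vec(candidate, n, doc_freq, num_docs)
        score += sum(cosine(cv, tfidf_vec(r, n, doc_freq, num_docs))
                     for r in references) / len(references)
    return 10.0 * score / max_n  # conventional x10 scaling

ref = "a dog runs on grass".split()
corpus = [ref,
          "a cat sits on a mat".split(),
          "two people ride bikes down a road".split()]
perfect = cider(ref, [ref], corpus)                       # exact match
partial = cider("a cat runs on grass".split(), [ref], corpus)
```

The IDF weighting is what distinguishes CIDEr from plain n-gram overlap: n-grams common across the whole corpus (such as "a") contribute little, while distinctive content words dominate the score.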