SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with BLEU or CIDEr metrics.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
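The BLEU metric mentioned above scores a candidate caption by its clipped n-gram precision against one or more reference captions. Below is a minimal sentence-level sketch (illustrative only; real evaluations use corpus-level BLEU with smoothing, and CIDEr additionally applies TF-IDF weighting to the n-grams):

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Simplified sentence-level BLEU (no smoothing)."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngram_counts(cand, n)
        if not cand_counts:
            return 0.0
        # Clip: each candidate n-gram is credited at most as many times
        # as it occurs in the best-matching reference.
        max_ref = Counter()
        for ref in refs:
            for gram, count in ngram_counts(ref, n).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0  # unsmoothed: one empty precision zeroes the score
        log_precisions.append(math.log(clipped / sum(cand_counts.values())))
    # Brevity penalty against the closest-length reference.
    closest = min(refs, key=lambda r: abs(len(r) - len(cand)))
    bp = 1.0 if len(cand) > len(closest) else math.exp(1 - len(closest) / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)
```

For example, a candidate identical to its reference scores 1.0, while a caption sharing no words with any reference scores 0.0; the lack of smoothing means any caption missing all 4-grams also scores 0.0, which is why production toolkits smooth the precisions.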

Papers

Showing 1501–1550 of 1878 papers

Title | Status | Hype
Attend More Times for Image Captioning | | 0
An Attempt towards Interpretable Audio-Visual Video Captioning | | 0
Auto-Encoding Scene Graphs for Image Captioning | Code | 0
Learning to Caption Images through a Lifetime by Asking Questions | Code | 0
Towards Task Understanding in Visual Settings | | 0
Unsupervised Image Captioning | Code | 0
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions | Code | 0
A Novel Technique for Evidence based Conditional Inference in Deep Neural Networks via Latent Feature Perturbation | | 0
Senti-Attend: Image Captioning using Sentiment and Attention | | 0
An Interpretable Model for Scene Graph Generation | | 0
Intention Oriented Image Captions with Guiding Objects | | 0
Image Captioning Based on a Hierarchical Attention Mechanism and Policy Gradient Optimization | | 0
AttS2S-VC: Sequence-to-Sequence Voice Conversion with Attention and Context Preservation Mechanisms | | 0
Generating Description for Sequential Images with Local-Object Attention Conditioned on Global Semantic Context | | 0
Decoding Strategies for Neural Referring Expression Generation | | 0
Treat the system like a human student: Automatic naturalness evaluation of generated text without reference texts | | 0
The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description | | 0
Importance of Self-Attention for Sentiment Analysis | | 0
End-to-end Image Captioning Exploits Distributional Similarity in Multimodal Space | Code | 0
A sequential guiding network with attention for image captioning | | 0
Gated Hierarchical Attention for Image Captioning | Code | 1
Engaging Image Captioning Via Personality | | 0
Area Attention | Code | 0
A Neural Compositional Paradigm for Image Captioning | Code | 0
Look Deeper See Richer: Depth-aware Image Paragraph Captioning | | 0
UMONS Submission for WMT18 Multimodal Translation Task | Code | 0
Bringing back simplicity and lightliness into neural image captioning | | 0
Quantifying the amount of visual information used by neural caption generators | Code | 0
Image Captioning as Neural Machine Translation Task in SOCKEYE | | 0
A Comprehensive Survey of Deep Learning for Image Captioning | Code | 0
Image-to-Video Person Re-Identification by Reusing Cross-modal Embeddings | | 0
Input Combination Strategies for Multi-Source Transformer Decoder | | 0
EmojiGAN: learning emojis distributions with a generative model | | 0
Surprisingly Easy Hard-Attention for Sequence to Sequence Learning | Code | 0
Disambiguated skip-gram model | | 0
CaLcs: Continuously Approximating Longest Common Subsequence for Sequence Level Optimization | | 0
Training for Diversity in Image Paragraph Captioning | Code | 0
Multimodal Differential Network for Visual Question Generation | | 0
Grounding Semantic Roles in Images | | 0
GraphSeq2Seq: Graph-Sequence-to-Sequence for Neural Machine Translation | | 0
Differentiable Expected BLEU for Text Generation | | 0
Semantically Invariant Text-to-Image Generation | | 0
Vector Learning for Cross Domain Representations | | 0
Batch-normalized Recurrent Highway Networks | Code | 0
Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for Language Generation | Code | 0
A Neural Compositional Paradigm for Image Captioning | | 0
Textually Enriched Neural Module Networks for Visual Question Answering | | 0
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure | | 0
Exploring Visual Relationship for Image Captioning | | 0
Improving Reinforcement Learning Based Image Captioning with Natural Language Prior | Code | 0
Page 31 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | | Unverified
3 | feixiang | CIDEr | 77.31 | | Unverified
4 | wocao | CIDEr | 77.21 | | Unverified
5 | lamiwab172 | CIDEr | 75.93 | | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | | Unverified
7 | funas | CIDEr | 73.51 | | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | | Unverified
9 | sparta | CIDEr | 73.41 | | Unverified
10 | x-viz | CIDEr | 73.26 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | | Unverified
2 | VAST | CIDEr | 149 | | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | | Unverified
6 | mPLUG | BLEU-4 | 46.5 | | Unverified
7 | OFA | BLEU-4 | 44.9 | | Unverified
8 | GIT | BLEU-4 | 44.1 | | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | | Unverified
4 | PaLI | CIDEr | 121.09 | | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | | Unverified
7 | Single Model | CIDEr | 108.98 | | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | | Unverified
9 | FudanFVL | CIDEr | 104.9 | | Unverified
10 | FudanWYZ | CIDEr | 104.25 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | | Unverified
2 | PaLI | CIDEr | 124.35 | | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | | Unverified
6 | Single Model | CIDEr | 110.76 | | Unverified
7 | FudanFVL | CIDEr | 109.33 | | Unverified
8 | FudanWYZ | CIDEr | 108.04 | | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | | Unverified
10 | firethehole | CIDEr | 99.51 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | | Unverified
6 | Single Model | CIDEr | 109.49 | | Unverified
7 | FudanFVL | CIDEr | 106.55 | | Unverified
8 | FudanWYZ | CIDEr | 103.75 | | Unverified
9 | Human | CIDEr | 91.62 | | Unverified
10 | firethehole | CIDEr | 88.54 | | Unverified