SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with metrics such as BLEU or CIDEr.
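To make the n-gram metrics concrete, here is a minimal sentence-level BLEU sketch in plain Python. This is a simplified illustration, not the official COCO evaluation toolkit: real evaluations aggregate counts over the whole test set and apply standard tokenization, and the smoothing used here for zero n-gram counts is an assumption to keep short sentences from scoring zero.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: modified n-gram precision + brevity penalty.

    `candidate` is the generated caption; `references` is a list of
    human captions. Simplified/smoothed for illustration only.
    """
    cand = candidate.split()
    refs = [r.split() for r in references]

    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref in refs:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref_counts[g] = max(max_ref_counts[g], c)
        clipped = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Smooth zero counts so log() is defined on short sentences (assumption).
        p = clipped / total if clipped > 0 else 1.0 / (2 * total)
        log_precisions.append(math.log(p))

    # Brevity penalty against the reference whose length is closest.
    ref_len = min((len(r) for r in refs), key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A caption identical to a reference scores 1.0, while a near-miss such as "a man on a horse" against the reference "a man riding a horse" scores strictly between 0 and 1. CIDEr works similarly over n-grams but weights them by TF-IDF computed across the whole corpus, so it rewards informative words rather than common ones.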

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 1601–1650 of 1878 papers

Title | Status | Hype
GroupCap: Group-Based Image Captioning With Structured Relevance and Diversity Constraints | — | 0
Categorizing Concepts With Basic Level for Vision-to-Language | — | 0
Interpretable Video Captioning via Trajectory Structured Localization | — | 0
Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models | — | 0
Telling Stories with Soundtracks: An Empirical Analysis of Music in Film | — | 0
Generative Bridging Network for Neural Sequence Prediction | — | 0
Generating Image Captions in Arabic using Root-Word Based Recurrent Neural Networks and Deep Neural Networks | — | 0
Visually Guided Spatial Relation Extraction from Text | — | 0
Learning Word Embeddings for Low-Resource Languages by PU Learning | — | 0
Dialog Generation Using Multi-Turn Reasoning Neural Networks | — | 0
How Time Matters: Learning Time-Decay Attention for Contextual Spoken Language Understanding in Dialogues | Code | 0
Fast, Diverse and Accurate Image Captioning Guided By Part-of-Speech | — | 0
Neural Joking Machine: Humorous image captioning | — | 0
Grow and Prune Compact, Fast, and Accurate LSTMs | — | 0
CNN+CNN: Convolutional Decoders for Image Captioning | Code | 1
Joint Image Captioning and Question Answering | — | 0
Turbo Learning for Captionbot and Drawingbot | — | 0
Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search | Code | 1
SemStyle: Learning to Generate Stylised Image Captions using Unaligned Text | Code | 0
Improving Image Captioning with Conditional Generative Adversarial Nets | Code | 0
Defoiling Foiled Image Captions | Code | 0
Token-level and sequence-level loss smoothing for RNN language models | Code | 0
Image Captioning | Code | 0
A vision-grounded dataset for predicting typical locations for verbs | — | 0
Incorporating Semantic Attention in Video Description Generation | — | 0
Edit me: A Corpus and a Framework for Understanding Natural Language Image Editing | — | 0
Annotating Modality Expressions and Event Factuality for a Japanese Chess Commentary Corpus | — | 0
Augmenting Image Question Answering Dataset by Exploiting Image Captions | — | 0
Visual Choice of Plausible Alternatives: An Evaluation of Image-based Commonsense Causal Reasoning | Code | 0
Neural Caption Generation for News Images | — | 0
Adversarial Semantic Alignment for Improved Image Captions | — | 0
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Code | 0
Object Counts! Bringing Explicit Detections Back into Image Captioning | — | 0
Entity-aware Image Caption Generation | — | 0
Quantifying the visual concreteness of words and topics in multimodal datasets | Code | 0
Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer | Code | 0
Pragmatically Informative Image Captioning with Character-Level Inference | — | 0
Decoupled Novel Object Captioner | Code | 0
Discovery and usage of joint attention in images | — | 0
Natural Language Statistical Features of LSTM-generated Texts | — | 0
Finding beans in burgers: Deep semantic-visual embedding with localization | Code | 0
Learning to Guide Decoding for Image Captioning | — | 0
Guide Me: Interacting with Deep Networks | — | 0
Regularizing RNNs for Caption Generation by Reconstructing The Past with The Present | Code | 0
Two can play this Game: Visual Dialog with Discriminative Question Generation and Answering | — | 0
Neural Baby Talk | Code | 0
Women also Snowboard: Overcoming Bias in Captioning Models | Code | 1
Show, Tell and Discriminate: Image Captioning by Self-retrieval with Partially Labeled Data | — | 0
Unpaired Image Captioning by Language Pivoting | — | 0
Discriminability objective for training descriptive captions | Code | 0
Page 33 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified