SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with metrics such as BLEU or CIDEr.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
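Since the benchmarks above score captions with n-gram metrics like BLEU, a self-contained sketch of BLEU-style scoring may help make the numbers concrete. This is an illustrative, simplified implementation (add-one smoothing, single candidate), not the official coco-caption evaluation code; the function name `bleu` and the example captions are our own.

```python
# Minimal sketch of BLEU-style scoring for one candidate caption:
# clipped (modified) n-gram precision for n = 1..4, combined by a
# geometric mean and multiplied by a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in refs:
            for g, c in ngrams(ref, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing so one missing n-gram order doesn't zero the score.
        log_precisions.append(math.log((clipped + 1) / (total + 1)))
    # Brevity penalty: penalize candidates shorter than the closest reference.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

score = bleu("a dog runs on the grass",
             ["a dog is running on the grass", "the dog runs across grass"])
```

A candidate that exactly matches a reference scores 1.0; the partial match above lands strictly between 0 and 1.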

Papers

Showing 1701–1750 of 1878 papers

Title | Status | Hype
Cold Fusion: Training Seq2Seq Models Together with Language Models | — | 0
Incorporating Copying Mechanism in Image Captioning for Learning Novel Objects | — | 0
ConvNet Architecture Search for Spatiotemporal Feature Learning | Code | 1
Fluency-Guided Cross-Lingual Image Captioning | Code | 0
Learning to Disambiguate by Asking Discriminative Questions | — | 0
What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator? | Code | 0
UdL at SemEval-2017 Task 1: Semantic Textual Similarity Estimation of English Sentence Pairs Using Regression Model over Pairwise Features | Code | 0
MITRE at SemEval-2017 Task 1: Simple Semantic Similarity | — | 0
Deep Interactive Region Segmentation and Captioning | — | 0
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1
OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts | — | 0
Order-Free RNN with Visual Attention for Multi-Label Classification | — | 0
CUNI System for the WMT17 Multimodal Translation Task | — | 0
Where to Play: Retrieval of Video Segments using Natural-Language Queries | — | 0
Learning Object Interactions and Descriptions for Semantic Image Segmentation | — | 0
Neural Scene De-Rendering | — | 0
Multimodal Machine Learning: Integrating Language, Vision and Speech | — | 0
Abstractive Document Summarization with a Graph-Based Attentional Neural Model | — | 0
Attention Strategies for Multi-Source Sequence-to-Sequence Learning | — | 0
Automated Audio Captioning with Recurrent Neural Networks | — | 0
Actor-Critic Sequence Training for Image Captioning | — | 0
Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention | — | 0
One Model To Learn Them All | — | 0
Image Captioning with Object Detection and Localization | — | 0
Teaching Machines to Describe Images via Natural Language Feedback | — | 0
Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning | — | 0
Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML | — | 0
Learning Hard Alignments with Variational Inference | — | 0
CHAM: action recognition using convolutional hierarchical attention model | — | 0
STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset | Code | 0
Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner | Code | 0
Paying Attention to Descriptions Generated by Image Captioning Models | Code | 1
Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition | — | 0
Attend to You: Personalized Image Captioning with Context Sequence Memory Networks | Code | 0
Neural Extractive Summarization with Side Information | Code | 0
Deep Reinforcement Learning-based Image Captioning with Embedding Reward | — | 0
Bayesian Recurrent Neural Networks | Code | 1
The BreakingNews Dataset | — | 0
Continuous multilinguality with language vectors | — | 0
Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training | Code | 1
I2T2I: Learning Text to Image Synthesis with Textual Data Augmentation | — | 0
Recurrent Models for Situation Recognition | — | 0
Towards Diverse and Natural Image Descriptions via a Conditional GAN | Code | 0
Evolving Deep Neural Networks | Code | 1
MIML-FCN+: Multi-instance Multi-label Learning via Fully Convolutional Networks with Privileged Information | — | 0
ViP-CNN: Visual Phrase Guided Convolutional Neural Network | — | 0
MAT: A Multimodal Attentive Translator for Image Captioning | — | 0
Deep Network Guided Proof Search | — | 0
Context-aware Captions from Context-agnostic Supervision | Code | 0
Vision and Language Integration: Moving beyond Objects | — | 0
Page 35 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified
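Most of the leaderboards above rank models by CIDEr, which scores a candidate caption by TF-IDF-weighted n-gram agreement with a consensus of reference captions. The sketch below is a simplified illustration of that idea, not the official coco-caption implementation: it omits the length penalty and clipping used by CIDEr-D, and the function name `cider` and the toy corpus are our own.

```python
# Illustrative CIDEr-style scorer: for n = 1..4, represent captions as
# TF-IDF vectors over n-grams (IDF estimated from a reference corpus),
# average cosine similarity over the references, then scale by 10.
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider(candidate, references, corpus, max_n=4):
    # corpus: one list of reference captions per image, used to estimate IDF.
    num_images = len(corpus)
    per_n = []
    for n in range(1, max_n + 1):
        # Document frequency: in how many images' references each n-gram occurs.
        df = Counter()
        for refs in corpus:
            seen = set()
            for r in refs:
                seen |= set(ngram_counts(r.split(), n))
            df.update(seen)
        idf = {g: math.log(num_images / df[g]) for g in df}

        def tfidf(tokens):
            return {g: c * idf.get(g, 0.0)
                    for g, c in ngram_counts(tokens, n).items()}

        def cosine(u, v):
            dot = sum(u[g] * v.get(g, 0.0) for g in u)
            nu = math.sqrt(sum(x * x for x in u.values()))
            nv = math.sqrt(sum(x * x for x in v.values()))
            return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

        cand_vec = tfidf(candidate.split())
        sims = [cosine(cand_vec, tfidf(r.split())) for r in references]
        per_n.append(sum(sims) / len(sims))
    return 10.0 * sum(per_n) / max_n  # CIDEr scales the mean similarity by 10

corpus = [
    ["a dog runs on the grass", "the dog is running outside"],
    ["a man rides a bicycle", "a person is riding a bike"],
]
score = cider("a dog runs on the grass", corpus[0], corpus)
```

With this scaling a caption sharing no informative n-grams with the references scores 0, while strong consensus pushes the score toward 10; published COCO leaderboard values are conventionally reported on a 0–100-style scale (the mean similarity times 100 per reference set), so the absolute numbers from a toy corpus are not comparable to the tables above.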