Image Captioning

Image captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, in which an input image is encoded into an intermediate representation of its content and then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with the BLEU or CIDEr metric.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
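
To make the encoder-decoder pipeline concrete, here is a minimal PyTorch sketch: a small CNN encodes the image into a feature vector, and an LSTM decodes that vector into word logits. Everything here is illustrative only; the tiny CNN, the single-layer LSTM, and all hyperparameters are placeholder choices, not the architecture of any paper listed on this page.

```python
# Minimal encoder-decoder captioner sketch (illustrative, not from any
# specific paper): a CNN encodes the image, an LSTM decodes a caption.
import torch
import torch.nn as nn


class CNNEncoder(nn.Module):
    """Encodes an image into a fixed-size feature vector."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.conv(images).flatten(1)  # (B, 64)
        return self.proj(feats)               # (B, embed_dim)


class LSTMDecoder(nn.Module):
    """Decodes an image embedding into a sequence of word logits."""
    def __init__(self, vocab_size: int, embed_dim: int = 256,
                 hidden_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_embed: torch.Tensor,
                captions: torch.Tensor) -> torch.Tensor:
        # Prepend the image embedding as the first "token" of the sequence.
        word_embeds = self.embed(captions)                       # (B, T, E)
        inputs = torch.cat([img_embed.unsqueeze(1), word_embeds], dim=1)
        hidden, _ = self.lstm(inputs)                            # (B, T+1, H)
        return self.out(hidden)                                  # (B, T+1, V)


# Toy forward pass: a batch of 2 RGB images and 5-token captions.
vocab_size = 1000
encoder, decoder = CNNEncoder(), LSTMDecoder(vocab_size)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, vocab_size, (2, 5))
logits = decoder(encoder(images), captions)
print(logits.shape)  # torch.Size([2, 6, 1000])
```

In practice the encoder is a pretrained backbone (ResNet, ViT) and the decoder is trained with cross-entropy against reference captions, often followed by CIDEr-based reinforcement fine-tuning.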

Papers

Showing 1651–1700 of 1878 papers

Title | Status | Hype
Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation | — | 0
A Dataset and Reranking Method for Multimodal MT of User-Generated Image Captions | — | 0
Neural Monkey: The Current State and Beyond | — | 0
Attentive Tensor Product Learning | — | 0
Zero-Resource Neural Machine Translation with Multi-Agent Communication Game | — | 0
Generating Triples with Adversarial Networks for Scene Graph Construction | — | 0
Multimodal Image Captioning for Marketing Analysis | — | 0
Human Action Adverb Recognition: ADHA Dataset and A Three-Stream Hybrid Model | — | 0
Netizen-Style Commenting on Fashion Photos: Dataset and Diversity Measures | — | 0
Image Captioning at Will: A Versatile Scheme for Effectively Injecting Sentiments into Image Descriptions | — | 0
Tell-and-Answer: Towards Explainable Visual Question Answering using Attributes and Captions | — | 0
Describing Semantic Representations of Brain Activity Evoked by Visual Stimuli | — | 0
Image Captioning using Deep Neural Architectures | Code | 0
DeepSeek: Content Based Image Search & Retrieval | — | 0
Approximate FPGA-based LSTMs under Computation Time Constraints | — | 0
Large Scale Multi-Domain Multi-Task Learning with MultiModel | — | 0
What is image captioning made of? | Code | 0
Revisiting Bayes by Backprop | — | 0
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition | — | 0
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning | Code | 0
Teaching Machines to Describe Images with Natural Language Feedback | — | 0
Deliberation Networks: Sequence Generation Beyond One-Pass Decoding | — | 0
Convolutional Image Captioning | Code | 1
ADVISE: Symbolism and External Knowledge for Decoding Advertisements | — | 0
Deep Matching Autoencoders | — | 0
Phrase-based Image Captioning with Hierarchical LSTM Model | — | 0
Image Captioning and Classification of Dangerous Situations | — | 0
Attentive Language Models | — | 0
Neural Machine Translation: Basics, Practical Aspects and Recent Trends | — | 0
Lexical Simplification with the Deep Structured Similarity Model | — | 0
Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework | — | 0
Integrating Vision and Language Datasets to Measure Word Concreteness | — | 0
Comparing Recurrent and Convolutional Architectures for English-Hindi Neural Machine Translation | — | 0
Fraternal Dropout | Code | 0
A Neural-Symbolic Approach to Design of CAPTCHA | — | 0
Attention-Based Models for Text-Dependent Speaker Verification | Code | 0
GeoSeq2Seq: Information Geometric Sequence-to-Sequence Networks | — | 0
OSU Multimodal Machine Translation System Report | — | 0
Contrastive Learning for Image Captioning | — | 0
Aesthetic Critiques Generation for Photos | — | 0
Cold-Start Reinforcement Learning with Softmax Policy Gradient | Code | 0
Self-Guiding Multimodal LSTM - when we do not have a perfect training dataset for image captioning | — | 0
Stack-Captioning: Coarse-to-Fine Learning for Image Captioning | Code | 0
Spatial Language Understanding with Multimodal Graphs using Declarative Learning based Programming | — | 0
Sheffield MultiMT: Using Object Posterior Predictions for Multimodal Machine Translation | — | 0
Generating Image Descriptions using Multilingual Data | — | 0
The AFRL-OSU WMT17 Multimodal Translation System: An Image Processing Approach | — | 0
CUNI System for the WMT17 Multimodal Translation Task | — | 0
Generating Video Descriptions with Topic Guidance | — | 0
CNN Fixations: An unraveling approach to visualize the discriminative image regions | Code | 0
Page 34 of 38

Benchmark Results
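
The tables below report CIDEr, BLEU-4, and CLIPScore numbers claimed for each submission. As a reference point for how such scores are produced, here is a minimal sketch of corpus-level BLEU-4 scoring with NLTK; the caption tokens are made up for illustration, CIDEr is typically computed with the pycocoevalcap toolkit, and exact tokenization and protocol vary by benchmark.

```python
# Minimal sketch of corpus-level BLEU-4 with NLTK (illustrative data).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each hypothesis caption is scored against multiple references
# (COCO provides five per image). These token lists are made up.
references = [
    [["a", "dog", "runs", "on", "the", "beach"],
     ["a", "dog", "running", "along", "a", "beach"]],
]
hypotheses = [["a", "dog", "runs", "along", "the", "beach"]]

bleu4 = corpus_bleu(
    references, hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),  # uniform 1- to 4-gram weights
    # Smoothing avoids a zero score when no higher-order n-gram matches.
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {bleu4:.3f}")
```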

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified