SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, in which an input image is encoded into an intermediate representation of the information in the image and then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with the BLEU or CIDEr metrics.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
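BLEU, one of the two metrics mentioned above, scores a candidate caption by clipped n-gram precision against reference captions, combined with a brevity penalty. Below is a minimal, self-contained sketch of sentence-level BLEU-4 with uniform weights and add-one smoothing; the function name and smoothing choice are illustrative, not the official evaluation toolkit.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(candidate, references):
    """Sentence-level BLEU-4 sketch: clipped n-gram precision for
    n = 1..4, geometric mean, brevity penalty. Add-one smoothing keeps
    short candidates from scoring exactly zero (an assumption of this
    sketch, not part of the original BLEU definition)."""
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(candidate, n))
        # Per-n-gram maximum count over all references (for clipping).
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(count, max_ref_counts[gram])
                      for gram, count in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append((clipped + 1) / (total + 1))  # add-one smoothing
    # Brevity penalty against the closest reference length.
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = (1.0 if len(candidate) > ref_len
          else math.exp(1 - ref_len / max(len(candidate), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

For example, a candidate identical to its single reference scores 1.0, while a shorter paraphrase is penalized both on higher-order n-gram precision and by the brevity penalty.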

Papers

Showing 1401–1450 of 1878 papers

Title | Hype
WuDaoMM: A large-scale Multi-Modal Dataset for Pre-training models | 0
XGPT: Cross-modal Generative Pre-Training for Image Captioning | 0
Zero-Resource Neural Machine Translation with Multi-Agent Communication Game | 0
Zero-Shot, But at What Cost? Unveiling the Hidden Overhead of MILS's LLM-CLIP Framework for Image Captioning | 0
Zero-shot Image Captioning by Anchor-augmented Vision-Language Space Alignment | 0
Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning | 0
0/1 Deep Neural Networks via Block Coordinate Descent | 0
Learning to Disambiguate by Asking Discriminative Questions | 0
Learning to generalize to new compositions in image understanding | 0
Learning to Guide Decoding for Image Captioning | 0
Learning to Relate from Captions and Bounding Boxes | 0
Learning to Select: A Fully Attentive Approach for Novel Object Captioning | 0
Learning Visual-Linguistic Adequacy, Fidelity, and Fluency for Novel Object Captioning | 0
Learning Visual Representations with Caption Annotations | 0
Learning Word Embeddings for Low-Resource Languages by PU Learning | 0
Let's Go Shopping (LGS) -- Web-Scale Image-Text Dataset for Visual Concept Understanding | 0
"Let's not Quote out of Context": Unified Vision-Language Pretraining for Context Assisted Image Captioning | 0
Leveraging Partial Dependency Trees to Control Image Captions | 0
Leveraging Sentence Similarity in Natural Language Generation: Improving Beam Search using Range Voting | 0
Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-Modal Knowledge Transfer | 0
Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer | 0
Lexical Simplification with the Deep Structured Similarity Model | 0
LG-VQ: Language-Guided Codebook Learning | 0
Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models | 0
Lightweight In-Context Tuning for Multimodal Unified Models | 0
Linguistically-aware Attention for Reducing the Semantic-Gap in Vision-Language Tasks | 0
Exploiting Image Captions and External Knowledge as Representation Enhancement for Visual Question Answering (利用图像描述与知识图谱增强表示的视觉问答) | 0
LLaMA-Excitor: General Instruction Tuning via Indirect Feature Interaction | 0
LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning | 0
LLM4VG: Large Language Models Evaluation for Video Grounding | 0
LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks | 0
LocCa: Visual Pretraining with Location-aware Captioners | 0
Longer Version for "Deep Context-Encoding Network for Retinal Image Captioning" | 0
Long-Tail Classification for Distinctive Image Captioning: A Simple yet Effective Remedy for Side Effects of Reinforcement Learning | 0
Look Back and Predict Forward in Image Captioning | 0
Look Deeper See Richer: Depth-aware Image Paragraph Captioning | 0
LookupViT: Compressing visual information to a limited number of tokens | 0
Lost in Translation: When GPT-4V(ision) Can't See Eye to Eye with Text. A Vision-Language-Consistency Analysis of VLLMs and Beyond | 0
LVLM_CSP: Accelerating Large Vision Language Models via Clustering, Scattering, and Pruning for Reasoning Segmentation | 0
Lyrics: Boosting Fine-grained Language-Vision Alignment and Comprehension via Semantic-aware Visual Objects | 0
M3D-GAN: Multi-Modal Multi-Domain Translation with Universal Attention | 0
Macroscopic Control of Text Generation for Image Captioning | 0
MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-based Image Captioning | 0
MAGNet: Multi-Region Attention-Assisted Grounding of Natural Language Queries at Phrase Level | 0
Making the Most of What You Have: Adapting Pre-trained Visual Language Models in the Low-data Regime | 0
Making Use of Latent Space in Language GANs for Generating Diverse Text without Pre-training | 0
MAMI: Multi-Attentional Mutual-Information for Long Sequence Neuron Captioning | 0
Mapping Images to Sentiment Adjective Noun Pairs with Factorized Neural Nets | 0
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval | 0
Mask-free OVIS: Open-Vocabulary Instance Segmentation without Manual Mask Annotations | 0
Page 29 of 38

Benchmark Results

# | Model | Metric | Claimed | Status
1 | IBM Research AI | CIDEr | 80.67 | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | Unverified
3 | feixiang | CIDEr | 77.31 | Unverified
4 | wocao | CIDEr | 77.21 | Unverified
5 | lamiwab172 | CIDEr | 75.93 | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | Unverified
7 | funas | CIDEr | 73.51 | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | Unverified
9 | sparta | CIDEr | 73.41 | Unverified
10 | x-viz | CIDEr | 73.26 | Unverified
# | Model | Metric | Claimed | Status
1 | VALOR | CIDEr | 152.5 | Unverified
2 | VAST | CIDEr | 149 | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | Unverified
6 | mPLUG | BLEU-4 | 46.5 | Unverified
7 | OFA | BLEU-4 | 44.9 | Unverified
8 | GIT | BLEU-4 | 44.1 | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | Unverified
# | Model | Metric | Claimed | Status
1 | PaLI | CIDEr | 149.1 | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | Unverified
4 | PaLI | CIDEr | 121.09 | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | Unverified
7 | Single Model | CIDEr | 108.98 | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | Unverified
9 | FudanFVL | CIDEr | 104.9 | Unverified
10 | FudanWYZ | CIDEr | 104.25 | Unverified
# | Model | Metric | Claimed | Status
1 | GIT2, Single Model | CIDEr | 125.51 | Unverified
2 | PaLI | CIDEr | 124.35 | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | Unverified
6 | Single Model | CIDEr | 110.76 | Unverified
7 | FudanFVL | CIDEr | 109.33 | Unverified
8 | FudanWYZ | CIDEr | 108.04 | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | Unverified
10 | firethehole | CIDEr | 99.51 | Unverified
# | Model | Metric | Claimed | Status
1 | PaLI | CIDEr | 126.67 | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | Unverified
6 | Single Model | CIDEr | 109.49 | Unverified
7 | FudanFVL | CIDEr | 106.55 | Unverified
8 | FudanWYZ | CIDEr | 103.75 | Unverified
9 | Human | CIDEr | 91.62 | Unverified
10 | firethehole | CIDEr | 88.54 | Unverified
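Most of the benchmark results above report CIDEr, which scores a candidate caption by the cosine similarity of TF-IDF-weighted n-gram vectors against each reference caption, averaged over n = 1..4 and over references, and scaled by 10 (so perfect consensus scores 10 at the sentence level; leaderboard values above 100 come from CIDEr conventionally being reported as a percentage-style score). A simplified, self-contained sketch under that definition; the function names are illustrative, and the length penalty and count clipping of the commonly used CIDEr-D variant are omitted:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    # Multiset of contiguous n-grams of a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider(candidates, refs_per_image, max_n=4):
    """Simplified CIDEr sketch. `candidates` is one token list per image;
    `refs_per_image` is a list of reference token lists per image.
    IDF is computed over the reference corpus: an n-gram's document
    frequency is the number of images whose references contain it."""
    num_images = len(refs_per_image)
    per_n_scores = [[] for _ in range(max_n)]
    for n in range(1, max_n + 1):
        # Document frequency over the reference corpus (per image).
        df = Counter()
        for refs in refs_per_image:
            grams = set()
            for ref in refs:
                grams.update(ngram_counts(ref, n))
            df.update(grams)

        def tfidf_vec(tokens):
            # TF * (log N - log df); unseen n-grams get idf = log N.
            counts = ngram_counts(tokens, n)
            return {g: c * (math.log(num_images) - math.log(max(df[g], 1)))
                    for g, c in counts.items()}

        def cosine(u, v):
            dot = sum(u[g] * v.get(g, 0.0) for g in u)
            nu = math.sqrt(sum(x * x for x in u.values()))
            nv = math.sqrt(sum(x * x for x in v.values()))
            return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

        for cand, refs in zip(candidates, refs_per_image):
            cv = tfidf_vec(cand)
            sims = [cosine(cv, tfidf_vec(r)) for r in refs]
            per_n_scores[n - 1].append(sum(sims) / len(sims))

    # Average over n-gram orders, then over images; scale by 10.
    per_image = [sum(per_n_scores[n][i] for n in range(max_n)) / max_n
                 for i in range(num_images)]
    return 10.0 * sum(per_image) / num_images
```

The TF-IDF weighting is what distinguishes CIDEr from BLEU: n-grams common across the whole corpus (e.g. "a", "on the") are down-weighted, so the metric rewards content words the human references agree on.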