SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with BLEU or CIDEr metrics.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
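To make the evaluation side concrete, below is a minimal, illustrative sketch of sentence-level BLEU in pure Python: clipped n-gram precisions combined by a geometric mean, times a brevity penalty. The function name and the absence of smoothing are our own simplifications; real benchmark numbers come from the official evaluation toolkits, not from code like this.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Single-sentence BLEU: clipped n-gram precision up to max_n,
    geometric mean, brevity penalty. No smoothing, so any zero
    precision sends the score to zero."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        if not cand_ngrams:  # candidate too short to have any n-grams
            return 0.0
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in refs:
            for gram, count in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / sum(cand_ngrams.values())))
    # Brevity penalty: penalise candidates shorter than the closest reference.
    ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(cand)), l))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

For example, `bleu("a dog runs on grass", ["a dog runs on the grass"])` yields a score strictly between 0 and 1, while an exact match scores 1.0. CIDEr differs mainly in weighting n-grams by TF-IDF across the reference corpus and averaging cosine similarities rather than taking clipped precisions.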

Papers

Showing 101–150 of 1878 papers

Title | Status | Hype
FG-CXR: A Radiologist-Aligned Gaze Dataset for Enhancing Interpretability in Chest X-Ray Report Generation | Code | 1
ChatEarthNet: A Global-Scale Image-Text Dataset Empowering Vision-Language Geo-Foundation Models | Code | 1
Chart-to-Text: A Large-Scale Benchmark for Chart Summarization | Code | 1
CLIP-Diffusion-LM: Apply Diffusion Model on Image Captioning | Code | 1
Expressive Scene Graph Generation Using Commonsense Knowledge Infusion for Visual Understanding and Reasoning | Code | 1
Evolving Deep Neural Networks | Code | 1
Evaluating Multimodal Representations on Visual Semantic Textual Similarity | Code | 1
Exchanging-based Multimodal Fusion with Transformer | Code | 1
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1
Can images help recognize entities? A study of the role of images for Multimodal NER | Code | 1
Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning | Code | 1
Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1
Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner | Code | 1
A neural attention model for speech command recognition | Code | 1
ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation | Code | 1
CIDEr: Consensus-based Image Description Evaluation | Code | 1
Can Audio Captions Be Evaluated with Image Caption Metrics? | Code | 1
FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks | Code | 1
CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages | Code | 1
CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation | Code | 1
Instruction-guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model | Code | 1
Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints | Code | 1
CNN+CNN: Convolutional Decoders for Image Captioning | Code | 1
Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | Code | 1
Exploring Discrete Diffusion Models for Image Captioning | Code | 1
FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model | Code | 1
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1
Analysis of diversity-accuracy tradeoff in image captioning | Code | 1
CaMEL: Mean Teacher Learning for Image Captioning | Code | 1
Learning to Generate Grounded Visual Captions without Localization Supervision | Code | 1
Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning | Code | 1
End-to-End Supermask Pruning: Learning to Prune Image Captioning Models | Code | 1
Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with Foundation Models | Code | 1
BRIDGE: Bridging Gaps in Image Captioning Evaluation with Stronger Visual Cues | Code | 1
EDSL: An Encoder-Decoder Architecture with Symbol-Level Features for Printed Mathematical Expression Recognition | Code | 1
Brain Captioning: Decoding human brain activity into images and text | Code | 1
CgT-GAN: CLIP-guided Text GAN for Image Captioning | Code | 1
Egoshots, an ego-vision life-logging dataset and semantic fidelity metric to evaluate diversity in image captioning models | Code | 1
End-to-End Transformer Based Model for Image Captioning | Code | 1
Exploring Diverse In-Context Configurations for Image Captioning | Code | 1
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models | Code | 1
Diverse Image Captioning with Context-Object Split Latent Spaces | Code | 1
Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory | Code | 1
Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search | Code | 1
Distinctive Image Captioning: Leveraging Ground Truth Captions in CLIP Guided Reinforcement Learning | Code | 1
Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning | Code | 1
DiscoVLA: Discrepancy Reduction in Vision, Language, and Alignment for Parameter-Efficient Video-Text Retrieval | Code | 1
Bi-LORA: A Vision-Language Approach for Synthetic Image Detection | Code | 1
Boostlet.js: Image processing plugins for the web via JavaScript injection | Code | 1
Disentangled Pre-training for Human-Object Interaction Detection | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified