SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, in which an input image is encoded into an intermediate representation of its content and then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with metrics such as BLEU or CIDEr.
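The standard COCO evaluation uses corpus-level metric implementations (e.g. the coco-caption toolkit); as a rough illustration of how the BLEU score mentioned above works, here is a minimal, self-contained sentence-level BLEU sketch with clipped n-gram precision and a brevity penalty. It is a simplified teaching version, not the official evaluation code, and it applies no smoothing (so a caption with zero 4-gram overlap scores 0).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Unsmoothed sentence-level BLEU-4 of a candidate caption
    against one or more reference captions (whitespace-tokenized)."""
    cand = candidate.split()
    refs = [r.split() for r in references]

    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for r in refs:
            for g, c in Counter(ngrams(r, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        p = clipped / total
        log_precisions.append(math.log(p) if p > 0 else float("-inf"))

    # Brevity penalty: penalize candidates shorter than the closest reference.
    ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(cand)), l))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))

    # Geometric mean of the n-gram precisions, scaled by the brevity penalty.
    return bp * math.exp(sum(log_precisions) / max_n)
```

A caption identical to a reference scores 1.0, and a caption sharing no unigrams with any reference scores 0.0; real evaluation pipelines add smoothing and aggregate over the whole test corpus.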

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 1–50 of 1878 papers

Title | Status | Hype
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos | — | 0
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval | — | 0
HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0
Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning | Code | 2
ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs | Code | 1
A Novel Lightweight Transformer with Edge-Aware Fusion for Remote Sensing Image Captioning | — | 0
An Open-Source Software Toolkit & Benchmark Suite for the Evaluation and Adaptation of Multimodal Action Models | — | 0
DiscoVLA: Discrepancy Reduction in Vision, Language, and Alignment for Parameter-Efficient Video-Text Retrieval | Code | 1
Edit Flows: Flow Matching with Edit Operations | — | 0
Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings | Code | 0
Better Reasoning with Less Data: Enhancing VLMs Through Unified Modality Scoring | — | 0
GTR-CoT: Graph Traversal as Visual Chain of Thought for Molecular Structure Recognition | Code | 0
Hallucination at a Glance: Controlled Visual Edits and Fine-Grained Multimodal Learning | — | 0
Stepwise Decomposition and Dual-stream Focus: A Novel Approach for Training-free Camouflaged Object Segmentation | Code | 0
SRD: Reinforcement-Learned Semantic Perturbation for Backdoor Defense in VLMs | — | 0
Attention-based transformer models for image captioning across languages: An in-depth survey and evaluation | — | 0
Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models | — | 0
Puzzled by Puzzles: When Vision-Language Models Can't Take a Hint | Code | 1
CLDTracker: A Comprehensive Language Description for Visual Tracking | Code | 0
Document-Level Text Generation with Minimum Bayes Risk Decoding using Optimal Transport | Code | 0
Beam-Guided Knowledge Replay for Knowledge-Rich Image Captioning using Vision-Language Model | — | 0
Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain) | Code | 0
SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards | Code | 1
TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP | — | 0
Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning | Code | 4
Steering LVLMs via Sparse Autoencoder for Hallucination Mitigation | — | 0
Redemption Score: An Evaluation Framework to Rank Image Captions While Redeeming Image Semantics and Language Pragmatics | — | 0
SCENIR: Visual Semantic Clarity through Unsupervised Scene Graph Retrieval | Code | 0
MedBLIP: Fine-tuning BLIP for Medical Image Captioning | — | 0
NOVA: A Benchmark for Anomaly Localization and Clinical Reasoning in Brain MRI | — | 0
RAVENEA: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding | Code | 0
Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models | — | 0
Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping | — | 0
Temporally-Grounded Language Generation: A Benchmark for Real-Time Vision-Language Models | Code | 0
Cross-Image Contrastive Decoding: Precise, Lossless Suppression of Language Priors in Large Vision-Language Models | — | 0
A Grounded Memory System For Smart Personal Assistants | — | 0
Describe Anything in Medical Images | — | 0
ArtRAG: Retrieval-Augmented Generation with Structured Context for Visual Art Understanding | — | 0
Mitigating Image Captioning Hallucinations in Vision-Language Models | — | 0
Compositional Image-Text Matching and Retrieval by Grounding Entities | Code | 0
Transferable Adversarial Attacks on Black-Box Vision-Language Models | — | 0
Zoomer: Adaptive Image Focus Optimization for Black-box MLLM | — | 0
MicarVLMoE: A Modern Gated Cross-Aligned Vision-Language Mixture of Experts Model for Medical Image Captioning and Report Generation | Code | 0
Zero-Shot, But at What Cost? Unveiling the Hidden Overhead of MILS's LLM-CLIP Framework for Image Captioning | — | 0
Are Vision LLMs Road-Ready? A Comprehensive Benchmark for Safety-Critical Driving Video Understanding | Code | 0
Generalized Visual Relation Detection with Diffusion Models | — | 0
LVLM_CSP: Accelerating Large Vision Language Models via Clustering, Scattering, and Pruning for Reasoning Segmentation | — | 0
TADACap: Time-series Adaptive Domain-Aware Captioning | — | 0
Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks | — | 0
SilVar-Med: A Speech-Driven Visual Language Model for Explainable Abnormality Detection in Medical Imaging | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified