SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, in which an input image is encoded into an intermediate representation of its content and then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with BLEU or CIDEr metrics.
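As a rough illustration of that encoder-decoder pipeline, the sketch below pairs a ResNet-50 image encoder with a small Transformer decoder in PyTorch. The architecture, dimensions, and toy vocabulary size are illustrative assumptions, not any particular published model.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    """Toy encoder-decoder captioner: CNN image encoder + Transformer text decoder."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        backbone = models.resnet50(weights=None)                 # image encoder (untrained here)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.proj = nn.Linear(2048, d_model)                     # 2048-d ResNet feature -> d_model
        self.embed = nn.Embedding(vocab_size, d_model)           # caption token embeddings
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)            # next-token prediction

    def forward(self, images, tokens):
        feats = self.encoder(images).flatten(1)                  # (B, 2048) image representation
        memory = self.proj(feats).unsqueeze(1)                   # (B, 1, d_model) decoder "memory"
        tgt = self.embed(tokens)                                 # (B, T, d_model)
        T = tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)         # left-to-right (causal) decoding
        return self.lm_head(out)                                 # (B, T, vocab_size) logits

model = CaptionModel(vocab_size=10_000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10_000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

Training such a model would minimize cross-entropy between the predicted logits and the ground-truth caption tokens; at inference time, captions are generated token by token with greedy or beam search.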

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 101–150 of 1878 papers

Title | Status | Hype
Weakly Supervised Video Scene Graph Generation via Natural Language Supervision | Code | 1
GAIA: A Global, Multi-modal, Multi-scale Vision-Language Dataset for Remote Sensing Image Analysis | Code | 1
Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models | Code | 1
PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1
LAVCap: LLM-based Audio-Visual Captioning using Optimal Transport | Code | 1
RadAlign: Advancing Radiology Report Generation with Vision-Language Concept Alignment | Code | 1
Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Code | 1
Diffusion Bridge: Leveraging Diffusion Model to Reduce the Modality Gap Between Text and Vision for Zero-Shot Image Captioning | Code | 1
Typhoon 2: A Family of Open Text and Multimodal Thai Large Language Models | Code | 1
G-VEval: A Versatile Metric for Evaluating Image and Video Captions Using GPT-4o | Code | 1
MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants | Code | 1
Benchmarking Large Vision-Language Models via Directed Scene Graph for Comprehensive Image Captioning | Code | 1
LaB-RAG: Label Boosted Retrieval Augmented Generation for Radiology Report Generation | Code | 1
FG-CXR: A Radiologist-Aligned Gaze Dataset for Enhancing Interpretability in Chest X-Ray Report Generation | Code | 1
LMM-driven Semantic Image-Text Coding for Ultra Low-bitrate Learned Image Compression | Code | 1
Nearest Neighbor Normalization Improves Multimodal Retrieval | Code | 1
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1
IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning | Code | 1
Instruction-guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model | Code | 1
YesBut: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models | Code | 1
LIME: Less Is More for MLLM Evaluation | Code | 1
MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models | Code | 1
See or Guess: Counterfactually Regularized Image Captioning | Code | 1
Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization | Code | 1
BRIDGE: Bridging Gaps in Image Captioning Evaluation with Stronger Visual Cues | Code | 1
DiffX: Guide Your Layout to Cross-Modal Generative Modeling | Code | 1
AVCap: Leveraging Audio-Visual Features as Text Tokens for Captioning | Code | 1
Pseudo-RIS: Distinctive Pseudo-supervision Generation for Referring Image Segmentation | Code | 1
MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment | Code | 1
MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1
ImageNet3D: Towards General-Purpose Object-Level 3D Understanding | Code | 1
FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model | Code | 1
RTGen: Generating Region-Text Pairs for Open-Vocabulary Object Detection | Code | 1
UniRAG: Universal Retrieval Augmentation for Large Vision Language Models | Code | 1
Boostlet.js: Image processing plugins for the web via JavaScript injection | Code | 1
LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation? | Code | 1
Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1
Harnessing the Power of Large Vision Language Models for Synthetic Image Detection | Code | 1
Bi-LORA: A Vision-Language Approach for Synthetic Image Detection | Code | 1
Disentangled Pre-training for Human-Object Interaction Detection | Code | 1
Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory | Code | 1
Can We Talk Models Into Seeing the World Differently? | Code | 1
Differentially Private Representation Learning via Image Captioning | Code | 1
Polos: Multimodal Metric Learning from Human Feedback for Image Captioning | Code | 1
Distinctive Image Captioning: Leveraging Ground Truth Captions in CLIP Guided Reinforcement Learning | Code | 1
ChatEarthNet: A Global-Scale Image-Text Dataset Empowering Vision-Language Geo-Foundation Models | Code | 1
Text-Guided Image Clustering | Code | 1
SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval | Code | 1
Veagle: Advancements in Multimodal Representation Learning | Code | 1
Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training | Code | 1

Benchmark Results
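
The leaderboards below report mostly CIDEr numbers, plus a few BLEU-4 and CLIPScore entries. As a rough sketch of how a CIDEr score is typically computed, the snippet below uses the pycocoevalcap package on made-up candidate/reference captions; real evaluations score against the full COCO or nocaps reference sets and run PTB tokenization on both sides first.

```python
# Minimal CIDEr scoring sketch with pycocoevalcap (pip install pycocoevalcap).
# Image ids and captions are placeholders; the official pipeline also applies
# PTBTokenizer to candidates and references before scoring.
from pycocoevalcap.cider.cider import Cider

references = {                       # image id -> list of reference captions
    "img1": ["a dog runs across a grassy field", "a brown dog running on grass"],
    "img2": ["two people ride bicycles down a street"],
}
candidates = {                       # image id -> single generated caption
    "img1": ["a dog running in the grass"],
    "img2": ["people riding bikes on a road"],
}

corpus_score, per_image = Cider().compute_score(references, candidates)
print(corpus_score)                  # corpus-level CIDEr
print(per_image)                     # one score per image
```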

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | - | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | - | Unverified
3 | feixiang | CIDEr | 77.31 | - | Unverified
4 | wocao | CIDEr | 77.21 | - | Unverified
5 | lamiwab172 | CIDEr | 75.93 | - | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | - | Unverified
7 | funas | CIDEr | 73.51 | - | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | - | Unverified
9 | sparta | CIDEr | 73.41 | - | Unverified
10 | x-viz | CIDEr | 73.26 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | - | Unverified
2 | VAST | CIDEr | 149 | - | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | - | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | - | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | - | Unverified
6 | mPLUG | BLEU-4 | 46.5 | - | Unverified
7 | OFA | BLEU-4 | 44.9 | - | Unverified
8 | GIT | BLEU-4 | 44.1 | - | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | - | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | - | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | - | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | - | Unverified
4 | PaLI | CIDEr | 121.09 | - | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | - | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | - | Unverified
7 | Single Model | CIDEr | 108.98 | - | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | - | Unverified
9 | FudanFVL | CIDEr | 104.9 | - | Unverified
10 | FudanWYZ | CIDEr | 104.25 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | - | Unverified
2 | PaLI | CIDEr | 124.35 | - | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | - | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | - | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | - | Unverified
6 | Single Model | CIDEr | 110.76 | - | Unverified
7 | FudanFVL | CIDEr | 109.33 | - | Unverified
8 | FudanWYZ | CIDEr | 108.04 | - | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | - | Unverified
10 | firethehole | CIDEr | 99.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | - | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | - | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | - | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | - | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | - | Unverified
6 | Single Model | CIDEr | 109.49 | - | Unverified
7 | FudanFVL | CIDEr | 106.55 | - | Unverified
8 | FudanWYZ | CIDEr | 103.75 | - | Unverified
9 | Human | CIDEr | 91.62 | - | Unverified
10 | firethehole | CIDEr | 88.54 | - | Unverified