SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words, and it lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with BLEU or CIDEr metrics.
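Since the leaderboards below rank models largely by n-gram overlap metrics, a minimal sketch of sentence-level BLEU may make the scores concrete. This is a simplified single-reference version in pure Python; the official corpus-level metric pools clipped counts over multiple references per image and typically applies smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights and a brevity penalty.

    `candidate` and `reference` are whitespace-tokenised caption strings.
    A single reference is assumed here for simplicity.
    """
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped matches: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            # Without smoothing, the geometric mean collapses to zero
            # if any n-gram precision is zero.
            return 0.0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty discourages overly short captions.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An exact match scores 1.0, a near-paraphrase scores somewhere in between, and a caption sharing no 4-gram with the reference scores 0.0 under this unsmoothed variant. CIDEr differs mainly in weighting n-grams by TF-IDF across the reference corpus before comparing them.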

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 651-700 of 1878 papers

Title | Status | Hype
Mindstorms in Natural Language-Based Societies of Mind | | 0
HAAV: Hierarchical Aggregation of Augmented Views for Image Captioning | | 0
Exploring Diverse In-Context Configurations for Image Captioning | Code | 1
EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought | | 0
An Examination of the Robustness of Reference-Free Image Captioning Evaluation Metrics | Code | 0
Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models | Code | 1
Gender Biases in Automatic Evaluation Metrics for Image Captioning | Code | 0
Exploring Affordance and Situated Meaning in Image Captions: A Multimodal Analysis | | 0
Alt-Text with Context: Improving Accessibility for Images on Twitter | | 0
Text encoders bottleneck compositionality in contrastive vision-language models | Code | 1
PIC-XAI: Post-hoc Image Captioning Explanation using Segmentation | Code | 0
MemeCap: A Dataset for Captioning and Interpreting Memes | Code | 1
Text-based Person Search without Parallel Image-Text Data | | 0
What Makes for Good Visual Tokenizers for Large Language Models? | Code | 1
A request for clarity over the End of Sequence token in the Self-Critical Sequence Training | Code | 0
DiffCap: Exploring Continuous Diffusion on Image Captioning | | 0
Cross2StrA: Unpaired Cross-lingual Image Captioning with Cross-lingual Cross-modal Structure-pivoted Alignment | | 0
Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner | Code | 1
Brain Captioning: Decoding human brain activity into images and text | Code | 1
Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with Foundation Models | Code | 1
Semantic Composition in Visually Grounded Language Models | | 0
IMAGINATOR: Pre-Trained Image+Text Joint Embeddings using Word-Level Grounding of Images | Code | 0
Simple Token-Level Confidence Improves Caption Correctness | | 0
Towards L-System Captioning for Tree Reconstruction | | 0
InfoMetIC: An Informative Metric for Reference-free Image Caption Evaluation | Code | 1
WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset | Code | 3
Vision-Language Models in Remote Sensing: Current Progress and Future Trends | Code | 1
Exploiting Pseudo Image Captions for Multimodal Summarization | | 0
UIT-OpenViIC: A Novel Benchmark for Evaluating Image Captioning in Vietnamese | | 0
The Role of Data Curation in Image Captioning | Code | 0
A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding | Code | 0
Image Captioners Sometimes Tell More Than Images They See | | 0
Caption Anything: Interactive Image Description with Diverse Multimodal Controls | Code | 3
Making the Most of What You Have: Adapting Pre-trained Visual Language Models in the Low-data Regime | | 0
Multimodal Data Augmentation for Image Captioning using Diffusion Models | Code | 0
Fairness in AI Systems: Mitigating gender bias from language-vision models | | 0
Transforming Visual Scene Graphs to Image Captions | Code | 1
Quality-agnostic Image Captioning to Safely Assist People with Vision Impairment | | 0
Learning Human-Human Interactions in Images from Weak Textual Supervision | | 0
From Association to Generation: Text-only Captioning by Unsupervised Cross-modal Mapping | Code | 1
TTIDA: Controllable Generative Data Augmentation via Text-to-Text and Text-to-Image Models | Code | 0
VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset | Code | 2
A-CAP: Anticipation Captioning with Commonsense Knowledge | | 0
Advancing Medical Imaging with Language Models: A Journey from N-grams to ChatGPT | | 0
Boosting Cross-task Transferability of Adversarial Patches with Visual Relations | | 0
ImageCaptioner^2: Image Captioner for Image Captioning Bias Amplification Assessment | | 0
Model-Agnostic Gender Debiased Image Captioning | Code | 0
Uncurated Image-Text Datasets: Shedding Light on Demographic Bias | Code | 1
Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models | | 0
Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data | | 0
Page 14 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | | Unverified
3 | feixiang | CIDEr | 77.31 | | Unverified
4 | wocao | CIDEr | 77.21 | | Unverified
5 | lamiwab172 | CIDEr | 75.93 | | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | | Unverified
7 | funas | CIDEr | 73.51 | | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | | Unverified
9 | sparta | CIDEr | 73.41 | | Unverified
10 | x-viz | CIDEr | 73.26 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | | Unverified
2 | VAST | CIDEr | 149 | | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | | Unverified
6 | mPLUG | BLEU-4 | 46.5 | | Unverified
7 | OFA | BLEU-4 | 44.9 | | Unverified
8 | GIT | BLEU-4 | 44.1 | | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | | Unverified
4 | PaLI | CIDEr | 121.09 | | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | | Unverified
7 | Single Model | CIDEr | 108.98 | | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | | Unverified
9 | FudanFVL | CIDEr | 104.9 | | Unverified
10 | FudanWYZ | CIDEr | 104.25 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | | Unverified
2 | PaLI | CIDEr | 124.35 | | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | | Unverified
6 | Single Model | CIDEr | 110.76 | | Unverified
7 | FudanFVL | CIDEr | 109.33 | | Unverified
8 | FudanWYZ | CIDEr | 108.04 | | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | | Unverified
10 | firethehole | CIDEr | 99.51 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | | Unverified
6 | Single Model | CIDEr | 109.49 | | Unverified
7 | FudanFVL | CIDEr | 106.55 | | Unverified
8 | FudanWYZ | CIDEr | 103.75 | | Unverified
9 | Human | CIDEr | 91.62 | | Unverified
10 | firethehole | CIDEr | 88.54 | | Unverified