SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words, and lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of the information it contains, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with the BLEU or CIDEr metric.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
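Metrics such as BLEU score a generated caption by its n-gram overlap with reference captions. The sketch below is a simplified sentence-level BLEU in plain Python, assuming a single reference caption and no smoothing (standard corpus-level BLEU, as used on COCO, aggregates n-gram statistics over the whole test set and supports multiple references); the `bleu` and `ngrams` names are illustrative, not from any particular library.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU-4 against a single reference.

    Assumes a non-empty candidate; real implementations add smoothing
    so that a single zero n-gram precision does not zero the score.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())        # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions.
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short captions.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * geo_mean

cand = "a dog runs across the grass".split()
ref = "a dog runs across the green grass".split()
score = bleu(cand, ref)
```

CIDEr works on the same n-gram-overlap idea but TF-IDF-weights the n-grams over the corpus, so that rare, informative phrases count for more than common ones.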

Papers

Showing 151–200 of 1878 papers

| Title | Status | Hype |
| --- | --- | --- |
| Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning | Code | 1 |
| Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Code | 1 |
| VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation | Code | 1 |
| A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions | Code | 1 |
| Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator | Code | 1 |
| Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos | Code | 1 |
| Mitigating Open-Vocabulary Caption Hallucinations | Code | 1 |
| Bootstrapping Interactive Image-Text Alignment for Remote Sensing Image Captioning | Code | 1 |
| Contrastive Vision-Language Alignment Makes Efficient Instruction Learner | Code | 1 |
| Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models | Code | 1 |
| Zero-shot audio captioning with audio-language model guidance and audio context keywords | Code | 1 |
| InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1 |
| NeuSyRE: Neuro-Symbolic Visual Understanding and Reasoning Framework based on Scene Graph Enrichment | Code | 1 |
| Sam-Guided Enhanced Fine-Grained Encoding with Mixed Semantic Learning for Medical Image Captioning | Code | 1 |
| Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts | Code | 1 |
| Myriad: Large Multimodal Model by Applying Vision Experts for Industrial Anomaly Detection | Code | 1 |
| CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages | Code | 1 |
| Visual Grounding Helps Learn Word Meanings in Low-Data Regimes | Code | 1 |
| Sieve: Multimodal Dataset Pruning Using Image Captioning Models | Code | 1 |
| Beyond Generation: Harnessing Text to Image Models for Object Detection and Segmentation | Code | 1 |
| Exchanging-based Multimodal Fusion with Transformer | Code | 1 |
| CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation | Code | 1 |
| MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning | Code | 1 |
| VIGC: Visual Instruction Generation and Correction | Code | 1 |
| With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning | Code | 1 |
| CgT-GAN: CLIP-guided Text GAN for Image Captioning | Code | 1 |
| VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control | Code | 1 |
| Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection | Code | 1 |
| GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | Code | 1 |
| Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model | Code | 1 |
| Transferable Decoding with Visual Entities for Zero-Shot Image Captioning | Code | 1 |
| RSGPT: A Remote Sensing Vision Language Model and Benchmark | Code | 1 |
| Image Captions are Natural Prompts for Text-to-Image Models | Code | 1 |
| Linear Alignment of Vision-language Models for Image Captioning | Code | 1 |
| Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints | Code | 1 |
| Palm: Predicting Actions through Language Models @ Ego4D Long-Term Action Anticipation Challenge 2023 | Code | 1 |
| What Makes ImageNet Look Unlike LAION | Code | 1 |
| Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1 |
| Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1 |
| Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards | Code | 1 |
| Understanding and Mitigating Copying in Diffusion Models | Code | 1 |
| Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models | Code | 1 |
| FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions | Code | 1 |
| CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers | Code | 1 |
| FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing | Code | 1 |
| Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models | Code | 1 |
| Exploring Diverse In-Context Configurations for Image Captioning | Code | 1 |
| Text encoders bottleneck compositionality in contrastive vision-language models | Code | 1 |
| MemeCap: A Dataset for Captioning and Interpreting Memes | Code | 1 |
| What Makes for Good Visual Tokenizers for Large Language Models? | Code | 1 |
Page 4 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IBM Research AI | CIDEr | 80.67 | | Unverified |
| 2 | CASIA_IVA | CIDEr | 79.15 | | Unverified |
| 3 | feixiang | CIDEr | 77.31 | | Unverified |
| 4 | wocao | CIDEr | 77.21 | | Unverified |
| 5 | lamiwab172 | CIDEr | 75.93 | | Unverified |
| 6 | RUC_AIM3 | CIDEr | 73.52 | | Unverified |
| 7 | funas | CIDEr | 73.51 | | Unverified |
| 8 | SRC-B_VCLab | CIDEr | 73.47 | | Unverified |
| 9 | sparta | CIDEr | 73.41 | | Unverified |
| 10 | x-viz | CIDEr | 73.26 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VALOR | CIDEr | 152.5 | | Unverified |
| 2 | VAST | CIDEr | 149 | | Unverified |
| 3 | Virtex (ResNet-101) | CIDEr | 94 | | Unverified |
| 4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | | Unverified |
| 5 | BLIP-FuseCap | CLIPScore | 78.5 | | Unverified |
| 6 | mPLUG | BLEU-4 | 46.5 | | Unverified |
| 7 | OFA | BLEU-4 | 44.9 | | Unverified |
| 8 | GIT | BLEU-4 | 44.1 | | Unverified |
| 9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | | Unverified |
| 10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | CIDEr | 149.1 | | Unverified |
| 2 | GIT2, Single Model | CIDEr | 124.18 | | Unverified |
| 3 | GIT, Single Model | CIDEr | 122.4 | | Unverified |
| 4 | PaLI | CIDEr | 121.09 | | Unverified |
| 5 | CoCa - Google Brain | CIDEr | 117.9 | | Unverified |
| 6 | Microsoft Cognitive Services team | CIDEr | 112.82 | | Unverified |
| 7 | Single Model | CIDEr | 108.98 | | Unverified |
| 8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | | Unverified |
| 9 | FudanFVL | CIDEr | 104.9 | | Unverified |
| 10 | FudanWYZ | CIDEr | 104.25 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GIT2, Single Model | CIDEr | 125.51 | | Unverified |
| 2 | PaLI | CIDEr | 124.35 | | Unverified |
| 3 | GIT, Single Model | CIDEr | 123.92 | | Unverified |
| 4 | CoCa - Google Brain | CIDEr | 120.73 | | Unverified |
| 5 | Microsoft Cognitive Services team | CIDEr | 115.54 | | Unverified |
| 6 | Single Model | CIDEr | 110.76 | | Unverified |
| 7 | FudanFVL | CIDEr | 109.33 | | Unverified |
| 8 | FudanWYZ | CIDEr | 108.04 | | Unverified |
| 9 | IEDA-LAB | CIDEr | 100.15 | | Unverified |
| 10 | firethehole | CIDEr | 99.51 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | CIDEr | 126.67 | | Unverified |
| 2 | GIT2, Single Model | CIDEr | 122.27 | | Unverified |
| 3 | GIT, Single Model | CIDEr | 122.04 | | Unverified |
| 4 | CoCa - Google Brain | CIDEr | 121.69 | | Unverified |
| 5 | Microsoft Cognitive Services team | CIDEr | 110.14 | | Unverified |
| 6 | Single Model | CIDEr | 109.49 | | Unverified |
| 7 | FudanFVL | CIDEr | 106.55 | | Unverified |
| 8 | FudanWYZ | CIDEr | 103.75 | | Unverified |
| 9 | Human | CIDEr | 91.62 | | Unverified |
| 10 | firethehole | CIDEr | 88.54 | | Unverified |