SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with n-gram metrics such as BLEU or CIDEr.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
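As a toy illustration of the n-gram metrics mentioned above, sentence-level BLEU-4 can be sketched in pure Python. This is a minimal sketch (function names `bleu4` and `ngrams` are my own): real evaluations use corpus-level BLEU with smoothing and multiple references, e.g. the COCO caption evaluation toolkit, and CIDEr additionally TF-IDF-weights the n-grams before comparing them.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 against a single reference caption.

    Toy version: no smoothing and one reference; production scoring
    uses corpus-level BLEU over multiple references per image.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = ngrams(cand, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum((cand_counts & ngrams(ref, n)).values())
        precisions.append(overlap / max(sum(cand_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0  # any empty n-gram overlap zeroes the geometric mean
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

# A perfect match scores 1.0; partial overlap scores lower.
print(bleu4("a dog runs in the park", "a dog runs in the park"))
```

Without smoothing, any caption shorter than four tokens or with no 4-gram overlap scores zero, which is one reason sentence-level BLEU is rarely reported without a smoothing scheme.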

Papers

Showing 1–50 of 1878 papers

Title | Status | Hype
RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness | Code | 11
Chameleon: Mixed-Modal Early-Fusion Foundation Models | Code | 7
RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback | Code | 6
Versatile Diffusion: Text, Images and Variations All in One Diffusion Model | Code | 6
PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation | Code | 5
YOLOR-Based Multi-Task Learning | Code | 5
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | Code | 5
Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning | Code | 4
LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation | Code | 4
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models | Code | 4
A Survey on Vision-Language-Action Models for Embodied AI | Code | 4
GPT-4V(ision) is a Generalist Web Agent, if Grounded | Code | 4
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | Code | 4
GLIPv2: Unifying Localization and Vision-Language Understanding | Code | 4
Falcon: A Remote Sensing Vision-Language Foundation Model | Code | 3
Temporal Working Memory: Query-Guided Segment Refinement for Enhanced Multimodal Understanding | Code | 3
Valley2: Exploring Multimodal Models with Scalable Vision-Language Design | Code | 3
Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion | Code | 3
Remote Sensing Temporal Vision-Language Models: A Comprehensive Survey | Code | 3
Playground v3: Improving Text-to-Image Alignment with Deep-Fusion Large Language Models | Code | 3
View Selection for 3D Captioning via Diffusion Ranking | Code | 3
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | Code | 3
Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models | Code | 3
Emu: Generative Pretraining in Multimodality | Code | 3
SVIT: Scaling up Visual Instruction Tuning | Code | 3
WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset | Code | 3
Caption Anything: Interactive Image Description with Diverse Multimodal Controls | Code | 3
Vision-Language Pre-training: Basics, Recent Advances, and Future Trends | Code | 3
All You May Need for VQA are Image Captions | Code | 3
DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models | Code | 3
Ludwig: a type-based declarative deep learning toolbox | Code | 3
Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning | Code | 2
OmniCaptioner: One Captioner to Rule Them All | Code | 2
Unified Multimodal Discrete Diffusion | Code | 2
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2
Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts | Code | 2
EvalMuse-40K: A Reliable and Fine-Grained Benchmark with Comprehensive Human Annotations for Text-to-Image Generation Model Evaluation | Code | 2
Frontiers in Intelligent Colonoscopy | Code | 2
TIPS: Text-Image Pretraining with Spatial Awareness | Code | 2
RAP: Retrieval-Augmented Personalization for Multimodal Large Language Models | Code | 2
VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding | Code | 2
Towards Vision-Language Geo-Foundation Model: A Survey | Code | 2
Yo'LLaVA: Your Personalized Language and Vision Assistant | Code | 2
From Redundancy to Relevance: Information Flow in LVLMs Across Reasoning Tasks | Code | 2
Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models | Code | 2
Benchmarking and Improving Detail Image Caption | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search | Code | 2
CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | – | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | – | Unverified
3 | feixiang | CIDEr | 77.31 | – | Unverified
4 | wocao | CIDEr | 77.21 | – | Unverified
5 | lamiwab172 | CIDEr | 75.93 | – | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | – | Unverified
7 | funas | CIDEr | 73.51 | – | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | – | Unverified
9 | sparta | CIDEr | 73.41 | – | Unverified
10 | x-viz | CIDEr | 73.26 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | – | Unverified
2 | VAST | CIDEr | 149 | – | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | – | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | – | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | – | Unverified
6 | mPLUG | BLEU-4 | 46.5 | – | Unverified
7 | OFA | BLEU-4 | 44.9 | – | Unverified
8 | GIT | BLEU-4 | 44.1 | – | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | – | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | – | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | – | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | – | Unverified
4 | PaLI | CIDEr | 121.09 | – | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | – | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | – | Unverified
7 | Single Model | CIDEr | 108.98 | – | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | – | Unverified
9 | FudanFVL | CIDEr | 104.9 | – | Unverified
10 | FudanWYZ | CIDEr | 104.25 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | – | Unverified
2 | PaLI | CIDEr | 124.35 | – | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | – | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | – | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | – | Unverified
6 | Single Model | CIDEr | 110.76 | – | Unverified
7 | FudanFVL | CIDEr | 109.33 | – | Unverified
8 | FudanWYZ | CIDEr | 108.04 | – | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | – | Unverified
10 | firethehole | CIDEr | 99.51 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | – | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | – | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | – | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | – | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | – | Unverified
6 | Single Model | CIDEr | 109.49 | – | Unverified
7 | FudanFVL | CIDEr | 106.55 | – | Unverified
8 | FudanWYZ | CIDEr | 103.75 | – | Unverified
9 | Human | CIDEr | 91.62 | – | Unverified
10 | firethehole | CIDEr | 88.54 | – | Unverified