SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an encoder maps the input image to an intermediate representation of its content, and a decoder generates a descriptive text sequence from that representation. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with metrics such as BLEU and CIDEr.
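A minimal sketch of the encoder-decoder pattern described above, using a toy vocabulary and random weights (all names here are hypothetical; a real system uses a pretrained CNN/ViT encoder and a learned Transformer or LSTM decoder):

```python
import random

random.seed(0)
VOCAB = ["<start>", "<end>", "a", "dog", "cat", "runs", "sleeps"]
W2I = {w: i for i, w in enumerate(VOCAB)}
FEAT_DIM = 16

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical toy weights; a real system learns these during training.
W_ENC = rand_matrix(48, FEAT_DIM)            # flattened pixels -> image feature
W_FEAT = rand_matrix(FEAT_DIM, len(VOCAB))   # image feature -> vocab logits
W_TOK = rand_matrix(len(VOCAB), len(VOCAB))  # previous token -> vocab logits

def matvec(vec, mat):
    return [sum(v * mat[i][j] for i, v in enumerate(vec)) for j in range(len(mat[0]))]

def encode(pixels):
    """Toy 'encoder': linear projection of flattened pixels to a feature vector."""
    return matvec(pixels, W_ENC)

def decode_greedy(feat, max_len=8):
    """Toy autoregressive decoder: fuse the image feature with the previous
    token's logits and greedily pick the highest-scoring next word."""
    tokens = [W2I["<start>"]]
    for _ in range(max_len):
        img_logits = matvec(feat, W_FEAT)
        tok_logits = W_TOK[tokens[-1]]
        logits = [a + b for a, b in zip(img_logits, tok_logits)]
        nxt = max(range(len(VOCAB)), key=logits.__getitem__)
        if nxt == W2I["<end>"]:
            break
        tokens.append(nxt)
    return [VOCAB[t] for t in tokens[1:]]

pixels = [random.random() for _ in range(48)]  # a fake 4x4 RGB "image"
caption = decode_greedy(encode(pixels))
print(caption)
```

Real decoders also use beam search or sampling instead of pure greedy arg-max, which tends to produce more fluent and diverse captions.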

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)
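The BLEU metric mentioned above combines clipped n-gram precisions with a brevity penalty. A compact sentence-level sketch, simplified from the standard formulation (real evaluations use corpus-level BLEU with smoothing, e.g. via sacrebleu or the COCO caption evaluation toolkit):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Simplified; production code smooths zero counts."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        if not cand_counts:
            return 0.0
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in refs:
            for gram, c in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], c)
        clipped = sum(min(c, max_ref[gram]) for gram, c in cand_counts.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / sum(cand_counts.values())))
    # Brevity penalty: penalize candidates shorter than the closest reference.
    ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(cand)), l))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("a dog runs in the park", ["a dog runs in the park"]))  # 1.0 for an exact match
```

CIDEr works differently: it compares TF-IDF-weighted n-gram vectors of the candidate against the references via cosine similarity, which rewards content words that human captioners consistently use.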

Papers

Showing 701–750 of 1878 papers

Title | Status | Hype
Im2Text: Describing Images Using 1 Million Captioned Photographs | | 0
Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models | | 0
Flowing from Words to Pixels: A Framework for Cross-Modality Evolution | | 0
Flowing from Words to Pixels: A Noise-Free Framework for Cross-Modality Evolution | | 0
A Survey of Vision-Language Pre-training from the Lens of Multimodal Machine Translation | | 0
Fluent and Accurate Image Captioning with a Self-Trained Reward Model | | 0
Focused Evaluation for Image Description with Binary Forced-Choice Tasks | | 0
Focus! Relevant and Sufficient Context Selection for News Image Captioning | | 0
FODA-PG for Enhanced Medical Imaging Narrative Generation: Adaptive Differentiation of Normal and Abnormal Attributes | | 0
Foundation Models for Remote Sensing: An Analysis of MLLMs for Object Localization | | 0
Attend More Times for Image Captioning | | 0
FaceGemma: Enhancing Image Captioning with Facial Attributes for Portrait Images | | 0
Extended Self-Critical Pipeline for Transforming Videos to Text (TRECVID-VTT Task 2021) -- Team: MMCUniAugsburg | | 0
From Captions to Rewards (CAREVL): Leveraging Large Language Model Experts for Enhanced Reward Modeling in Large Vision-Language Models | | 0
Comparative study of Transformer and LSTM Network with attention mechanism on Image Captioning | | 0
How Vision-Language Tasks Benefit from Large Pre-trained Models: A Survey | | 0
From Pixels to Prose: A Large Dataset of Dense Image Captions | | 0
Aligning Large Multimodal Models with Factually Augmented RLHF | | 0
From Show to Tell: A Survey on Deep Learning-based Image Captioning | | 0
Comparing Recurrent and Convolutional Architectures for English-Hindi Neural Machine Translation | | 0
How to Bridge the Gap between Modalities: Survey on Multimodal Large Language Model | | 0
Exposing and Correcting the Gender Bias in Image Captioning Datasets and Models | | 0
FullAnno: A Data Engine for Enhancing Image Comprehension of MLLMs | | 0
Exploring Visual Relationship for Image Captioning | | 0
Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain | | 0
Fusion Models for Improved Visual Captioning | | 0
A Survey of Evaluation Metrics Used for NLG Systems | | 0
How Vision Affects Language: Comparing Masked Self-Attention in Uni-Modal and Multi-Modal Transformer | | 0
Human Action Adverb Recognition: ADHA Dataset and A Three-Stream Hybrid Model | | 0
GCS-M3VLT: Guided Context Self-Attention based Multi-modal Medical Vision Language Transformer for Retinal Image Captioning | | 0
Exploring Visual Culture Awareness in GPT-4V: A Comprehensive Probing | | 0
AstroLLaVA: towards the unification of astronomical data and natural language | | 0
Generalized Visual Relation Detection with Diffusion Models | | 0
3D Spatial Understanding in MLLMs: Disambiguation and Evaluation | | 0
Exploring Affordance and Situated Meaning in Image Captions: A Multimodal Analysis | | 0
Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models | | 0
Generating Description for Sequential Images with Local-Object Attention Conditioned on Global Semantic Context | | 0
Generating Diverse and Descriptive Image Captions Using Visual Paraphrases | | 0
CLAMP: Contrastive LAnguage Model Prompt-tuning | | 0
Generating Diverse and Informative Natural Language Fashion Feedback | | 0
HOW IMPORTANT ARE NETWORK WEIGHTS? TO WHAT EXTENT DO THEY NEED AN UPDATE? | | 0
Generating image captions with external encyclopedic knowledge | | 0
Exploring the Frontier of Vision-Language Models: A Survey of Current Methodologies and Future Directions | | 0
Attention Strategies for Multi-Source Sequence-to-Sequence Learning | | 0
Exploring Spatial Language Grounding Through Referring Expressions | | 0
Generating Natural Language Descriptions for Semantic Representations of Human Brain Activity | | 0
Generating Triples with Adversarial Networks for Scene Graph Construction | | 0
Generating Video Descriptions with Topic Guidance | | 0
Connecting Language and Vision to Actions | | 0
Exploring Semantic Relationships for Unpaired Image Captioning | | 0
Page 15 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | | Unverified
3 | feixiang | CIDEr | 77.31 | | Unverified
4 | wocao | CIDEr | 77.21 | | Unverified
5 | lamiwab172 | CIDEr | 75.93 | | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | | Unverified
7 | funas | CIDEr | 73.51 | | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | | Unverified
9 | sparta | CIDEr | 73.41 | | Unverified
10 | x-viz | CIDEr | 73.26 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | | Unverified
2 | VAST | CIDEr | 149 | | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | | Unverified
6 | mPLUG | BLEU-4 | 46.5 | | Unverified
7 | OFA | BLEU-4 | 44.9 | | Unverified
8 | GIT | BLEU-4 | 44.1 | | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | | Unverified
4 | PaLI | CIDEr | 121.09 | | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | | Unverified
7 | Single Model | CIDEr | 108.98 | | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | | Unverified
9 | FudanFVL | CIDEr | 104.9 | | Unverified
10 | FudanWYZ | CIDEr | 104.25 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | | Unverified
2 | PaLI | CIDEr | 124.35 | | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | | Unverified
6 | Single Model | CIDEr | 110.76 | | Unverified
7 | FudanFVL | CIDEr | 109.33 | | Unverified
8 | FudanWYZ | CIDEr | 108.04 | | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | | Unverified
10 | firethehole | CIDEr | 99.51 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | | Unverified
6 | Single Model | CIDEr | 109.49 | | Unverified
7 | FudanFVL | CIDEr | 106.55 | | Unverified
8 | FudanWYZ | CIDEr | 103.75 | | Unverified
9 | Human | CIDEr | 91.62 | | Unverified
10 | firethehole | CIDEr | 88.54 | | Unverified