SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with BLEU or CIDEr metrics.
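Since the leaderboards below report BLEU-4 and CIDEr, a rough illustration of how n-gram overlap metrics score a caption may help. The following is a minimal, simplified sentence-level BLEU sketch (whitespace tokenization only, no smoothing; the function name and signature are ours, not from any library). Real evaluations use corpus-level implementations such as the COCO caption evaluation toolkit.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precision
    (geometric mean over 1..max_n) times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for r in refs:
            for g, c in ngrams(r, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(clipped / total) if clipped else float("-inf"))
    # Brevity penalty against the reference closest in length.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An exact match scores 1.0, any caption with no 4-gram overlap scores 0.0 (this sketch omits the smoothing that production implementations apply), and partial overlaps fall in between. CIDEr works on the same n-gram principle but TF-IDF-weights the n-grams and averages cosine similarity against multiple references.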

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 1801–1850 of 1878 papers

| Title | Status | Hype |
| --- | --- | --- |
| Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training | Code | 0 |
| Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space | Code | 0 |
| Unifying Text, Tables, and Images for Multimodal Question Answering | Code | 0 |
| Fraternal Dropout | Code | 0 |
| Unrestricted Adversarial Examples via Semantic Manipulation | Code | 0 |
| Fluency-Guided Cross-Lingual Image Captioning | Code | 0 |
| FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning | Code | 0 |
| Fine-Grained Image Captioning with Global-Local Discriminative Objective | Code | 0 |
| #PraCegoVer: A Large Dataset for Image Captioning in Portuguese | Code | 0 |
| UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning | Code | 0 |
| Pragmatic Issue-Sensitive Image Captioning | Code | 0 |
| Precision or Recall? An Analysis of Image Captions for Training Text-to-Image Generation Model | Code | 0 |
| A Benchmark for Multi-Lingual Vision-Language Learning in Remote Sensing Image Captioning | Code | 0 |
| Beyond Temporal Pooling: Recurrence and Temporal Convolutions for Gesture Recognition in Video | Code | 0 |
| Beyond Human Data: Aligning Multimodal Large Language Models by Iterative Self-Evolution | Code | 0 |
| Annotation Order Matters: Recurrent Image Annotator for Arbitrary Length Image Tagging | Code | 0 |
| Technical Report of NICE Challenge at CVPR 2024: Caption Re-ranking Evaluation Using Ensembled CLIP and Consensus Scores | Code | 0 |
| Pretrained Image-Text Models are Secretly Video Captioners | Code | 0 |
| Finding beans in burgers: Deep semantic-visual embedding with localization | Code | 0 |
| Visually-Aware Context Modeling for News Image Captioning | Code | 0 |
| "Wikily" Supervised Neural Translation Tailored to Cross-Lingual Tasks | Code | 0 |
| Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for Language Generation | Code | 0 |
| An Eye for an Ear: Zero-shot Audio Description Leveraging an Image Captioner using Audiovisual Distribution Alignment | Code | 0 |
| Face-Cap: Image Captioning using Facial Expression Analysis | Code | 0 |
| Zero-shot Translation of Attention Patterns in VQA Models to Natural Language | Code | 0 |
| Context-Aware Visual Policy Network for Sequence-Level Image Captioning | Code | 0 |
| Expressing Visual Relationships via Language | Code | 0 |
| Context-aware Captions from Context-agnostic Supervision | Code | 0 |
| Visual Question Answering: which investigated applications? | Code | 0 |
| TexLiDAR: Automated Text Understanding for Panoramic LiDAR Data | Code | 0 |
| ContCap: A scalable framework for continual image captioning | Code | 0 |
| Protecting Intellectual Property of Language Generation APIs with Lexical Watermark | Code | 0 |
| Exploring the Synergy Between Vision-Language Pretraining and ChatGPT for Artwork Captioning: A Preliminary Study | Code | 0 |
| PR Product: A Substitute for Inner Product in Neural Networks | Code | 0 |
| Exploring Nearest Neighbor Approaches for Image Captioning | Code | 0 |
| Aesthetic Attributes Assessment of Images | Code | 0 |
| Visual Semantic Relatedness Dataset for Image Captioning | Code | 0 |
| Exploring Multi-Grained Concept Annotations for Multimodal Large Language Models | Code | 0 |
| An Examination of the Robustness of Reference-Free Image Captioning Evaluation Metrics | Code | 0 |
| Quality Estimation for Image Captions Based on Large-scale Human Evaluations | Code | 0 |
| Exploring Annotation-free Image Captioning with Retrieval-augmented Pseudo Sentence Generation | Code | 0 |
| Quantifying the amount of visual information used by neural caption generators | Code | 0 |
| Unsupervised Image Captioning | Code | 0 |
| Quantifying the visual concreteness of words and topics in multimodal datasets | Code | 0 |
| Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection | Code | 0 |
| Experimenting with Self-Supervision using Rotation Prediction for Image Captioning | Code | 0 |
| Exploring the sequence length bottleneck in the Transformer for Image Captioning | Code | 0 |
| Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images | Code | 0 |
| Text-driven Adaptation of Foundation Models for Few-shot Surgical Workflow Analysis | Code | 0 |
| Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables | Code | 0 |
Page 37 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IBM Research AI | CIDEr | 80.67 | | Unverified |
| 2 | CASIA_IVA | CIDEr | 79.15 | | Unverified |
| 3 | feixiang | CIDEr | 77.31 | | Unverified |
| 4 | wocao | CIDEr | 77.21 | | Unverified |
| 5 | lamiwab172 | CIDEr | 75.93 | | Unverified |
| 6 | RUC_AIM3 | CIDEr | 73.52 | | Unverified |
| 7 | funas | CIDEr | 73.51 | | Unverified |
| 8 | SRC-B_VCLab | CIDEr | 73.47 | | Unverified |
| 9 | sparta | CIDEr | 73.41 | | Unverified |
| 10 | x-viz | CIDEr | 73.26 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VALOR | CIDEr | 152.5 | | Unverified |
| 2 | VAST | CIDEr | 149 | | Unverified |
| 3 | Virtex (ResNet-101) | CIDEr | 94 | | Unverified |
| 4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | | Unverified |
| 5 | BLIP-FuseCap | CLIPScore | 78.5 | | Unverified |
| 6 | mPLUG | BLEU-4 | 46.5 | | Unverified |
| 7 | OFA | BLEU-4 | 44.9 | | Unverified |
| 8 | GIT | BLEU-4 | 44.1 | | Unverified |
| 9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | | Unverified |
| 10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | CIDEr | 149.1 | | Unverified |
| 2 | GIT2, Single Model | CIDEr | 124.18 | | Unverified |
| 3 | GIT, Single Model | CIDEr | 122.4 | | Unverified |
| 4 | PaLI | CIDEr | 121.09 | | Unverified |
| 5 | CoCa - Google Brain | CIDEr | 117.9 | | Unverified |
| 6 | Microsoft Cognitive Services team | CIDEr | 112.82 | | Unverified |
| 7 | Single Model | CIDEr | 108.98 | | Unverified |
| 8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | | Unverified |
| 9 | FudanFVL | CIDEr | 104.9 | | Unverified |
| 10 | FudanWYZ | CIDEr | 104.25 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GIT2, Single Model | CIDEr | 125.51 | | Unverified |
| 2 | PaLI | CIDEr | 124.35 | | Unverified |
| 3 | GIT, Single Model | CIDEr | 123.92 | | Unverified |
| 4 | CoCa - Google Brain | CIDEr | 120.73 | | Unverified |
| 5 | Microsoft Cognitive Services team | CIDEr | 115.54 | | Unverified |
| 6 | Single Model | CIDEr | 110.76 | | Unverified |
| 7 | FudanFVL | CIDEr | 109.33 | | Unverified |
| 8 | FudanWYZ | CIDEr | 108.04 | | Unverified |
| 9 | IEDA-LAB | CIDEr | 100.15 | | Unverified |
| 10 | firethehole | CIDEr | 99.51 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | CIDEr | 126.67 | | Unverified |
| 2 | GIT2, Single Model | CIDEr | 122.27 | | Unverified |
| 3 | GIT, Single Model | CIDEr | 122.04 | | Unverified |
| 4 | CoCa - Google Brain | CIDEr | 121.69 | | Unverified |
| 5 | Microsoft Cognitive Services team | CIDEr | 110.14 | | Unverified |
| 6 | Single Model | CIDEr | 109.49 | | Unverified |
| 7 | FudanFVL | CIDEr | 106.55 | | Unverified |
| 8 | FudanWYZ | CIDEr | 103.75 | | Unverified |
| 9 | Human | CIDEr | 91.62 | | Unverified |
| 10 | firethehole | CIDEr | 88.54 | | Unverified |