SOTAVerified

Image Captioning

Image captioning is the task of describing the content of an image in words. It lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework: an input image is encoded into an intermediate representation of its content, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated with BLEU or CIDEr metrics.
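To make the evaluation side concrete, below is a minimal sketch of sentence-level BLEU: clipped n-gram precision averaged geometrically over n = 1..4, multiplied by a brevity penalty. This is a simplified illustration only; official COCO evaluations use the coco-caption toolkit, which also applies smoothing and corpus-level aggregation that this sketch omits.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, references, max_n=4):
    """Simplified sentence-level BLEU with clipped n-gram precision,
    uniform weights, and the standard brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        if not cand_counts:
            return 0.0  # candidate too short to form any n-gram
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref_counts = Counter()
        for ref in refs:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref_counts[g] = max(max_ref_counts[g], c)
        clipped = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0  # zero precision at some order -> BLEU is 0
        log_prec_sum += math.log(clipped / sum(cand_counts.values()))
    # Brevity penalty against the closest reference length.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(log_prec_sum / max_n)
```

An exact match scores 1.0, while partial n-gram overlap yields a score strictly between 0 and 1. CIDEr follows a similar n-gram-matching idea but weights n-grams by TF-IDF across the reference corpus, which is why its scores are on a different scale (often above 100).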

(Image credit: Reflective Decoding Network for Image Captioning, ICCV '19)

Papers

Showing 301–350 of 1878 papers

Title | Status | Hype
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models | Code | 1
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting | Code | 1
Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1
Describe What to Change: A Text-guided Unsupervised Image-to-Image Translation Approach | Code | 1
DeltaNet: Conditional Medical Report Generation for COVID-19 Diagnosis | Code | 1
MemeCap: A Dataset for Captioning and Interpreting Memes | Code | 1
Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA | Code | 1
Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training | Code | 1
Dense Relational Image Captioning via Multi-task Triple-Stream Networks | Code | 1
Dense Relational Captioning: Triple-Stream Networks for Relationship-Based Captioning | Code | 1
BRIDGE: Bridging Gaps in Image Captioning Evaluation with Stronger Visual Cues | Code | 1
Discovering Non-monotonic Autoregressive Orderings with Variational Inference | Code | 1
Detecting and Recovering Sequential DeepFake Manipulation | Code | 1
CgT-GAN: CLIP-guided Text GAN for Image Captioning | Code | 1
A large annotated corpus for learning natural language inference | Code | 1
Discovering Autoregressive Orderings with Variational Inference | Code | 1
DiffX: Guide Your Layout to Cross-Modal Generative Modeling | Code | 1
Diffusion Bridge: Leveraging Diffusion Model to Reduce the Modality Gap Between Text and Vision for Zero-Shot Image Captioning | Code | 1
ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax | Code | 1
Brain Captioning: Decoding human brain activity into images and text | Code | 1
ChatEarthNet: A Global-Scale Image-Text Dataset Empowering Vision-Language Geo-Foundation Models | Code | 1
ConvNet Architecture Search for Spatiotemporal Feature Learning | Code | 1
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models | Code | 1
Diverse Image Captioning with Context-Object Split Latent Spaces | Code | 1
Mutual Information Divergence: A Unified Metric for Multimodal Generative Models | Code | 1
Myriad: Large Multimodal Model by Applying Vision Experts for Industrial Anomaly Detection | Code | 1
It is Okay to Not Be Okay: Overcoming Emotional Bias in Affective Image Captioning by Contrastive Data Collection | Code | 1
Adapting Grad-CAM for Embedding Networks | Code | 1
LaB-RAG: Label Boosted Retrieval Augmented Generation for Radiology Report Generation | Code | 1
Dual-branch Hybrid Learning Network for Unbiased Scene Graph Generation | Code | 1
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1
Noise-aware Learning from Web-crawled Image-Text Data for Image Captioning | Code | 1
A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions | Code | 1
CLIP-Diffusion-LM: Apply Diffusion Model on Image Captioning | Code | 1
InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1
A Survey on Efficient Vision-Language Models | Code | 1
CLIPScore: A Reference-free Evaluation Metric for Image Captioning | Code | 1
CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation | Code | 1
Bootstrapping Interactive Image-Text Alignment for Remote Sensing Image Captioning | Code | 1
Boostlet.js: Image processing plugins for the web via JavaScript injection | Code | 1
Consensus-Aware Visual-Semantic Embedding for Image-Text Matching | Code | 1
Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | Code | 1
COBRA: Contrastive Bi-Modal Representation Algorithm | Code | 1
Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner | Code | 1
CoCa: Contrastive Captioners are Image-Text Foundation Models | Code | 1
Paying Attention to Descriptions Generated by Image Captioning Models | Code | 1
InfoMetIC: An Informative Metric for Reference-free Image Caption Evaluation | Code | 1
Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory | Code | 1
Confidence-aware Non-repetitive Multimodal Transformers for TextCaps | Code | 1
Improving Image Captioning with Better Use of Captions | Code | 1
Page 7 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IBM Research AI | CIDEr | 80.67 | — | Unverified
2 | CASIA_IVA | CIDEr | 79.15 | — | Unverified
3 | feixiang | CIDEr | 77.31 | — | Unverified
4 | wocao | CIDEr | 77.21 | — | Unverified
5 | lamiwab172 | CIDEr | 75.93 | — | Unverified
6 | RUC_AIM3 | CIDEr | 73.52 | — | Unverified
7 | funas | CIDEr | 73.51 | — | Unverified
8 | SRC-B_VCLab | CIDEr | 73.47 | — | Unverified
9 | sparta | CIDEr | 73.41 | — | Unverified
10 | x-viz | CIDEr | 73.26 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VALOR | CIDEr | 152.5 | — | Unverified
2 | VAST | CIDEr | 149 | — | Unverified
3 | Virtex (ResNet-101) | CIDEr | 94 | — | Unverified
4 | KOSMOS-1 (1.6B) (zero-shot) | CIDEr | 84.7 | — | Unverified
5 | BLIP-FuseCap | CLIPScore | 78.5 | — | Unverified
6 | mPLUG | BLEU-4 | 46.5 | — | Unverified
7 | OFA | BLEU-4 | 44.9 | — | Unverified
8 | GIT | BLEU-4 | 44.1 | — | Unverified
9 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | BLEU-4 | 43.7 | — | Unverified
10 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | BLEU-4 | 43.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 149.1 | — | Unverified
2 | GIT2, Single Model | CIDEr | 124.18 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.4 | — | Unverified
4 | PaLI | CIDEr | 121.09 | — | Unverified
5 | CoCa - Google Brain | CIDEr | 117.9 | — | Unverified
6 | Microsoft Cognitive Services team | CIDEr | 112.82 | — | Unverified
7 | Single Model | CIDEr | 108.98 | — | Unverified
8 | GRIT (zero-shot, no VL pretraining, no CBS) | CIDEr | 105.9 | — | Unverified
9 | FudanFVL | CIDEr | 104.9 | — | Unverified
10 | FudanWYZ | CIDEr | 104.25 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GIT2, Single Model | CIDEr | 125.51 | — | Unverified
2 | PaLI | CIDEr | 124.35 | — | Unverified
3 | GIT, Single Model | CIDEr | 123.92 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 120.73 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 115.54 | — | Unverified
6 | Single Model | CIDEr | 110.76 | — | Unverified
7 | FudanFVL | CIDEr | 109.33 | — | Unverified
8 | FudanWYZ | CIDEr | 108.04 | — | Unverified
9 | IEDA-LAB | CIDEr | 100.15 | — | Unverified
10 | firethehole | CIDEr | 99.51 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | CIDEr | 126.67 | — | Unverified
2 | GIT2, Single Model | CIDEr | 122.27 | — | Unverified
3 | GIT, Single Model | CIDEr | 122.04 | — | Unverified
4 | CoCa - Google Brain | CIDEr | 121.69 | — | Unverified
5 | Microsoft Cognitive Services team | CIDEr | 110.14 | — | Unverified
6 | Single Model | CIDEr | 109.49 | — | Unverified
7 | FudanFVL | CIDEr | 106.55 | — | Unverified
8 | FudanWYZ | CIDEr | 103.75 | — | Unverified
9 | Human | CIDEr | 91.62 | — | Unverified
10 | firethehole | CIDEr | 88.54 | — | Unverified