SOTAVerified

Image-Text Matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
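The retrieval step described above is typically implemented by embedding images and texts into a shared vector space and ranking candidates by cosine similarity. The following is a minimal sketch of that ranking, using toy vectors as stand-ins for the outputs of a real dual encoder such as CLIP (the embeddings and dimensionality here are illustrative, not from any actual model):

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two sets of vectors."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy image embeddings in a shared 4-d space (stand-ins for encoder outputs).
image_embs = np.array([
    [1.0, 0.0, 0.0, 0.0],   # image 0
    [0.0, 1.0, 0.0, 0.0],   # image 1
    [0.7, 0.7, 0.0, 0.1],   # image 2
])

# Toy embedding of a textual query (e.g. a caption).
text_emb = np.array([[0.9, 0.1, 0.0, 0.0]])

scores = cosine_sim(text_emb, image_embs)   # shape (1, 3)
ranking = np.argsort(-scores[0])            # indices, best match first
print(ranking.tolist())                     # → [0, 2, 1]
```

Text-to-image retrieval returns the top-ranked images for a text query; image-to-text retrieval is the same computation with the roles of the two embedding sets swapped.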

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 1–50 of 188 papers

Title | Status | Hype
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | Code | 5
Aligning Information Capacity Between Vision and Language via Dense-to-Sparse Feature Distillation for Image-Text Matching | Code | 2
FiLo++: Zero-/Few-Shot Anomaly Detection by Fused Fine-Grained Descriptions and Deformable Localization | Code | 2
MouSi: Poly-Visual-Expert Vision-Language Models | Code | 2
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models | Code | 2
Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval | Code | 2
Language Models Can See: Plugging Visual Controls in Text Generation | Code | 2
VinVL: Revisiting Visual Representations in Vision-Language Models | Code | 2
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks | Code | 2
Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models | Code | 1
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP | Code | 1
IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Primary Word Emphasis | Code | 1
CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object Representation | Code | 1
ReCon: Enhancing True Correspondence Discrimination through Relation Consistency for Robust Noisy Correspondence Learning | Code | 1
Image-text matching for large-scale book collections | Code | 1
UGNCL: Uncertainty-Guided Noisy Correspondence Learning for Efficient Cross-Modal Matching | Code | 1
Composing Object Relations and Attributes for Image-Text Matching | Code | 1
Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching | Code | 1
RadCLIP: Enhancing Radiologic Image Analysis through Contrastive Language-Image Pre-training | Code | 1
ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation | Code | 1
Negative Pre-aware for Noisy Cross-modal Matching | Code | 1
Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding | Code | 1
Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models | Code | 1
MMoE: Enhancing Multimodal Models with Mixtures of Multimodal Interaction Experts | Code | 1
Cross-modal Active Complementary Learning with Self-refining Correspondence | Code | 1
Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval | Code | 1
Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval | Code | 1
Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination | Code | 1
Advancing Visual Grounding with Scene Knowledge: Benchmark and Method | Code | 1
UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding | Code | 1
Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark | Code | 1
Revisiting the Role of Language Priors in Vision-Language Models | Code | 1
Improved Probabilistic Image-Text Representations | Code | 1
Are Diffusion Models Vision-And-Language Reasoners? | Code | 1
Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners | Code | 1
LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation | Code | 1
Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations | Code | 1
Multimodal Image-Text Matching Improves Retrieval-based Chest X-Ray Report Generation | Code | 1
Plug-and-Play Regulators for Image-Text Matching | Code | 1
BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency | Code | 1
BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding | Code | 1
Fine-Grained Image-Text Matching by Cross-Modal Hard Aligning Network | Code | 1
Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1
A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval | Code | 1
ComCLIP: Training-Free Compositional Image and Text Matching | Code | 1
Self-supervised vision-language pretraining for Medical visual question answering | Code | 1
MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model | Code | 1
GRIT-VLP: Grouped Mini-batch Sampling for Efficient Vision and Language Pre-training | Code | 1
Zero-Shot Video Captioning with Evolving Pseudo-Tokens | Code | 1
Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer | Code | 1

No leaderboard results yet.