SOTAVerified

Image-text matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
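In practice, both retrieval directions are commonly handled by embedding images and texts into a shared vector space and ranking candidates by cosine similarity. The sketch below illustrates this with toy 2-D vectors; the embeddings stand in for outputs of hypothetical image and text encoders, which are assumptions and not part of this page.

```python
import numpy as np

def cosine_similarity(a, b):
    # Row-normalize both sets of embeddings, then take dot products.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy stand-ins for encoder outputs (hypothetical, for illustration only).
image_emb = np.array([[1.0, 0.0],   # image 0
                      [0.0, 1.0]])  # image 1
text_emb = np.array([[0.9, 0.1],    # text query 0
                     [0.1, 0.9]])   # text query 1

# Text-to-image retrieval: for each text query, pick the most similar image.
sim = cosine_similarity(text_emb, image_emb)
retrieved = sim.argmax(axis=1)
print(retrieved)  # → [0 1]
```

Image-to-text retrieval is the transpose of the same similarity matrix (`sim.T.argmax(axis=1)`), which is why a single joint embedding space serves both query directions.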

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 21–30 of 188 papers

Title | Status | Hype
Negative Pre-aware for Noisy Cross-modal Matching | Code | 1
Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding | Code | 1
Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models | Code | 1
MMoE: Enhancing Multimodal Models with Mixtures of Multimodal Interaction Experts | Code | 1
Cross-modal Active Complementary Learning with Self-refining Correspondence | Code | 1
Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval | Code | 1
Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval | Code | 1
Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination | Code | 1
Advancing Visual Grounding with Scene Knowledge: Benchmark and Method | Code | 1
UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding | Code | 1

No leaderboard results yet.