SOTAVerified

Image-Text Matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
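The retrieval step described above typically works by embedding images and texts into a shared vector space and ranking by similarity. The following is a minimal sketch of that idea using cosine similarity over placeholder embeddings; in practice the embeddings would come from a trained vision-language encoder, and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

def cosine_similarity_matrix(image_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between image and text embeddings."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return img @ txt.T

def retrieve(query_emb: np.ndarray, gallery_embs: np.ndarray, k: int = 3) -> list:
    """Return indices of the top-k gallery items most similar to the query."""
    sims = cosine_similarity_matrix(query_emb[None, :], gallery_embs)[0]
    return np.argsort(-sims)[:k].tolist()

# Placeholder embeddings standing in for a real vision-language encoder's output
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(5, 64))   # 5 images, 64-dim embeddings
text_embs = rng.normal(size=(8, 64))    # 8 captions, same embedding space

# Text-to-image retrieval: rank images for caption 0
top_images = retrieve(text_embs[0], image_embs, k=3)
print(top_images)
```

Image-to-text retrieval is the symmetric case: query with an image embedding and rank the caption embeddings instead.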

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 31–40 of 188 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark | Code | 1 |
| Revisiting the Role of Language Priors in Vision-Language Models | Code | 1 |
| Improved Probabilistic Image-Text Representations | Code | 1 |
| Are Diffusion Models Vision-And-Language Reasoners? | Code | 1 |
| Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners | Code | 1 |
| LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation | Code | 1 |
| Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations | Code | 1 |
| Multimodal Image-Text Matching Improves Retrieval-based Chest X-Ray Report Generation | Code | 1 |
| Plug-and-Play Regulators for Image-Text Matching | Code | 1 |
| BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency | Code | 1 |
Page 4 of 19

No leaderboard results yet.