SOTAVerified

Image-Text Matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
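The heterogeneity gap is commonly bridged by encoding both modalities into a shared embedding space and ranking candidates by similarity. A minimal sketch of this retrieval step, assuming NumPy and toy random vectors standing in for the outputs of real image and text encoders (e.g., a CLIP-style dual encoder):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Scale each vector to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve(query_emb, gallery_embs, top_k=3):
    """Rank gallery items by cosine similarity to a query embedding."""
    sims = l2_normalize(gallery_embs) @ l2_normalize(query_emb)
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# Toy example: four "image" embeddings and one "text" query embedding.
# In practice these would come from trained image/text encoders.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(4, 8))
text_query = image_embs[2] + 0.05 * rng.normal(size=8)  # query close to image 2

ranking, scores = retrieve(text_query, image_embs)
print(ranking[0])  # index of the best-matching image
```

The same function handles the converse direction (image query against a gallery of caption embeddings), since both modalities live in one space after normalization.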

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 71–80 of 188 papers

| Title | Status | Hype |
| --- | --- | --- |
| UGNCL: Uncertainty-Guided Noisy Correspondence Learning for Efficient Cross-Modal Matching | Code | 1 |
| Graph Structured Network for Image-Text Matching | Code | 1 |
| Cross-modal Active Complementary Learning with Self-refining Correspondence | Code | 1 |
| GRIT-VLP: Grouped Mini-batch Sampling for Efficient Vision and Language Pre-training | Code | 1 |
| Visual Semantic Reasoning for Image-Text Matching | Code | 1 |
| Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1 |
| AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks | Code | 1 |
| Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model | Code | 0 |
| Dual Attention Networks for Multimodal Reasoning and Matching | Code | 0 |
| Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies? | Code | 0 |
Page 8 of 19

No leaderboard results yet.