SOTAVerified

Image-text matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
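The retrieval setup described above can be sketched in a few lines: both modalities are mapped into a shared embedding space, and retrieval ranks candidates by similarity to the query. This is a minimal illustration, not any particular paper's method; the embedding vectors below are hand-made toys standing in for the output of a real vision-language encoder (e.g. a CLIP-style model).

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy shared-embedding space. Assumption: in practice these vectors come
# from trained image/text encoders; here they are fabricated for illustration.
image_embeddings = {
    "dog_photo.jpg": [0.9, 0.1, 0.0],
    "cat_photo.jpg": [0.1, 0.9, 0.0],
    "car_photo.jpg": [0.0, 0.1, 0.9],
}

def retrieve_image(text_embedding, images):
    # Text-to-image retrieval: rank all candidate images by cosine
    # similarity to the text query and return the best match.
    ranked = sorted(images.items(),
                    key=lambda kv: cosine(text_embedding, kv[1]),
                    reverse=True)
    return ranked[0][0]

query = [0.8, 0.2, 0.1]  # pretend embedding of "a photo of a dog"
print(retrieve_image(query, image_embeddings))  # -> dog_photo.jpg
```

Image-to-text retrieval is the symmetric case: the same ranking function is applied with an image embedding as the query and captions as the candidates.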

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 151–160 of 188 papers

Title | Status | Hype
Multi-Modal Representation Learning with Text-Driven Soft Masks | | 0
MURAL: Multimodal, Multitask Representations Across Languages | | 0
MURAL: Multimodal, Multitask Retrieval Across Languages | | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | | 0
Object-centric Binding in Contrastive Language-Image Pretraining | | 0
OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization | | 0
ParNet: Position-aware Aggregated Relation Network for Image-Text matching | | 0
Probing the Role of Positional Information in Vision-Language Models | | 0
Refined Vision-Language Modeling for Fine-grained Multi-modal Pre-training | | 0
Page 16 of 19

No leaderboard results yet.