
Image-text matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
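The retrieval step described above is commonly implemented by embedding images and texts into a shared vector space (e.g., with a CLIP-style dual encoder) and ranking gallery items by cosine similarity to the query embedding. The sketch below shows only that ranking step; the `retrieve` helper and the toy embeddings are illustrative assumptions, not part of any specific system, and in practice the vectors would come from a trained vision-language encoder.

```python
import numpy as np

def retrieve(query_emb: np.ndarray, gallery_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the top-k gallery items by cosine similarity.

    query_emb:    (d,) embedding of the query (text or image).
    gallery_embs: (n, d) embeddings of the opposite modality.
    """
    # Normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                       # (n,) cosine similarities
    return np.argsort(-sims)[:k]       # indices sorted by decreasing similarity

# Toy example: the query embedding is closest to gallery item 1.
query = np.array([1.0, 0.0])
gallery = np.array([[0.0, 1.0],    # dissimilar
                    [1.0, 0.0],    # identical direction
                    [0.7, 0.7]])   # partially similar
print(retrieve(query, gallery, k=2))
```

The same function covers both directions of the task: text-to-image retrieval uses a text-query embedding against image-gallery embeddings, and image-to-text retrieval simply swaps the roles.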

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing papers 151–188 of 188

Multi-Modal Representation Learning with Text-Driven Soft Masks
MURAL: Multimodal, Multitask Representations Across Languages
MURAL: Multimodal, Multitask Retrieval Across Languages
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training
Object-centric Binding in Contrastive Language-Image Pretraining
OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization
ParNet: Position-aware Aggregated Relation Network for Image-Text matching
Probing the Role of Positional Information in Vision-Language Models
Refined Vision-Language Modeling for Fine-grained Multi-modal Pre-training
RETTA: Retrieval-Enhanced Test-Time Adaptation for Zero-Shot Video Captioning
Scene Text Recognition with Image-Text Matching-guided Dictionary
Selectively Hard Negative Mining for Alleviating Gradient Vanishing in Image-Text Matching
Step-Wise Hierarchical Alignment Network for Image-Text Matching
SyncMask: Synchronized Attentional Masking for Fashion-centric Vision-Language Pretraining
TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP
Towards Deconfounded Image-Text Matching with Causal Inference
Towards Efficient Cross-Modal Visual Textual Retrieval using Transformer-Encoder Deep Features
Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models
Two-stream Hierarchical Similarity Reasoning for Image-text Matching
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training
UFO: A UniFied TransfOrmer for Vision-Language Representation Learning
Dynamic Visual Semantic Sub-Embeddings and Fast Re-Ranking
Uncertainty-based Cross-Modal Retrieval with Probabilistic Representations
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training
Learning Visual Relation Priors for Image-Text Matching and Image Captioning with Neural Scene Graph Generators
Uniformly Distributed Category Prototype-Guided Vision-Language Framework for Long-Tail Recognition
Uniform Masking Prevails in Vision-Language Pretraining
UNITER: Learning UNiversal Image-TExt Representations
Unpaired Referring Expression Grounding via Bidirectional Cross-Modal Matching
UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance
ViLTA: Enhancing Vision-Language Pre-training through Textual Augmentation
ViUniT: Visual Unit Tests for More Robust Visual Programming
VL-Match: Enhancing Vision-Language Pretraining with Token-Level and Instance-Level Matching
VLM-HOI: Vision Language Models for Interpretable Human-Object Interaction Analysis
VL-NMS: Breaking Proposal Bottlenecks in Two-Stage Visual-Language Matching
Contrastive Cross-Modal Pre-Training: A General Strategy for Small Sample Medical Imaging
Weakly Supervised Referring Image Segmentation with Intra-Chunk and Inter-Chunk Consistency
Page 4 of 4

No leaderboard results yet.