
Image-Text Matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
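The retrieval step described above can be sketched with a toy example: given image and caption embeddings that already live in a shared space (as produced by a CLIP-style dual encoder), matching reduces to a nearest-neighbor search under cosine similarity. The embeddings below are hypothetical placeholders, not outputs of any real model; a minimal numpy-only sketch:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale embedding vectors to unit length so dot product = cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve(query_emb, gallery_embs, k=1):
    """Return indices of the k gallery items most similar to the query,
    ranked by cosine similarity, best match first."""
    sims = l2_normalize(gallery_embs) @ l2_normalize(query_emb)
    return np.argsort(-sims)[:k]

# Hypothetical joint-embedding space: rows stand in for image embeddings,
# the query for a caption embedding from the same (assumed) encoder.
image_embs = np.array([
    [0.9, 0.1, 0.0],   # image 0: e.g. "a dog on grass"
    [0.0, 1.0, 0.1],   # image 1: e.g. "a red car"
    [0.1, 0.0, 0.9],   # image 2: e.g. "a sunset over the sea"
])
text_emb = np.array([0.05, 0.02, 0.95])  # caption query: "sunset at the beach"

print(retrieve(text_emb, image_embs, k=2))  # → [2 1]: image 2 is the best match
```

The symmetric direction (retrieving captions for an image query) uses the same function with the roles of gallery and query swapped; real systems differ mainly in how the embeddings are learned, not in this search step.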

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 1–25 of 188 papers

Title | Status | Hype
Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models | Code | 1
TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP | — | 0
Descriptive Image-Text Matching with Graded Contextual Similarity | — | 0
Compositional Image-Text Matching and Retrieval by Grounding Entities | Code | 0
Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | — | 0
Dependency Structure Augmented Contextual Scoping Framework for Multimodal Aspect-Based Sentiment Analysis | — | 0
Aligning Information Capacity Between Vision and Language via Dense-to-Sparse Feature Distillation for Image-Text Matching | Code | 2
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP | Code | 1
IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Primary Word Emphasis | Code | 1
MedUnifier: Unifying Vision-and-Language Pre-training on Medical Data with Vision Generation Task using Discrete Visual Representations | — | 0
CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object Representation | Code | 1
ReCon: Enhancing True Correspondence Discrimination through Relation Consistency for Robust Noisy Correspondence Learning | Code | 1
Object-centric Binding in Contrastive Language-Image Pretraining | — | 0
MASS: Overcoming Language Bias in Image-Text Matching | — | 0
FiLo++: Zero-/Few-Shot Anomaly Detection by Fused Fine-Grained Descriptions and Deformable Localization | Code | 2
Learning Textual Prompts for Open-World Semi-Supervised Learning | — | 0
Multi-Head Attention Driven Dynamic Visual-Semantic Embedding for Enhanced Image-Text Matching | — | 0
A Concept-Centric Approach to Multi-Modality Learning | — | 0
ViUniT: Visual Unit Tests for More Robust Visual Programming | — | 0
Automatic Prompt Generation and Grounding Object Detection for Zero-Shot Image Anomaly Detection | — | 0
VLM-HOI: Vision Language Models for Interpretable Human-Object Interaction Analysis | — | 0
EntityCLIP: Entity-Centric Image-Text Matching via Multimodal Attentive Contrastive Learning | — | 0
Bridging the Modality Gap: Dimension Information Alignment and Sparse Spatial Constraint for Image-Text Matching | — | 0
DARE: Diverse Visual Question Answering with Robustness Evaluation | — | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | — | 0
Page 1 of 8

No leaderboard results yet.