SOTAVerified

Image-text matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
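The retrieval step described above can be sketched with a toy example, assuming images and captions have already been encoded into a shared embedding space by some vision-language model (the encoder itself is outside this sketch; the file names and embedding vectors below are illustrative placeholders, not real model outputs):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_emb, gallery):
    """Rank gallery items (id -> embedding) by similarity to the query.

    Works in both directions: a text-query embedding ranked against
    image embeddings, or an image-query embedding against captions.
    """
    return sorted(gallery, key=lambda k: cosine(query_emb, gallery[k]),
                  reverse=True)

# Placeholder image embeddings (a real system would produce these with
# a jointly trained image encoder, e.g. a CLIP-style model).
image_embs = {
    "dog.jpg":  [0.9, 0.1, 0.0],
    "city.jpg": [0.1, 0.8, 0.3],
}
# Placeholder embedding of the query "a dog playing in the park".
text_emb = [0.85, 0.15, 0.05]

print(retrieve(text_emb, image_embs))
```

Because both modalities live in one space, the same ranking function serves text-to-image and image-to-text retrieval; bridging the heterogeneity gap is precisely the job of the shared encoder that produces these embeddings.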

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 1–50 of 188 papers

Title | Code | Hype
Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models | Code | 1
TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP | — | 0
Descriptive Image-Text Matching with Graded Contextual Similarity | — | 0
Compositional Image-Text Matching and Retrieval by Grounding Entities | Code | 0
Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | — | 0
Dependency Structure Augmented Contextual Scoping Framework for Multimodal Aspect-Based Sentiment Analysis | — | 0
Aligning Information Capacity Between Vision and Language via Dense-to-Sparse Feature Distillation for Image-Text Matching | Code | 2
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP | Code | 1
MedUnifier: Unifying Vision-and-Language Pre-training on Medical Data with Vision Generation Task using Discrete Visual Representations | — | 0
IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Primary Word Emphasis | Code | 1
ReCon: Enhancing True Correspondence Discrimination through Relation Consistency for Robust Noisy Correspondence Learning | Code | 1
CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object Representation | Code | 1
Object-centric Binding in Contrastive Language-Image Pretraining | — | 0
MASS: Overcoming Language Bias in Image-Text Matching | — | 0
FiLo++: Zero-/Few-Shot Anomaly Detection by Fused Fine-Grained Descriptions and Deformable Localization | Code | 2
Learning Textual Prompts for Open-World Semi-Supervised Learning | — | 0
Multi-Head Attention Driven Dynamic Visual-Semantic Embedding for Enhanced Image-Text Matching | — | 0
A Concept-Centric Approach to Multi-Modality Learning | — | 0
ViUniT: Visual Unit Tests for More Robust Visual Programming | — | 0
Automatic Prompt Generation and Grounding Object Detection for Zero-Shot Image Anomaly Detection | — | 0
VLM-HOI: Vision Language Models for Interpretable Human-Object Interaction Analysis | — | 0
EntityCLIP: Entity-Centric Image-Text Matching via Multimodal Attentive Contrastive Learning | — | 0
Bridging the Modality Gap: Dimension Information Alignment and Sparse Spatial Constraint for Image-Text Matching | — | 0
DARE: Diverse Visual Question Answering with Robustness Evaluation | — | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | — | 0
Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0
Towards Deconfounded Image-Text Matching with Causal Inference | — | 0
Dynamic and Compressive Adaptation of Transformers From Images to Videos | — | 0
Image-text matching for large-scale book collections | Code | 1
UGNCL: Uncertainty-Guided Noisy Correspondence Learning for Efficient Cross-Modal Matching | Code | 1
Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model | Code | 0
Generative Visual Instruction Tuning | Code | 0
Composing Object Relations and Attributes for Image-Text Matching | Code | 1
Advanced Multimodal Deep Learning Architecture for Image-Text Matching | — | 0
Hire: Hybrid-modal Interaction with Multiple Relational Enhancements for Image-Text Matching | — | 0
DEMO: A Statistical Perspective for Efficient Image-Text Matching | — | 0
CLIP-Powered TASS: Target-Aware Single-Stream Network for Audio-Visual Question Answering | — | 0
RETTA: Retrieval-Enhanced Test-Time Adaptation for Zero-Shot Video Captioning | — | 0
Breaking Through the Noisy Correspondence: A Robust Model for Image-Text Matching | — | 0
Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching | Code | 1
SyncMask: Synchronized Attentional Masking for Fashion-centric Vision-Language Pretraining | — | 0
Constructing Multilingual Visual-Text Datasets Revealing Visual Multilingual Ability of Vision Language Models | — | 0
FSMR: A Feature Swapping Multi-modal Reasoning Approach with Joint Textual and Visual Clues | — | 0
RadCLIP: Enhancing Radiologic Image Analysis through Contrastive Language-Image Pre-training | Code | 1
MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets | Code | 0
Image-Text Matching with Multi-View Attention | — | 0
ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation | Code | 1
MouSi: Poly-Visual-Expert Vision-Language Models | Code | 2
Beyond Image-Text Matching: Verb Understanding in Multimodal Transformers Using Guided Masking | Code | 0
Enhancing Image-Text Matching with Adaptive Feature Aggregation | Code | 0
Page 1 of 4

No leaderboard results yet.