SOTAVerified

Image-text matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
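Most matching approaches embed both modalities into a shared vector space and rank candidates by similarity. As a minimal sketch of that retrieval step (the embeddings below are toy, hypothetical vectors standing in for the output of real image and text encoders):

```python
import numpy as np

def retrieve(query_emb, gallery_embs):
    """Rank gallery items by cosine similarity to a query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity per gallery item
    return np.argsort(-scores)          # indices, most similar first

# Toy 2-D embeddings in a shared space: 3 "images" and 1 "caption".
image_embs = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [0.7, 0.7]])
text_emb = np.array([0.9, 0.1])         # caption semantically closest to image 0

ranking = retrieve(text_emb, image_embs)
print(ranking)                          # image 0 should rank first
```

The same function covers both retrieval directions: swap which modality supplies the query and which supplies the gallery.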

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 76–100 of 188 papers

Title | Status | Hype
Stacked Cross Attention for Image-Text Matching | Code | 1
AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks | Code | 1
TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP | - | 0
Descriptive Image-Text Matching with Graded Contextual Similarity | - | 0
Compositional Image-Text Matching and Retrieval by Grounding Entities | Code | 0
Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | - | 0
Dependency Structure Augmented Contextual Scoping Framework for Multimodal Aspect-Based Sentiment Analysis | - | 0
MedUnifier: Unifying Vision-and-Language Pre-training on Medical Data with Vision Generation Task using Discrete Visual Representations | - | 0
Object-centric Binding in Contrastive Language-Image Pretraining | - | 0
MASS: Overcoming Language Bias in Image-Text Matching | - | 0
Learning Textual Prompts for Open-World Semi-Supervised Learning | - | 0
Multi-Head Attention Driven Dynamic Visual-Semantic Embedding for Enhanced Image-Text Matching | - | 0
A Concept-Centric Approach to Multi-Modality Learning | - | 0
ViUniT: Visual Unit Tests for More Robust Visual Programming | - | 0
Automatic Prompt Generation and Grounding Object Detection for Zero-Shot Image Anomaly Detection | - | 0
VLM-HOI: Vision Language Models for Interpretable Human-Object Interaction Analysis | - | 0
EntityCLIP: Entity-Centric Image-Text Matching via Multimodal Attentive Contrastive Learning | - | 0
Bridging the Modality Gap: Dimension Information Alignment and Sparse Spatial Constraint for Image-Text Matching | - | 0
DARE: Diverse Visual Question Answering with Robustness Evaluation | - | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | - | 0
Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0
Towards Deconfounded Image-Text Matching with Causal Inference | - | 0
Dynamic and Compressive Adaptation of Transformers From Images to Videos | - | 0
Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model | Code | 0
Generative Visual Instruction Tuning | Code | 0
Page 4 of 8

No leaderboard results yet.