SOTAVerified

Image-Text Matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
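In the common embedding-based formulation (popularized by models such as CLIP), images and texts are encoded into a shared vector space and retrieval reduces to ranking gallery items by similarity to the query embedding. The sketch below illustrates that ranking step only, using toy hand-made vectors in place of real encoder outputs; the embeddings and function names are illustrative assumptions, not part of any specific model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_emb, gallery_embs):
    """Rank gallery indices by similarity to the query, best match first."""
    sims = [cosine(query_emb, g) for g in gallery_embs]
    return sorted(range(len(gallery_embs)), key=lambda i: -sims[i])

# Toy 4-dim embeddings: one text query against three "image" embeddings.
text_query = [1.0, 0.0, 1.0, 0.0]
image_gallery = [
    [0.9, 0.1, 1.1, 0.0],  # close match
    [0.0, 1.0, 0.0, 1.0],  # orthogonal, no overlap
    [1.0, 0.0, 0.0, 0.0],  # partial match
]
ranking = retrieve(text_query, image_gallery)
print(ranking)  # → [0, 2, 1]
```

The same ranking works in the other retrieval direction (image query against a text gallery), since both modalities live in the same embedding space; real systems differ mainly in how the encoders that produce these vectors are trained to close the heterogeneity gap.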

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 1–25 of 188 papers

Title | Status | Hype
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | Code | 5
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks | Code | 2
MouSi: Poly-Visual-Expert Vision-Language Models | Code | 2
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models | Code | 2
Aligning Information Capacity Between Vision and Language via Dense-to-Sparse Feature Distillation for Image-Text Matching | Code | 2
Language Models Can See: Plugging Visual Controls in Text Generation | Code | 2
FiLo++: Zero-/Few-Shot Anomaly Detection by Fused Fine-Grained Descriptions and Deformable Localization | Code | 2
VinVL: Revisiting Visual Representations in Vision-Language Models | Code | 2
Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval | Code | 2
Deep Multimodal Neural Architecture Search | Code | 1
A Deep Local and Global Scene-Graph Matching for Image-Text Retrieval | Code | 1
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | Code | 1
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP | Code | 1
A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval | Code | 1
DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting | Code | 1
Adaptive Offline Quintuplet Loss for Image-Text Matching | Code | 1
Advancing Visual Grounding with Scene Knowledge: Benchmark and Method | Code | 1
Cross-modal Active Complementary Learning with Self-refining Correspondence | Code | 1
Declaration-based Prompt Tuning for Visual Question Answering | Code | 1
Consensus-Aware Visual-Semantic Embedding for Image-Text Matching | Code | 1
AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks | Code | 1
BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency | Code | 1
ComCLIP: Training-Free Compositional Image and Text Matching | Code | 1
BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding | Code | 1
CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object Representation | Code | 1

No leaderboard results yet.