SOTAVerified

Image-Text Matching

Image-Text Matching is a subtask within Cross-Modal Retrieval (CMR) that involves establishing associations between images and corresponding textual descriptions. The goal is to retrieve an image given a textual query or, conversely, retrieve a textual description given an image query. This task is challenging due to the heterogeneity gap between image and text data representations. Image-text matching is used in applications such as content-based image search, visual question answering, and multimodal summarization.
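The retrieval step described above is typically done by embedding images and texts into a shared vector space and ranking candidates by cosine similarity. The following is a minimal sketch of that ranking logic; the random embeddings are placeholders standing in for the output of a real vision-language encoder such as CLIP, and all array sizes are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(texts, images):
    """Pairwise cosine similarity between text and image embedding matrices."""
    texts = texts / np.linalg.norm(texts, axis=1, keepdims=True)
    images = images / np.linalg.norm(images, axis=1, keepdims=True)
    return texts @ images.T  # shape: (num_texts, num_images)

rng = np.random.default_rng(0)
# Placeholder embeddings in a shared 512-d space (a real encoder would produce these).
image_embeddings = rng.normal(size=(4, 512))  # 4 candidate images
text_embeddings = rng.normal(size=(3, 512))   # 3 candidate captions

sims = cosine_similarity(text_embeddings, image_embeddings)
best_image_per_text = sims.argmax(axis=1)  # text-to-image retrieval
best_text_per_image = sims.argmax(axis=0)  # image-to-text retrieval
print(sims.shape)  # (3, 4)
```

In practice the two `argmax` calls give the top-1 result for each retrieval direction; evaluation metrics such as Recall@K generalize this by checking whether the ground-truth match appears among the K highest-scoring candidates.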

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

Papers

Showing 26–50 of 188 papers

Title | Status | Hype
GRIT-VLP: Grouped Mini-batch Sampling for Efficient Vision and Language Pre-training | Code | 1
MedICaT: A Dataset of Medical Images, Captions, and Textual References | Code | 1
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP | Code | 1
CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object Representation | Code | 1
ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation | Code | 1
Learning Dual Semantic Relations with Graph Attention for Image-Text Matching | Code | 1
ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO | Code | 1
Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1
Consensus-Aware Visual-Semantic Embedding for Image-Text Matching | Code | 1
Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners | Code | 1
Cross-modal Active Complementary Learning with Self-refining Correspondence | Code | 1
IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Primary Word Emphasis | Code | 1
AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks | Code | 1
Composing Object Relations and Attributes for Image-Text Matching | Code | 1
Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models | Code | 1
LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation | Code | 1
Declaration-based Prompt Tuning for Visual Question Answering | Code | 1
ComCLIP: Training-Free Compositional Image and Text Matching | Code | 1
Adaptive Offline Quintuplet Loss for Image-Text Matching | Code | 1
Deep Multimodal Neural Architecture Search | Code | 1
BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency | Code | 1
Are Diffusion Models Vision-And-Language Reasoners? | Code | 1
DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting | Code | 1
BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding | Code | 1
Learning with Noisy Correspondence for Cross-modal Matching | Code | 1
Page 2 of 8

Leaderboard

No leaderboard results yet.