SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items of one modality given a query from another, across modalities such as image, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, so they cannot be compared directly. To bridge this gap, most CMR methods learn a shared latent embedding space into which concepts from every modality are projected, so that cross-modal similarity can be measured with a simple distance metric.
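As a minimal sketch of the shared-space idea (the embeddings below are random stand-ins for real encoder outputs, not any particular model's API): once an image encoder and a text encoder project into the same space, retrieval reduces to a nearest-neighbor search under a similarity measure such as cosine similarity.

```python
import numpy as np

# Hypothetical setup: 5 gallery images and 1 text query, all already
# projected into a shared 512-d embedding space by their encoders.
rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(5, 512))
text_embedding = rng.normal(size=(512,))

def cosine_similarity(gallery, query):
    """Cosine similarity between each row of `gallery` and `query`."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return g @ q

scores = cosine_similarity(image_embeddings, text_embedding)
best = int(np.argmax(scores))  # index of the retrieved image
```

Real systems replace the brute-force `argmax` with an approximate nearest-neighbor index, but the retrieval logic is the same.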

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 1–50 of 522 papers

| Title | Status | Hype |
| --- | --- | --- |
| ImageBind: One Embedding Space To Bind Them All | Code | 5 |
| Multimodal Whole Slide Foundation Model for Pathology | Code | 4 |
| AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities | Code | 4 |
| AToMiC: An Image/Text Retrieval Test Collection to Support Multimedia Content Creation | Code | 3 |
| Merlin: A Vision Language Foundation Model for 3D Computed Tomography | Code | 3 |
| Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment | Code | 2 |
| VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset | Code | 2 |
| SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing | Code | 2 |
| Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | Code | 2 |
| X^2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks | Code | 2 |
| Composed Multi-modal Retrieval: A Survey of Approaches and Applications | Code | 2 |
| Derm1M: A Million-scale Vision-Language Dataset Aligned with Clinical Ontology Knowledge for Dermatology | Code | 2 |
| PoseScript: Linking 3D Human Poses and Natural Language | Code | 2 |
| EyeCLIP: A visual-language foundation model for multi-modal ophthalmic image analysis | Code | 2 |
| Comprehending and Ordering Semantics for Image Captioning | Code | 2 |
| Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision | Code | 2 |
| Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation | Code | 2 |
| Semantic-Conditional Diffusion Networks for Image Captioning | Code | 2 |
| Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation | Code | 2 |
| Vision-Language Pre-Training with Triple Contrastive Learning | Code | 2 |
| Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2 |
| Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval | Code | 2 |
| MolFM: A Multimodal Molecular Foundation Model | Code | 2 |
| Large Language Models are In-Context Molecule Learners | Code | 2 |
| LeanVec: Searching vectors faster by making them fit | Code | 2 |
| VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset | Code | 2 |
| RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | Code | 2 |
| RemoteCLIP: A Vision Language Foundation Model for Remote Sensing | Code | 2 |
| Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language | Code | 2 |
| Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks | Code | 2 |
| Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks | Code | 2 |
| Cross-modal Retrieval for Knowledge-based Visual Question Answering | Code | 1 |
| A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language | Code | 1 |
| M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework | Code | 1 |
| A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval | Code | 1 |
| Cross-Modal Retrieval for Motion and Text via DopTriple Loss | Code | 1 |
| FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks | Code | 1 |
| Adaptive label-aware graph convolutional networks for cross-modal retrieval | Code | 1 |
| Cross-Modal Fusion Distillation for Fine-Grained Sketch-Based Image Retrieval | Code | 1 |
| COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning | Code | 1 |
| BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1 |
| Cross-Lingual Cross-Modal Retrieval with Noise-Robust Learning | Code | 1 |
| Cross-Modal Retrieval: A Systematic Review of Methods and Future Directions | Code | 1 |
| FashionBERT: Text and Image Matching with Adaptive Loss for Cross-modal Retrieval | Code | 1 |
| Dual adversarial graph neural networks for multi-label cross-modal retrieval | Code | 1 |
| Dynamic Modality Interaction Modeling for Image-Text Retrieval | Code | 1 |
| Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval | Code | 1 |
| Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | Code | 1 |
| Aligning Sight and Sound: Advanced Sound Source Localization Through Audio-Visual Alignment | Code | 1 |
| A Survey on Interpretable Cross-modal Reasoning | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | — | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | — | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | — | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | — | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | — | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | — | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | — | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | — | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | — | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | — | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | — | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | — | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | — | Unverified |
| 5 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | — | Unverified |
| 6 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | — | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | — | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | — | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | — | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | — | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | — | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | — | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | — | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | — | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | — | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | — | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | — | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | — | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | — | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | — | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | — | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | — | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | — | Unverified |
| 6 | PIR | Mean Recall | 24.46 | — | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | — | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | — | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | — | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | — | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | — | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | — | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | — | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | — | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | — | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | — | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | — | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | — | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | — | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | — | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | — | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | — | Unverified |
| 5 | DSOKR | Hits@1 | 51 | — | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | — | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | — | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | — | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NAPReg | Image-to-text R@1 | 81.9 | — | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-18 | Median Rank | 565 | — | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Dual Path | Text-to-image MedR | 2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NAPReg | Image-to-text R@1 | 56.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NAPReg | Text-to-image R@1 | 43 | — | Unverified |
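The leaderboard metrics above (R@1, Hits@1, Mean Recall, Median Rank) can all be derived from a query-by-gallery similarity matrix. A minimal sketch, assuming ground-truth pairs lie on the diagonal; the matrix below is toy data, not drawn from any of the listed benchmarks:

```python
import numpy as np

def retrieval_metrics(similarity):
    """Recall@1 (%) and Median Rank for a square similarity matrix where
    entry (i, j) scores query i against gallery item j, and the ground-truth
    match for query i is item i."""
    n = similarity.shape[0]
    # Sort each row by descending similarity, then find where the
    # ground-truth item landed (rank 1 = retrieved first).
    order = np.argsort(-similarity, axis=1)
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])
    recall_at_1 = float(np.mean(ranks == 1)) * 100
    median_rank = float(np.median(ranks))
    return recall_at_1, median_rank

# Toy check: a near-identity similarity matrix ranks every ground truth first.
sim = np.eye(4) + 0.01 * np.random.default_rng(1).normal(size=(4, 4))
r1, medr = retrieval_metrics(sim)  # r1 = 100.0, medr = 1.0
```

Recall@5/@10 follow the same pattern with `ranks <= k`, and Mean Recall is typically the average of R@1, R@5, and R@10 over both retrieval directions.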