SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items across different modalities, such as image, text, video, and audio. The core challenge of CMR is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To address this, most CMR methods learn a shared latent embedding space into which concepts from different modalities are projected, so that their similarity can be measured with a common distance metric.
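As a concrete illustration of the shared-space idea, here is a minimal two-tower sketch in PyTorch: modality-specific features are mapped into one space by learned linear projection heads, L2-normalized, and candidates are ranked by cosine similarity. The encoder dimensions, layer names, and random features are illustrative assumptions, not taken from any specific paper listed below.

```python
import torch
import torch.nn.functional as F

class SharedSpaceProjector(torch.nn.Module):
    """Projects modality-specific features into a common d-dimensional space."""
    def __init__(self, image_dim=2048, text_dim=768, shared_dim=512):
        super().__init__()
        self.image_proj = torch.nn.Linear(image_dim, shared_dim)
        self.text_proj = torch.nn.Linear(text_dim, shared_dim)

    def forward(self, image_feats, text_feats):
        # L2-normalize so cosine similarity reduces to a dot product.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt

# Retrieval: rank all texts for each image by similarity in the shared space.
model = SharedSpaceProjector()
image_feats = torch.randn(4, 2048)   # e.g. pooled CNN features (placeholder)
text_feats = torch.randn(10, 768)    # e.g. sentence-encoder outputs (placeholder)
img, txt = model(image_feats, text_feats)
sim = img @ txt.T                    # (4, 10) cosine-similarity matrix
ranked = sim.argsort(dim=1, descending=True)  # per-image ranking of all texts
```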

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 201–250 of 522 papers

| Title | Status | Hype |
| --- | --- | --- |
| Leveraging Chemistry Foundation Models to Facilitate Structure Focused Retrieval Augmented Generation in Multi-Agent Workflows for Catalyst and Materials Design | | 0 |
| Cross-Modal Discrete Representation Learning | | 0 |
| BagFormer: Better Cross-Modal Retrieval via bag-wise interaction | | 0 |
| Cross-modal Deep Metric Learning with Multi-task Regularization | | 0 |
| FaD-VLP: Fashion Vision-and-Language Pre-training towards Unified Retrieval and Captioning | | 0 |
| Cross-Modal Coordination Across a Diverse Set of Input Modalities | | 0 |
| Extending Cross-Modal Retrieval with Interactive Learning to Improve Image Retrieval Performance in Forensics | | 0 |
| Cross-modal Common Representation Learning by Hybrid Transfer Network | | 0 |
| All in One Framework for Multimodal Re-identification in the Wild | | 0 |
| Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings | | 0 |
| Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays | | 0 |
| Fine-grained Prototypical Voting with Heterogeneous Mixup for Semi-supervised 2D-3D Cross-modal Retrieval | | 0 |
| Exploring Optimal Transport-Based Multi-Grained Alignments for Text-Molecule Retrieval | | 0 |
| Cross-Modal Center Loss for 3D Cross-Modal Retrieval | | 0 |
| Cross-modal Center Loss | | 0 |
| FLEX-CLIP: Feature-Level GEneration Network Enhanced CLIP for X-shot Cross-modal Retrieval | | 0 |
| Exploiting Transformation Invariance and Equivariance for Self-supervised Sound Localisation | | 0 |
| FOLIAGE: Towards Physical Intelligence World Models Via Unbounded Surface Evolution | | 0 |
| Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey | | 0 |
| Everything is a Video: Unifying Modalities through Next-Frame Prediction | | 0 |
| Fusion-supervised Deep Cross-modal Hashing | | 0 |
| Cross-Modal and Multimodal Data Analysis Based on Functional Mapping of Spectral Descriptors and Manifold Regularization | | 0 |
| Cross-Modal 3D Representation with Multi-View Images and Point Clouds | | 0 |
| A Unified Optimal Transport Framework for Cross-Modal Retrieval with Noisy Labels | | 0 |
| Semi-Supervised Cross-Modal Retrieval with Label Prediction | | 0 |
| Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond | | 0 |
| Learning with Noisy Correspondence | | 0 |
| Global–Local Information Soft-Alignment for Cross-Modal Remote-Sensing Image–Text Retrieval | | 0 |
| Lightweight Contrastive Distilled Hashing for Online Cross-modal Retrieval | | 0 |
| Bridging Information Asymmetry in Text-video Retrieval: A Data-centric Approach | | 0 |
| Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval | | 0 |
| Enhancing medical vision-language contrastive learning via inter-matching relation modelling | | 0 |
| Cross-Modal Retrieval Meets Inference: Improving Zero-Shot Classification with Cross-Modal Retrieval | | 0 |
| HashGAN: Attention-aware Deep Adversarial Hashing for Cross Modal Retrieval | | 0 |
| Cross-Media Scientific Research Achievements Retrieval Based on Deep Language Model | | 0 |
| HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval | | 0 |
| Emphasizing Complementary Samples for Non-literal Cross-modal Retrieval | | 0 |
| Cross-modal Retrieval with Improved Graph Convolution | | 0 |
| EmotionRankCLAP: Bridging Natural Language Speaking Styles and Ordinal Speech Emotion via Rank-N-Contrast | | 0 |
| Attribute-Guided Network for Cross-Modal Zero-Shot Hashing | | 0 |
| Cross-lingual Cross-modal Pretraining for Multimodal Retrieval | | 0 |
| Attention-aware Deep Adversarial Hashing for Cross-Modal Retrieval | | 0 |
| EI-CLIP: Entity-Aware Interventional Contrastive Learning for E-Commerce Cross-Modal Retrieval | | 0 |
| CoVLR: Coordinating Cross-Modal Consistency and Intra-Modal Structure for Vision-Language Retrieval | | 0 |
| Improved Text-Image Matching by Mitigating Visual Semantic Hubs | | 0 |
| Learning Similarity Preserving Binary Codes for Recommender Systems | | 0 |
| Improving Factuality of 3D Brain MRI Report Generation with Paired Image-domain Retrieval and Text-domain Augmentation | | 0 |
| Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval | | 0 |
| Efficient Discrete Supervised Hashing for Large-scale Cross-modal Retrieval | | 0 |
| Coupled CycleGAN: Unsupervised Hashing Network for Cross-Modal Retrieval | | 0 |
Page 5 of 11

Benchmark Results
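The tables below report standard retrieval metrics. R@K ("Recall at K") is the percentage of queries whose ground-truth match appears among the top K retrieved items (higher is better); Mean Recall typically averages R@1, R@5, and R@10 over both retrieval directions; Hits@1 is analogous to R@1; Median Rank (MedR) is the median position of the ground-truth match in the ranked list (lower is better). As a minimal sketch, assuming one ground-truth candidate per query located at the matching index, R@K can be computed from a query-candidate similarity matrix like so:

```python
import torch

def recall_at_k(sim: torch.Tensor, k: int = 1) -> float:
    """R@K (in percent) for a (num_queries, num_candidates) similarity
    matrix, assuming query i's ground-truth candidate has index i."""
    topk = sim.topk(k, dim=1).indices                 # top-k candidate ids per query
    targets = torch.arange(sim.size(0)).unsqueeze(1)  # ground-truth id per query
    hits = (topk == targets).any(dim=1)               # did the target make the top k?
    return 100.0 * hits.float().mean().item()

# Toy example: 5 queries scored against 5 candidates (random placeholder scores).
sim = torch.randn(5, 5)
print(recall_at_k(sim, k=1))
```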

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified |
| 5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified |
| 6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified |
| 6 | PIR | Mean Recall | 24.46 | | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified |
| 5 | DSOKR | Hits@1 | 51 | | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-18 | Median Rank | 565 | | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Dual Path | Text-to-image MedR | 2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NAPReg | Text-to-image R@1 | 43 | | Unverified |