
Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving relevant items across different modalities, such as image, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To bridge this gap, most CMR methods learn a shared latent embedding space into which all modalities are projected, so that cross-modal similarity can be measured with a simple distance metric.
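As a minimal sketch of this idea (the linear projections and dimensions below are illustrative stand-ins for real encoders such as a CNN for images and a transformer for text), each modality is mapped into a shared space, embeddings are L2-normalized, and cosine similarity serves as the distance metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modality-specific projections into a shared 64-d space.
# In practice these would be learned deep encoders, not random matrices.
W_image = rng.standard_normal((2048, 64))  # image features -> shared space
W_text = rng.standard_normal((768, 64))    # text features  -> shared space

def embed(features: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Project features into the shared space and L2-normalize."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy "extracted" features for 3 images and 3 captions.
image_feats = rng.standard_normal((3, 2048))
text_feats = rng.standard_normal((3, 768))

img_emb = embed(image_feats, W_image)
txt_emb = embed(text_feats, W_text)

# Cosine similarity between every (image, text) pair; retrieval simply
# ranks candidates by this score.
sim = img_emb @ txt_emb.T
print(sim.shape)  # (3, 3)
```

Because both embeddings live in the same space and are unit-normalized, a single dot product compares an image against a caption despite their different raw representations.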

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing papers 351–400 of 522

All papers below are listed with a Hype score of 0:

- Data leakage in cross-modal retrieval training: A case study
- Deep Binaries: Encoding Semantic-Rich Cues for Efficient Textual-Visual Cross Retrieval
- Deep Cross-modal Hashing via Margin-dynamic-softmax Loss
- Deep Latent Space Learning for Cross-modal Mapping of Audio and Visual Signals
- Deep Lifelong Cross-modal Hashing
- Deep Manifold Hashing: A Divide-and-Conquer Approach for Semi-Paired Unsupervised Cross-Modal Retrieval
- Deep Multimodal Image-Text Embeddings for Automatic Cross-Media Retrieval
- Deep Robust Multilevel Semantic Cross-Modal Hashing
- Deep Semantic Correlation Learning Based Hashing for Multimedia Cross-Modal Retrieval
- Deep Semantic Multimodal Hashing Network for Scalable Image-Text and Video-Text Retrievals
- Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis
- Deep Unified Multimodal Embeddings for Understanding both Content and Users in Social Media Networks
- Deep Unsupervised Contrastive Hashing for Large-Scale Cross-Modal Text-Image Retrieval in Remote Sensing
- Dense Multimodal Fusion for Hierarchically Joint Representation
- Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering
- Direction-Oriented Visual-semantic Embedding Model for Remote Sensing Image-text Retrieval
- Discriminative Semantic Transitive Consistency for Cross-Modal Learning
- Discriminative Supervised Hashing for Cross-Modal similarity Search
- Discriminative Supervised Subspace Learning for Cross-modal Retrieval
- Disentangled Noisy Correspondence Learning
- Distilling Vision-Language Pretraining for Efficient Cross-Modal Retrieval
- Distribution Aligned Feature Clustering for Zero-Shot Sketch-Based Image Retrieval
- Dividing and Conquering Cross-Modal Recipe Retrieval: from Nearest Neighbours Baselines to SoTA
- Do Cross Modal Systems Leverage Semantic Relationships?
- Dual-view Curricular Optimal Transport for Cross-lingual Cross-modal Retrieval
- Efficient and Versatile Robust Fine-Tuning of Zero-shot Models
- EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling
- Efficient Discrete Supervised Hashing for Large-scale Cross-modal Retrieval
- EI-CLIP: Entity-Aware Interventional Contrastive Learning for E-Commerce Cross-Modal Retrieval
- EmotionRankCLAP: Bridging Natural Language Speaking Styles and Ordinal Speech Emotion via Rank-N-Contrast
- Emphasizing Complementary Samples for Non-literal Cross-modal Retrieval
- Enhancing medical vision-language contrastive learning via inter-matching relation modelling
- ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training
- Everything is a Video: Unifying Modalities through Next-Frame Prediction
- Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
- Exploiting Transformation Invariance and Equivariance for Self-supervised Sound Localisation
- Exploring Optimal Transport-Based Multi-Grained Alignments for Text-Molecule Retrieval
- Extending Cross-Modal Retrieval with Interactive Learning to Improve Image Retrieval Performance in Forensics
- FaD-VLP: Fashion Vision-and-Language Pre-training towards Unified Retrieval and Captioning
- FedNano: Toward Lightweight Federated Tuning for Pretrained Multimodal Large Language Models
- FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity
- Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
- Fine-Grained Instance-Level Sketch-Based Video Retrieval
- Fine-grained Prototypical Voting with Heterogeneous Mixup for Semi-supervised 2D-3D Cross-modal Retrieval
- FineLIP: Extending CLIP's Reach via Fine-Grained Alignment with Longer Text Inputs
- FLEX-CLIP: Feature-Level GEneration Network Enhanced CLIP for X-shot Cross-modal Retrieval
- FOLIAGE: Towards Physical Intelligence World Models Via Unbounded Surface Evolution
- Fusing Physics-Driven Strategies and Cross-Modal Adversarial Learning: Toward Multi-Domain Applications
- Fusion-supervised Deep Cross-modal Hashing
- Generalized Multi-view Embedding for Visual Recognition and Cross-modal Retrieval
Page 8 of 11

Benchmark Results
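The tables below report Recall@1 ("R@1"), Mean Recall, Hits@1, and Median Rank (Medr). As a minimal sketch (function names are illustrative, not from any specific library), Recall@K and Median Rank can be computed from a query-by-candidate similarity matrix in which the ground-truth match for query i is candidate i:

```python
import numpy as np

def _correct_positions(sim: np.ndarray) -> np.ndarray:
    """0-based rank position of the correct candidate for each query,
    assuming the ground-truth match for query i is candidate i."""
    ranking = np.argsort(-sim, axis=1)  # candidates sorted by descending score
    return np.argmax(ranking == np.arange(sim.shape[0])[:, None], axis=1)

def recall_at_k(sim: np.ndarray, k: int = 1) -> float:
    """Fraction of queries whose correct candidate appears in the top K."""
    return float(np.mean(_correct_positions(sim) < k))

def median_rank(sim: np.ndarray) -> float:
    """Median 1-based rank of the correct candidate (lower is better)."""
    return float(np.median(_correct_positions(sim) + 1))

# Toy similarity matrix: query i's true match is candidate i.
sim = np.array([
    [0.9, 0.1, 0.3],   # correct item ranked 1st
    [0.2, 0.4, 0.8],   # correct item ranked 2nd
    [0.1, 0.2, 0.7],   # correct item ranked 1st
])
print(recall_at_k(sim, k=1))  # 2 of 3 queries succeed at K=1
print(median_rank(sim))       # ranks are [1, 2, 1], so the median is 1.0
```

Image-to-text and text-to-image scores come from the same similarity matrix, evaluated row-wise or column-wise respectively.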

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified |
| 5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified |
| 6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified |
| 6 | PIR | Mean Recall | 24.46 | | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified |
| 5 | DSOKR | Hits@1 | 51 | | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-18 | Median Rank | 565 | | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Dual Path | Text-to-image Medr | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NAPReg | Text-to-image R@1 | 43 | | Unverified |