SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items across different modalities, such as images, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To address this, most CMR methods learn a shared latent embedding space into which data from every modality are projected, so that cross-modal similarity can be measured with a common distance metric (e.g., cosine similarity).
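As a minimal sketch of this idea: suppose two hypothetical encoders (in practice, learned networks such as an image CNN/ViT and a text transformer) have already projected paired items into the same d-dimensional space. Retrieval then reduces to ranking one modality's embeddings by cosine similarity to a query embedding from the other modality. The random "embeddings" below are stand-ins for encoder outputs, not part of any method described on this page.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension

# Stand-ins for trained encoder outputs for 3 image-text pairs:
# the text embeddings are a slightly perturbed copy of the image
# embeddings, mimicking a roughly aligned shared space.
image_embeddings = rng.normal(size=(3, d))
text_embeddings = image_embeddings + 0.1 * rng.normal(size=(3, d))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(query, gallery):
    """Rank gallery items by cosine similarity to the query (best first)."""
    sims = l2_normalize(gallery) @ l2_normalize(query)
    return np.argsort(-sims)

# Text-to-image retrieval: query with the text embedding of item 1.
ranking = retrieve(text_embeddings[1], image_embeddings)
print(ranking[0])  # the aligned image (index 1) ranks first
```

The same `retrieve` function works in either direction (image-to-text or text-to-image), which is why leaderboards below report both R@1 variants.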

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 301–350 of 522 papers

Title | Status | Hype
EI-CLIP: Entity-Aware Interventional Contrastive Learning for E-Commerce Cross-Modal Retrieval | — | 0
Cross Modal Retrieval with Querybank Normalisation | Code | 1
Fusion and Orthogonal Projection for Improved Face-Voice Association | Code | 1
CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising | — | 0
Multi-Modal Mutual Information Maximization: A Novel Approach for Unsupervised Deep Cross-Modal Hashing | — | 0
Variational Autoencoder with CCA for Audio-Visual Cross-Modal Retrieval | — | 0
Learning with Noisy Correspondence for Cross-modal Matching | Code | 1
Emotion Embedding Spaces for Matching Music to Stories | Code | 1
Florence: A New Foundation Model for Computer Vision | Code | 1
Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts | Code | 1
SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval | — | 0
The Curious Layperson: Fine-Grained Image Recognition without Expert Labels | Code | 1
An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1
Inflate and Shrink: Enriching and Reducing Interactions for Fast Text-Image Retrieval | — | 0
Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries | Code | 1
MURAL: Multimodal, Multitask Representations Across Languages | — | 0
BiC-Net: Learning Efficient Spatio-Temporal Relation for Text-Video Retrieval | Code | 1
Learning Text-Image Joint Embedding for Efficient Cross-Modal Retrieval with Deep Feature Engineering | Code | 0
Wav2CLIP: Learning Robust Audio Representations From CLIP | Code | 1
Text-Based Person Search with Limited Data | Code | 1
VLDeformer: Vision-Language Decomposed Transformer for Fast Cross-Modal Retrieval | — | 0
Learning Structural Representations for Recipe Generation and Food Retrieval | — | 0
Self-Supervised Modality-Invariant and Modality-Specific Feature Learning for 3D Objects | — | 0
Calibrating Probabilistic Embeddings for Cross-Modal Retrieval | — | 0
MURAL: Multimodal, Multitask Retrieval Across Languages | — | 0
EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling | — | 0
X-modaler: A Versatile and High-performance Codebase for Cross-modal Analytics | Code | 1
Learning Joint Embedding with Modality Alignments for Cross-Modal Retrieval of Recipes and Food Images | — | 0
Adaptive label-aware graph convolutional networks for cross-modal retrieval | Code | 1
Learning TFIDF Enhanced Joint Embedding for Recipe-Image Cross-Modal Retrieval Service | Code | 0
Self-supervised Audiovisual Representation Learning for Remote Sensing Data | Code | 1
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | Code | 1
Dynamic Modality Interaction Modeling for Image-Text Retrieval | Code | 1
Evaluation of Audio-Visual Alignments in Visually Grounded Speech Models | Code | 0
FedCMR: Federated Cross-Modal Retrieval | Code | 1
OPT: Omni-Perception Pre-Trainer for Cross-Modal Understanding and Generation | Code | 0
Graph Pattern Loss based Diversified Attention Network for Cross-Modal Retrieval | — | 0
Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval | Code | 1
Learning Cross-Modal Retrieval With Noisy Labels | Code | 1
Cross-Modal Center Loss for 3D Cross-Modal Retrieval | — | 0
Multi-Modal Relational Graph for Cross-Modal Video Moment Retrieval | — | 0
Cross-Modal Discrete Representation Learning | — | 0
Exploring modality-agnostic representations for music classification | Code | 0
Cross-lingual Cross-modal Pretraining for Multimodal Retrieval | — | 0
Towards Efficient Cross-Modal Visual Textual Retrieval using Transformer-Encoder Deep Features | — | 0
Learning Relation Alignment for Calibrated Cross-modal Retrieval | Code | 1
More Than Just Attention: Improving Cross-Modal Attentions with Contrastive Constraints for Image-Text Matching | — | 0
Dual adversarial graph neural networks for multi-label cross-modal retrieval | Code | 1
Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching | — | 0
FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval | Code | 0
Page 7 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | — | Unverified
2 | VAST | Text-to-image R@1 | 68 | — | Unverified
3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | — | Unverified
4 | BEiT-3 | Text-to-image R@1 | 67.2 | — | Unverified
5 | XFM (base) | Text-to-image R@1 | 67 | — | Unverified
6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | — | Unverified
7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | — | Unverified
8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | — | Unverified
9 | VSE-Gradient | Text-to-image R@1 | 63.6 | — | Unverified
10 | X-VLM (base) | Text-to-image R@1 | 63.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | — | Unverified
2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | — | Unverified
3 | BEiT-3 | Image-to-text R@1 | 98 | — | Unverified
4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | — | Unverified
5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | — | Unverified
6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | — | Unverified
7 | X-VLM (base) | Image-to-text R@1 | 97.1 | — | Unverified
8 | VSE-Gradient | Image-to-text R@1 | 97 | — | Unverified
9 | ALIGN | Image-to-text R@1 | 95.3 | — | Unverified
10 | VAST | Text-to-image R@1 | 91 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | — | Unverified
2 | VLPCook | Image-to-text R@1 | 73.6 | — | Unverified
3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | — | Unverified
4 | T-Food | Image-to-text R@1 | 68.2 | — | Unverified
5 | X-MRS | Image-to-text R@1 | 64 | — | Unverified
6 | H-T | Image-to-text R@1 | 60 | — | Unverified
7 | SCAN | Image-to-text R@1 | 54 | — | Unverified
8 | ACME | Image-to-text R@1 | 51.8 | — | Unverified
9 | VLPCook | Image-to-text R@1 | 45.2 | — | Unverified
10 | AdaMine | Image-to-text R@1 | 39.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | — | Unverified
2 | GeoRSCLIP-FT | Mean Recall | 38.87 | — | Unverified
3 | GLISA | Mean Recall | 37.69 | — | Unverified
4 | RemoteCLIP | Mean Recall | 36.35 | — | Unverified
5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | — | Unverified
6 | PIR | Mean Recall | 24.46 | — | Unverified
7 | DOVE | Mean Recall | 22.72 | — | Unverified
8 | SWAN | Mean Recall | 20.61 | — | Unverified
9 | GaLR | Mean Recall | 18.96 | — | Unverified
10 | AMFMN | Mean Recall | 15.53 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | — | Unverified
2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | — | Unverified
3 | GLISA | Image-to-text R@1 | 32.08 | — | Unverified
4 | RemoteCLIP | Image-to-text R@1 | 28.76 | — | Unverified
5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | — | Unverified
6 | PIR | Image-to-text R@1 | 18.14 | — | Unverified
7 | DOVE | Image-to-text R@1 | 16.81 | — | Unverified
8 | GaLR | Image-to-text R@1 | 14.82 | — | Unverified
9 | SWAN | Image-to-text R@1 | 13.35 | — | Unverified
10 | AMFMN | Image-to-text R@1 | 10.63 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CLASS (ORMA) | Hits@1 | 67.4 | — | Unverified
2 | ORMA | Hits@1 | 66.5 | — | Unverified
3 | Song et al. | Hits@1 | 56.5 | — | Unverified
4 | CLASS (AMAN) | Hits@1 | 51.1 | — | Unverified
5 | DSOKR | Hits@1 | 51 | — | Unverified
6 | AMAN | Hits@1 | 49.4 | — | Unverified
7 | All-Ensemble | Hits@1 | 34.4 | — | Unverified
8 | MLP1 | Hits@1 | 22.4 | — | Unverified
9 | GCN2 | Hits@1 | 22.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 81.9 | — | Unverified
2 | Dual-path CNN | Image-to-text R@1 | 41.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-18 | Median Rank | 565 | — | Unverified
2 | GeoCLAP | Median Rank | 159 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Dual Path | Text-to-image MedR | 2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 56.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3SHNet | Image-to-text R@1 | 85.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Text-to-image R@1 | 43 | — | Unverified