SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving relevant items across different modalities, such as image, text, video, and audio. The core challenge of CMR is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To address this, most CMR methods learn a shared latent embedding space into which concepts from all modalities are projected, so that their similarity can be measured with a common distance metric.
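The shared-embedding-space idea can be sketched in a few lines. This is a minimal illustration, not any particular paper's method: the encoders and projection matrices here are random stand-ins for learned models, and the feature dimensions (2048 for images, 768 for text, 256 shared) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pre-computed features from modality-specific encoders
# (e.g. a CNN for images, a transformer for text); dims differ per modality.
image_feats = rng.normal(size=(5, 2048))   # 5 gallery images
text_feats = rng.normal(size=(3, 768))     # 3 text queries

# Learned projections into a shared 256-d latent space (random here).
W_img = rng.normal(size=(2048, 256))
W_txt = rng.normal(size=(768, 256))

def embed(x, W):
    """Project into the shared space and L2-normalize, so the
    dot product between embeddings equals cosine similarity."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

img_emb = embed(image_feats, W_img)
txt_emb = embed(text_feats, W_txt)

# Text-to-image retrieval: similarity matrix (queries x gallery),
# then rank gallery items by descending similarity.
sims = txt_emb @ img_emb.T            # shape (3, 5)
ranking = np.argsort(-sims, axis=1)   # best-matching image first
print(ranking[:, 0])                  # top-1 image index per text query
```

Because both modalities are normalized into the same space, the heterogeneity gap reduces to an ordinary nearest-neighbor search over cosine similarity.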

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 201–250 of 522 papers

| Title | Status | Hype |
|---|---|---|
| PromptHash: Affinity-Prompted Collaborative Cross-Modal Learning for Adaptive Hashing Retrieval | Code | 0 |
| COOKIE: Contrastive Cross-Modal Knowledge Sharing Pre-Training for Vision-Language Representation | Code | 0 |
| Dual-Path Convolutional Image-Text Embeddings with Instance Loss | Code | 0 |
| Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search | Code | 0 |
| NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings | Code | 0 |
| DocMMIR: A Framework for Document Multi-modal Information Retrieval | Code | 0 |
| MXM-CLR: A Unified Framework for Contrastive Learning of Multifold Cross-Modal Representations | Code | 0 |
| NeighborRetr: Balancing Hub Centrality in Cross-Modal Retrieval | Code | 0 |
| Multimodal LLM Enhanced Cross-lingual Cross-modal Retrieval | Code | 0 |
| Dissecting Deep Metric Learning Losses for Image-Text Retrieval | Code | 0 |
| 3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting | Code | 0 |
| ContextRefine-CLIP for EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge 2025 | Code | 0 |
| ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval | Code | 0 |
| Context-Aware Embeddings for Automatic Art Analysis | Code | 0 |
| MuLan: A Joint Embedding of Music Audio and Natural Language | Code | 0 |
| Multilingual Vision-Language Pre-training for the Remote Sensing Domain | Code | 0 |
| Content-Based Video-Music Retrieval Using Soft Intra-Modal Structure Constraint | Code | 0 |
| DIME: An Online Tool for the Visual Comparison of Cross-Modal Retrieval Models | Code | 0 |
| ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map | Code | 0 |
| Modality-specific Cross-modal Similarity Measurement with Recurrent Attention Network | Code | 0 |
| Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images | Code | 0 |
| MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0 |
| Deep Visual-Semantic Alignments for Generating Image Descriptions | Code | 0 |
| Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-modal Retrieval | Code | 0 |
| Leveraging Acoustic Images for Effective Self-Supervised Audio Representation Learning | Code | 0 |
| Deep Supervised Cross-Modal Retrieval | Code | 0 |
| Learning Visual Actions Using Multiple Verb-Only Labels | Code | 0 |
| Deep Sketched Output Kernel Regression for Structured Prediction | Code | 0 |
| Learning Text-Image Joint Embedding for Efficient Cross-Modal Retrieval with Deep Feature Engineering | Code | 0 |
| Towards Cross-Modal Text-Molecule Retrieval with Better Modality Alignment | Code | 0 |
| Learning TFIDF Enhanced Joint Embedding for Recipe-Image Cross-Modal Retrieval Service | Code | 0 |
| Deep Reversible Consistency Learning for Cross-modal Retrieval | Code | 0 |
| Adversarial Modality Alignment Network for Cross-Modal Molecule Retrieval | Code | 0 |
| CMIR-NET : A Deep Learning Based Model For Cross-Modal Retrieval In Remote Sensing | Code | 0 |
| Deep Joint-Semantics Reconstructing Hashing for Large-Scale Unsupervised Cross-Modal Retrieval | Code | 0 |
| Learnable PINs: Cross-Modal Embeddings for Person Identity | Code | 0 |
| Learning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images | Code | 0 |
| Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0 |
| Deep Cross-Modal Projection Learning for Image-Text Matching | Code | 0 |
| Deep Cross-Modal Hashing | Code | 0 |
| InvGC: Robust Cross-Modal Retrieval by Inverse Graph Convolution | Code | 0 |
| Deep Class-guided Hashing for Multi-label Cross-modal Retrieval | Code | 0 |
| Deep Binary Reconstruction for Cross-modal Hashing | Code | 0 |
| DAC: 2D-3D Retrieval with Noisy Labels via Divide-and-Conquer Alignment and Correction | Code | 0 |
| Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning | Code | 0 |
| Intra-Modal Constraint Loss For Image-Text Retrieval | Code | 0 |
| Language-Agnostic Visual-Semantic Embeddings | Code | 0 |
| MTFH: A Matrix Tri-Factorization Hashing Framework for Efficient Cross-Modal Retrieval | Code | 0 |
| CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features | | 0 |
| Cross-View Image Retrieval -- Ground to Aerial Image Retrieval through Deep Learning | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified |
| 5 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified |
| 6 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified |
| 6 | PIR | Mean Recall | 24.46 | | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified |
| 5 | DSOKR | Hits@1 | 51 | | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-18 | Median Rank | 565 | | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dual Path | Text-to-image Medr | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Text-to-image R@1 | 43 | | Unverified |
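The leaderboards above report standard retrieval metrics such as Recall@1 (R@1), Mean Recall, Median Rank (MedR), and Hits@1. Under the usual definitions (one correct gallery item per query), these can be computed from a similarity matrix as sketched below; the function name and toy numbers are illustrative, not from any benchmark above.

```python
import numpy as np

def retrieval_metrics(sims, gt, ks=(1, 5, 10)):
    """Recall@K (in %) and median rank from a (num_queries x num_gallery)
    similarity matrix; gt[i] is the index of query i's correct item."""
    order = np.argsort(-sims, axis=1)  # gallery sorted by descending similarity
    # 0-based rank of the ground-truth item for each query
    ranks = np.argmax(order == np.asarray(gt)[:, None], axis=1)
    recall = {k: float(np.mean(ranks < k)) * 100 for k in ks}
    med_rank = float(np.median(ranks)) + 1  # report 1-based median rank
    return recall, med_rank

# Toy example: 3 queries, 4 gallery items.
sims = np.array([[0.9, 0.1, 0.2, 0.0],
                 [0.2, 0.3, 0.8, 0.1],
                 [0.5, 0.6, 0.1, 0.4]])
gt = [0, 2, 0]  # correct gallery index for each query
recall, med = retrieval_metrics(sims, gt)
print(recall[1], med)  # R@1 is about 66.67 (2 of 3 correct at rank 1), MedR is 1.0
```

Mean Recall, as used in the remote-sensing tables, is typically the average of R@1/R@5/R@10 over both retrieval directions, so it can be built from the same `recall` dictionary.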