SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving relevant items across different modalities, such as image, text, video, and audio — for example, finding the images that best match a text query. The core challenge of CMR is the heterogeneity gap: data from different modalities have distinct feature representations, so they cannot be compared directly. To bridge this gap, most CMR methods learn a shared latent embedding space into which concepts from every modality are projected, allowing cross-modal similarity to be measured with a common distance metric.
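The shared-space idea can be sketched in a few lines. Note this is a minimal illustration, not any specific method from the list below: the projection matrices are random stand-ins for what a real CMR model would learn, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: image features (e.g. from a CNN), text
# features (e.g. from a language model), and the shared latent space.
d_img, d_txt, d_shared = 2048, 768, 256

# In a real model these projections are learned; here they are random.
W_img = rng.standard_normal((d_img, d_shared)) / np.sqrt(d_img)
W_txt = rng.standard_normal((d_txt, d_shared)) / np.sqrt(d_txt)

def embed(features, W):
    """Project native features into the shared space and L2-normalize,
    so cosine similarity reduces to a plain dot product."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One query caption vs. a small gallery of images (random stand-in features).
text_query = embed(rng.standard_normal((1, d_txt)), W_txt)
image_gallery = embed(rng.standard_normal((5, d_img)), W_img)

# Once both modalities live in the same space, similarity is comparable.
scores = (text_query @ image_gallery.T).ravel()
ranking = np.argsort(-scores)  # indices of gallery images, best match first
```

Hashing-based methods in the list below follow the same recipe but map into binary codes, trading some accuracy for fast Hamming-distance search.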

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 451–500 of 522 papers

| Title | Status | Hype |
|---|---|---|
| UniVSE: Robust Visual Semantic Embeddings via Structured Semantic Representations | Code | 1 |
| Context-Aware Embeddings for Automatic Art Analysis | Code | 0 |
| CMIR-NET: A Deep Learning Based Model For Cross-Modal Retrieval In Remote Sensing | Code | 0 |
| Triplet-Based Deep Hashing Network for Cross-Modal Retrieval | | 0 |
| Unsupervised Multi-modal Hashing for Cross-modal Retrieval | | 0 |
| Learning Embodied Semantics via Music and Dance Semiotic Correlations | | 0 |
| Show, Translate and Tell | Code | 0 |
| Coupled CycleGAN: Unsupervised Hashing Network for Cross-Modal Retrieval | | 0 |
| Cross-Modal Music Retrieval and Applications: An Overview of Key Methodologies | | 0 |
| Self-Supervised Visual Representations for Cross-Modal Retrieval | | 0 |
| Deep Semantic Multimodal Hashing Network for Scalable Image-Text and Video-Text Retrievals | | 0 |
| Deep Semantic Correlation Learning Based Hashing for Multimedia Cross-Modal Retrieval | | 0 |
| Discriminative Supervised Hashing for Cross-Modal Similarity Search | | 0 |
| Semi-Supervised Cross-Modal Retrieval with Label Prediction | | 0 |
| Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences | | 0 |
| Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images | | 0 |
| Dense Multimodal Fusion for Hierarchically Joint Representation | | 0 |
| Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval | | 0 |
| Perfect match: Improved cross-modal embeddings for audio-visual synchronisation | | 0 |
| Attention-aware Deep Adversarial Hashing for Cross-Modal Retrieval | | 0 |
| Deep Cross-Modal Projection Learning for Image-Text Matching | Code | 0 |
| Cross-Modal Hamming Hashing | | 0 |
| Learning Discriminative Hashing Codes for Cross-Modal Retrieval based on Multi-view Features | | 0 |
| Revisiting Cross Modal Retrieval | | 0 |
| Category-Based Deep CCA for Fine-Grained Venue Discovery from Multimodal Data | | 0 |
| MTFH: A Matrix Tri-Factorization Hashing Framework for Efficient Cross-Modal Retrieval | Code | 0 |
| Learnable PINs: Cross-Modal Embeddings for Person Identity | Code | 0 |
| Cycle-Consistent Deep Generative Hashing for Cross-Modal Retrieval | | 0 |
| Cross-Modal Retrieval in the Cooking Context: Learning Semantic Text-Image Embeddings | Code | 0 |
| Cross-Modal Retrieval with Implicit Concept Association | | 0 |
| Finding beans in burgers: Deep semantic-visual embedding with localization | Code | 0 |
| Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval | Code | 0 |
| Stacked Cross Attention for Image-Text Matching | Code | 1 |
| Attribute-Guided Network for Cross-Modal Zero-Shot Hashing | | 0 |
| Objects that Sound | | 0 |
| Towards Deep Modeling of Music Semantics using EEG Regularizers | | 0 |
| Learning Semantic Concepts and Order for Image and Sentence Matching | | 0 |
| Unsupervised Generative Adversarial Cross-modal Hashing | | 0 |
| HashGAN: Attention-aware Deep Adversarial Hashing for Cross Modal Retrieval | | 0 |
| Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models | | 0 |
| Dual-Path Convolutional Image-Text Embeddings with Instance Loss | Code | 0 |
| CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning | | 0 |
| Multimodal Gaussian Process Latent Variable Models With Harmonization | | 0 |
| Deep Binary Reconstruction for Cross-modal Hashing | Code | 0 |
| Modality-specific Cross-modal Similarity Measurement with Recurrent Attention Network | Code | 0 |
| Deep Binaries: Encoding Semantic-Rich Cues for Efficient Textual-Visual Cross Retrieval | | 0 |
| MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal Retrieval | | 0 |
| VSE++: Improving Visual-Semantic Embeddings with Hard Negatives | Code | 1 |
| Online Asymmetric Similarity Learning for Cross-Modal Retrieval | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified |
| 5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified |
| 6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified |
| 6 | PIR | Mean Recall | 24.46 | | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified |
| 5 | DSOKR | Hits@1 | 51 | | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-18 | Median Rank | 565 | | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dual Path | Text-to-image MedR | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Text-to-image R@1 | 43 | | Unverified |
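The tables above report Recall@K (R@K: the fraction of queries whose true match appears in the top K results), Median Rank / MedR (the median position of the true match in the ranked list), Hits@1, and Mean Recall (typically the average of R@1/R@5/R@10 over both retrieval directions). A minimal sketch of how such numbers are computed from a query-by-gallery similarity matrix; the one-true-match-per-query layout is an assumption, as some benchmarks allow several correct matches:

```python
import numpy as np

def retrieval_metrics(sim, gt, ks=(1, 5, 10)):
    """sim: (n_queries, n_gallery) similarity matrix, higher = more similar.
    gt: gt[i] is the gallery index of query i's true match.
    Returns Recall@K (in percent) for each K and the median rank."""
    order = np.argsort(-sim, axis=1)  # gallery indices, best-first, per query
    # 1-based rank of the true match within each query's ranked list.
    ranks = np.argmax(order == np.asarray(gt)[:, None], axis=1) + 1
    recalls = {f"R@{k}": float(np.mean(ranks <= k)) * 100 for k in ks}
    return recalls, float(np.median(ranks))

# Toy example: 3 queries, 4 gallery items; true matches on the diagonal.
sim = np.array([[0.9, 0.1, 0.2, 0.0],
                [0.2, 0.3, 0.8, 0.1],
                [0.1, 0.7, 0.6, 0.2]])
recalls, medr = retrieval_metrics(sim, gt=[0, 1, 2])
# True-match ranks are [1, 2, 2], so R@1 is 33.3 and MedR is 2.
```

R@K rewards only the very top of the ranking, while MedR summarizes the whole list; leaderboards usually report both because a model can do well on one and poorly on the other.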