SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items across different modalities, such as image, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To bridge this gap, most CMR methods learn a shared latent embedding space into which concepts from all modalities are projected, so that their similarity can be measured with a common distance metric.
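A minimal sketch of this idea, assuming simple linear projections into the shared space (real systems learn these weights; the random features and projection matrices here are purely illustrative):

```python
import numpy as np

# Hypothetical linear projections mapping each modality's features into a
# shared d-dimensional embedding space (in practice these are learned).
rng = np.random.default_rng(0)
d_img, d_txt, d_shared = 512, 300, 128
W_img = rng.standard_normal((d_img, d_shared))
W_txt = rng.standard_normal((d_txt, d_shared))

def embed(features, W):
    """Project modality-specific features and L2-normalize them."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy features: a gallery of 4 images and 1 text query.
img_feats = rng.standard_normal((4, d_img))
txt_query = rng.standard_normal((1, d_txt))

img_emb = embed(img_feats, W_img)
txt_emb = embed(txt_query, W_txt)

# Cosine similarity is the dot product of unit vectors;
# retrieval ranks gallery items by similarity to the query.
sims = (txt_emb @ img_emb.T).ravel()
ranking = np.argsort(-sims)  # indices of images, best match first
```

Once both modalities live in the same space, text-to-image and image-to-text retrieval reduce to the same nearest-neighbor search.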

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 351–400 of 522 papers

- Learning Soft-Attention Models for Tempo-invariant Audio-Sheet Music Retrieval
- Learning Sparse Disentangled Representations for Multimodal Exclusion Retrieval
- Learning Structural Representations for Recipe Generation and Food Retrieval
- Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays
- New Ideas and Trends in Deep Multimodal Content Understanding: A Review
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation
- Objects that Sound
- OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval
- OmniVL: One Foundation Model for Image-Language and Video-Language Tasks
- Online Asymmetric Similarity Learning for Cross-Modal Retrieval
- On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation
- Paired Cross-Modal Data Augmentation for Fine-Grained Image-to-Text Retrieval
- Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval
- PATFinger: Prompt-Adapted Transferable Fingerprinting against Unauthorized Multimodal Dataset Usage
- Pathology Report Generation and Multimodal Representation Learning for Cutaneous Melanocytic Lesions
- Perfect match: Improved cross-modal embeddings for audio-visual synchronisation
- PiTL: Cross-modal Retrieval with Weakly-supervised Vision-language Pre-training via Prompting
- Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images
- Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval
- Progressive Domain-Independent Feature Decomposition Network for Zero-Shot Sketch-Based Image Retrieval
- Ranking-based Deep Cross-modal Hashing
- Rebalanced Vision-Language Retrieval Considering Structure-Aware Distillation
- Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images
- Retrieval-based Disentangled Representation Learning with Natural Language Supervision
- Retrieving and Highlighting Action with Spatiotemporal Reference
- Revisiting Cross Modal Retrieval
- Revolutionizing Text-to-Image Retrieval as Autoregressive Token-to-Voken Generation
- RREH: Reconstruction Relations Embedded Hashing for Semi-Paired Cross-Modal Retrieval
- Sample-Specific Debiasing for Better Image-Text Models
- SA-Person: Text-Based Person Retrieval with Scene-aware Re-ranking
- Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping
- Scale-Semantic Joint Decoupling Network for Image-text Retrieval in Remote Sensing
- Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge
- Seeing Speech and Sound: Distinguishing and Locating Audios in Visual Scenes
- Seeing Speech and Sound: Distinguishing and Locating Audio Sources in Visual Scenes
- See What You See: Self-supervised Cross-modal Retrieval of Visual Stimuli from Brain Activity
- Self-supervised Modal and View Invariant Feature Learning
- Self-Supervised Modality-Invariant and Modality-Specific Feature Learning for 3D Objects
- Self-Supervised Visual Representations for Cross-Modal Retrieval
- Semantic Adversarial Network for Zero-Shot Sketch-Based Image Retrieval
- Semantic Compositions Enhance Vision-Language Contrastive Learning
- SemCORE: A Semantic-Enhanced Generative Cross-Modal Retrieval Framework with MLLMs
- Simple to Complex Cross-modal Learning to Rank
- Snap and Diagnose: An Advanced Multimodal Retrieval System for Identifying Plant Diseases in the Wild
- Sound Source Localization is All about Cross-Modal Alignment
- Start from Video-Music Retrieval: An Inter-Intra Modal Loss for Cross Modal Retrieval
- SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval
- T3D: Advancing 3D Medical Vision-Language Pre-training by Learning Multi-View Visual Consistency
- Task-adaptive Asymmetric Deep Cross-modal Hashing
- Learning Joint Embedding for Cross-Modal Retrieval
Page 8 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified |
| 5 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified |
| 6 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified |
| 6 | PIR | Mean Recall | 24.46 | | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified |
| 5 | DSOKR | Hits@1 | 51 | | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-18 | Median Rank | 565 | | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Dual Path | Text-to-image Medr | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NAPReg | Text-to-image R@1 | 43 | | Unverified |
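The benchmark tables above report Recall@K, Mean Recall, and Median Rank. A minimal sketch of how such retrieval metrics are typically computed from a query-by-gallery similarity matrix (the function name and toy data below are illustrative, not taken from any listed paper):

```python
import numpy as np

def retrieval_metrics(sim, ks=(1, 5, 10)):
    """Compute Recall@K (as a percentage) and Median Rank from a
    similarity matrix where query i's ground-truth match is gallery item i."""
    n = sim.shape[0]
    order = np.argsort(-sim, axis=1)  # gallery indices, best match first
    # 1-based rank of the ground-truth item for each query
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])
    recalls = {f"R@{k}": 100.0 * np.mean(ranks <= k) for k in ks}
    return recalls, float(np.median(ranks))

# Toy 3x3 similarity matrix: queries 0 and 2 rank their match first,
# query 1 ranks its match second.
sim = np.array([[0.9, 0.1, 0.0],
                [0.8, 0.5, 0.2],
                [0.1, 0.2, 0.7]])
recalls, med = retrieval_metrics(sim, ks=(1, 2))
# recalls["R@1"] ≈ 66.7, recalls["R@2"] == 100.0, med == 1.0
```

Mean Recall, as reported for the remote-sensing benchmarks above, is conventionally the average of R@1, R@5, and R@10 over both retrieval directions.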