SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items of one modality (e.g., images) given a query from another (e.g., text), spanning modalities such as image, text, video, and audio. The core challenge of CMR is the heterogeneity gap: data from different modalities have distinct representations, so they cannot be compared directly. To address this, most CMR methods learn a shared latent embedding space into which concepts from all modalities are projected, allowing cross-modal similarity to be measured with a simple distance metric.
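The shared-embedding idea can be illustrated with a minimal NumPy sketch. The projection matrices `W_img` and `W_txt` below are hypothetical and randomly initialized for illustration; in a real CMR system they would be learned (e.g., with a contrastive or triplet loss) so that matching image-text pairs land close together in the shared space.

```python
import numpy as np

def project(x, W):
    """Linearly project modality-specific features into the shared space,
    then L2-normalize rows so cosine similarity reduces to a dot product."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d_img, d_txt, d_shared = 512, 300, 128

# Hypothetical projection heads (would be learned in practice).
W_img = rng.normal(size=(d_img, d_shared))
W_txt = rng.normal(size=(d_txt, d_shared))

img_feats = rng.normal(size=(4, d_img))   # e.g. CNN/ViT image features
txt_feats = rng.normal(size=(6, d_txt))   # e.g. sentence embeddings

# Cosine-similarity matrix between every image and every caption.
sim = project(img_feats, W_img) @ project(txt_feats, W_txt).T

# Text-to-image retrieval: rank all images for each caption by similarity.
ranking = np.argsort(-sim.T, axis=1)
print(sim.shape, ranking.shape)  # (4, 6) (6, 4)
```

Metrics such as R@1 (the fraction of queries whose ground-truth match is ranked first) and median rank, used in the leaderboards below, are computed directly from such a ranking matrix.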

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 101–150 of 522 papers

Title | Status | Hype
Deep Sketched Output Kernel Regression for Structured Prediction | Code | 0
What If We Recaption Billions of Web Images with LLaMA-3? |  | 0
Merlin: A Vision Language Foundation Model for 3D Computed Tomography | Code | 3
Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language | Code | 2
No Captions, No Problem: Captionless 3D-CLIP Alignment with Hard Negatives via CLIP Knowledge and LLMs |  | 0
Multi-Modal Generative Embedding Model |  | 0
CaLa: Complementary Association Learning for Augmenting Composed Image Retrieval | Code | 1
RREH: Reconstruction Relations Embedded Hashing for Semi-Paired Cross-Modal Retrieval |  | 0
Distilling Vision-Language Pretraining for Efficient Cross-Modal Retrieval |  | 0
Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models |  | 0
MVBIND: Self-Supervised Music Recommendation For Videos Via Embedding Space Binding |  | 0
Global–Local Information Soft-Alignment for Cross-Modal Remote-Sensing Image–Text Retrieval |  | 0
All in One Framework for Multimodal Re-identification in the Wild |  | 0
COM3D: Leveraging Cross-View Correspondence and Cross-Modal Mining for 3D Retrieval |  | 0
Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models |  | 0
Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2
3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting | Code | 0
Anchor-aware Deep Metric Learning for Audio-visual Retrieval |  | 0
Wills Aligner: Multi-Subject Collaborative Brain Visual Decoding |  | 0
Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning | Code | 0
Knowledge-enhanced Visual-Language Pretraining for Computational Pathology | Code | 1
Bridging Vision and Language Spaces with Assignment Prediction | Code | 0
Learning with Noisy Correspondence |  | 0
Cross-modal Retrieval with Noisy Correspondence via Consistency Refining and Mining | Code | 1
VXP: Voxel-Cross-Pixel Large-scale Image-LiDAR Place Recognition | Code | 1
A Unified Optimal Transport Framework for Cross-Modal Retrieval with Noisy Labels |  | 0
Improving Medical Multi-modal Contrastive Learning with Expert Annotations | Code | 0
Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation | Code | 2
Learning to Rematch Mismatched Pairs for Robust Cross-Modal Retrieval | Code | 1
Large Language Models are In-Context Molecule Learners | Code | 2
Tri-Modal Motion Retrieval by Learning a Joint Embedding Space |  | 0
Impression-CLIP: Contrastive Shape-Impression Embedding for Fonts | Code | 0
Distinctive Image Captioning: Leveraging Ground Truth Captions in CLIP Guided Reinforcement Learning | Code | 1
Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond |  | 0
Mind the Modality Gap: Towards a Remote Sensing Vision-Language Model via Cross-modal Alignment |  | 0
Large Language Models for Captioning and Retrieving Remote Sensing Images |  | 0
Zero-shot sketch-based remote sensing image retrieval based on multi-level and attention-guided tokenization | Code | 0
Cross-Modal Coordination Across a Diverse Set of Input Modalities |  | 0
Enhancing medical vision-language contrastive learning via inter-matching relation modelling |  | 0
Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering |  | 0
Cross-modal Retrieval for Knowledge-based Visual Question Answering | Code | 1
Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment | Code | 2
Fine-grained Prototypical Voting with Heterogeneous Mixup for Semi-supervised 2D-3D Cross-modal Retrieval |  | 0
Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation |  | 0
LeanVec: Searching vectors faster by making them fit | Code | 2
Masked Contrastive Reconstruction for Cross-modal Medical Image-Report Retrieval |  | 0
SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing | Code | 2
TF-CLIP: Learning Text-free CLIP for Video-based Person Re-Identification | Code | 1
CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer |  | 0
WikiMuTe: A web-sourced dataset of semantic descriptions for music audio |  | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 |  | Unverified
2 | VAST | Text-to-image R@1 | 68 |  | Unverified
3 | X2-VLM (large) | Text-to-image R@1 | 67.7 |  | Unverified
4 | BEiT-3 | Text-to-image R@1 | 67.2 |  | Unverified
5 | XFM (base) | Text-to-image R@1 | 67 |  | Unverified
6 | X2-VLM (base) | Text-to-image R@1 | 66.2 |  | Unverified
7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 |  | Unverified
8 | OmniVL (14M) | Text-to-image R@1 | 64.8 |  | Unverified
9 | VSE-Gradient | Text-to-image R@1 | 63.6 |  | Unverified
10 | X-VLM (base) | Text-to-image R@1 | 63.4 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | X2-VLM (large) | Image-to-text R@1 | 98.8 |  | Unverified
2 | X2-VLM (base) | Image-to-text R@1 | 98.5 |  | Unverified
3 | BEiT-3 | Image-to-text R@1 | 98 |  | Unverified
4 | OmniVL (14M) | Image-to-text R@1 | 97.3 |  | Unverified
5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 |  | Unverified
6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 |  | Unverified
7 | X-VLM (base) | Image-to-text R@1 | 97.1 |  | Unverified
8 | VSE-Gradient | Image-to-text R@1 | 97 |  | Unverified
9 | ALIGN | Image-to-text R@1 | 95.3 |  | Unverified
10 | VAST | Text-to-image R@1 | 91 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 |  | Unverified
2 | VLPCook | Image-to-text R@1 | 73.6 |  | Unverified
3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 |  | Unverified
4 | T-Food | Image-to-text R@1 | 68.2 |  | Unverified
5 | X-MRS | Image-to-text R@1 | 64 |  | Unverified
6 | H-T | Image-to-text R@1 | 60 |  | Unverified
7 | SCAN | Image-to-text R@1 | 54 |  | Unverified
8 | ACME | Image-to-text R@1 | 51.8 |  | Unverified
9 | VLPCook | Image-to-text R@1 | 45.2 |  | Unverified
10 | AdaMine | Image-to-text R@1 | 39.8 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 |  | Unverified
2 | GeoRSCLIP-FT | Mean Recall | 38.87 |  | Unverified
3 | GLISA | Mean Recall | 37.69 |  | Unverified
4 | RemoteCLIP | Mean Recall | 36.35 |  | Unverified
5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 |  | Unverified
6 | PIR | Mean Recall | 24.46 |  | Unverified
7 | DOVE | Mean Recall | 22.72 |  | Unverified
8 | SWAN | Mean Recall | 20.61 |  | Unverified
9 | GaLR | Mean Recall | 18.96 |  | Unverified
10 | AMFMN | Mean Recall | 15.53 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 |  | Unverified
2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 |  | Unverified
3 | GLISA | Image-to-text R@1 | 32.08 |  | Unverified
4 | RemoteCLIP | Image-to-text R@1 | 28.76 |  | Unverified
5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 |  | Unverified
6 | PIR | Image-to-text R@1 | 18.14 |  | Unverified
7 | DOVE | Image-to-text R@1 | 16.81 |  | Unverified
8 | GaLR | Image-to-text R@1 | 14.82 |  | Unverified
9 | SWAN | Image-to-text R@1 | 13.35 |  | Unverified
10 | AMFMN | Image-to-text R@1 | 10.63 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLASS (ORMA) | Hits@1 | 67.4 |  | Unverified
2 | ORMA | Hits@1 | 66.5 |  | Unverified
3 | Song et al. | Hits@1 | 56.5 |  | Unverified
4 | CLASS (AMAN) | Hits@1 | 51.1 |  | Unverified
5 | DSOKR | Hits@1 | 51 |  | Unverified
6 | AMAN | Hits@1 | 49.4 |  | Unverified
7 | All-Ensemble | Hits@1 | 34.4 |  | Unverified
8 | MLP1 | Hits@1 | 22.4 |  | Unverified
9 | GCN2 | Hits@1 | 22.3 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 81.9 |  | Unverified
2 | Dual-path CNN | Image-to-text R@1 | 41.2 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-18 | Median Rank | 565 |  | Unverified
2 | GeoCLAP | Median Rank | 159 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dual Path | Text-to-image MedR | 2 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 56.2 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3SHNet | Image-to-text R@1 | 85.8 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Text-to-image R@1 | 43 |  | Unverified