SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving relevant items across different modalities, such as image, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To bridge this gap, most CMR methods learn a shared latent embedding space into which items from every modality are projected, so that cross-modal similarity can be measured with a simple distance metric.
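The shared-embedding-space idea can be sketched in a few lines. This is a minimal illustration, not any specific method from the papers below: random projection matrices stand in for trained encoders (in practice a vision model and a language model trained so matching pairs land close together), and all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for trained encoders. In a real system these would be,
# e.g., a ViT/CNN for images and a transformer for text, trained jointly.
def encode(feats: np.ndarray, proj: np.ndarray) -> np.ndarray:
    z = feats @ proj                                       # project into the shared space
    return z / np.linalg.norm(z, axis=-1, keepdims=True)   # L2-normalize

# Modality-specific inputs of different dimensionality (the heterogeneity gap).
img_feats = rng.normal(size=(4, 512))   # 4 images, 512-d visual features
txt_feats = rng.normal(size=(4, 300))   # 4 captions, 300-d text features

W_img = rng.normal(size=(512, 128))     # projections into a 128-d shared space
W_txt = rng.normal(size=(300, 128))

img_emb = encode(img_feats, W_img)
txt_emb = encode(txt_feats, W_txt)

# Text-to-image retrieval: on L2-normalized embeddings, cosine similarity
# reduces to a dot product; rank gallery images for each query caption.
sim = txt_emb @ img_emb.T               # (n_queries, n_gallery)
ranking = np.argsort(-sim, axis=1)      # best match first
print(ranking[0])                       # image indices, most similar first
```

With trained (rather than random) projections, `ranking[i][0]` would be the gallery image matching caption `i`; the distance metric itself is just cosine similarity in the shared space.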

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 251–300 of 522 papers

| Title | Status | Hype |
|---|---|---|
| FaD-VLP: Fashion Vision-and-Language Pre-training towards Unified Retrieval and Captioning | — | 0 |
| Learning by Hallucinating: Vision-Language Pre-training with Weak Supervision | — | 0 |
| Dissecting Deep Metric Learning Losses for Image-Text Retrieval | Code | 0 |
| PoseScript: Linking 3D Human Poses and Natural Language | Code | 2 |
| Cross-Modal Fusion Distillation for Fine-Grained Sketch-Based Image Retrieval | Code | 1 |
| Cross-modal Search Method of Technology Video based on Adversarial Learning and Feature Fusion | — | 0 |
| Deep Evidential Learning with Noisy Correspondence for Cross-Modal Retrieval | Code | 1 |
| ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training | Code | 0 |
| Text-Adaptive Multiple Visual Prototype Matching for Video-Text Retrieval | — | 0 |
| Information-Theoretic Hashing for Zero-Shot Cross-Modal Retrieval | — | 0 |
| Deep Manifold Hashing: A Divide-and-Conquer Approach for Semi-Paired Unsupervised Cross-Modal Retrieval | — | 0 |
| OmniVL: One Foundation Model for Image-Language and Video-Language Tasks | — | 0 |
| Learning to Evaluate Performance of Multi-modal Semantic Localization | Code | 1 |
| A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language | Code | 1 |
| A Channel Mix Method for Fine-Grained Cross-Modal Retrieval | Code | 0 |
| Cross-Lingual Cross-Modal Retrieval with Noise-Robust Learning | Code | 1 |
| MuLan: A Joint Embedding of Music Audio and Natural Language | Code | 0 |
| Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0 |
| See What You See: Self-supervised Cross-modal Retrieval of Visual Stimuli from Brain Activity | — | 0 |
| Learning Modal-Invariant and Temporal-Memory for Video-based Visible-Infrared Person Re-Identification | Code | 1 |
| ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval | Code | 0 |
| Paired Cross-Modal Data Augmentation for Fine-Grained Image-to-Text Retrieval | — | 0 |
| Adaptive Asymmetric Label-guided Hashing for Multimedia Search | — | 0 |
| Intra-Modal Constraint Loss For Image-Text Retrieval | Code | 0 |
| Integrating multi-label contrastive learning with dual adversarial graph neural networks for cross-modal retrieval | Code | 1 |
| Contrastive Cross-Modal Knowledge Sharing Pre-training for Vision-Language Representation Learning and Retrieval | — | 0 |
| Exploiting Transformation Invariance and Equivariance for Self-supervised Sound Localisation | — | 0 |
| Emphasizing Complementary Samples for Non-literal Cross-modal Retrieval | — | 0 |
| Comprehending and Ordering Semantics for Image Captioning | Code | 2 |
| HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval | — | 0 |
| Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis | — | 0 |
| Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval | Code | 2 |
| Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information | Code | 1 |
| Uncertainty-based Cross-Modal Retrieval with Probabilistic Representations | — | 0 |
| Transformer Decoders with MultiModal Regularization for Cross-Modal Food Retrieval | Code | 1 |
| Unsupervised Contrastive Hashing for Cross-Modal Retrieval in Remote Sensing | — | 0 |
| Learning Similarity Preserving Binary Codes for Recommender Systems | — | 0 |
| COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval | — | 0 |
| ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval | — | 0 |
| Learning Program Representations for Food Images and Cooking Recipes | — | 0 |
| Cross-Media Scientific Research Achievements Retrieval Based on Deep Language Model | — | 0 |
| On Metric Learning for Audio-Text Cross-Modal Retrieval | Code | 1 |
| LILE: Look In-Depth before Looking Elsewhere -- A Dual Attention Network using Transformers for Cross-Modal Information Retrieval in Histopathology Archives | — | 0 |
| Vision-Language Pre-Training with Triple Contrastive Learning | Code | 2 |
| Efficient Cross-Modal Retrieval via Deep Binary Hashing and Quantization | Code | 0 |
| IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1 |
| Discriminative Supervised Subspace Learning for Cross-modal Retrieval | — | 0 |
| Deep Unsupervised Contrastive Hashing for Large-Scale Cross-Modal Text-Image Retrieval in Remote Sensing | — | 0 |
| A Text-Image Pair Is not Enough: Language-Vision Relation Inference with Auxiliary Modality Translation | — | 0 |
| A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval | Code | 1 |

Benchmark Results
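The tables below report R@1 (Recall@1), Mean Recall, Median Rank (Medr), and Hits@1. As a rough sketch of how such numbers are produced, the following assumes the common conventions that Mean Recall averages R@1/R@5/R@10 and that query `i`'s ground-truth match is gallery item `i`; individual benchmarks may differ in detail.

```python
import numpy as np

def retrieval_metrics(sim: np.ndarray) -> dict:
    """Compute common CMR metrics from a (query x gallery) similarity matrix,
    assuming the ground-truth match for query i is gallery item i."""
    n = sim.shape[0]
    order = np.argsort(-sim, axis=1)              # gallery indices, best first
    # 1-based rank of the ground-truth item for each query
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])
    r_at = {k: float(np.mean(ranks <= k) * 100) for k in (1, 5, 10)}
    return {
        "R@1": r_at[1],
        "R@5": r_at[5],
        "R@10": r_at[10],
        "Mean Recall": (r_at[1] + r_at[5] + r_at[10]) / 3,
        "Medr": float(np.median(ranks)),          # median rank of the match
    }

# Toy 3x3 similarity matrix: queries 0 and 2 rank their match first,
# query 1 ranks its match second.
sim = np.array([[0.9, 0.1, 0.0],
                [0.8, 0.7, 0.1],
                [0.0, 0.2, 0.6]])
print(retrieval_metrics(sim))
```

Higher is better for the recall-style metrics; lower is better for Median Rank, which is why the Median Rank table below ranks 159 above 565.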

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | — | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | — | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | — | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | — | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | — | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | — | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | — | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | — | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | — | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | — | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | — | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | — | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | — | Unverified |
| 5 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | — | Unverified |
| 6 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | — | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | — | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | — | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | — | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | — | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | — | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | — | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | — | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | — | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | — | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | — | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | — | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | — | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | — | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | — | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | — | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | — | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | — | Unverified |
| 6 | PIR | Mean Recall | 24.46 | — | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | — | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | — | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | — | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | — | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | — | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | — | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | — | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | — | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | — | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | — | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | — | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | — | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | — | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | — | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | — | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | — | Unverified |
| 5 | DSOKR | Hits@1 | 51 | — | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | — | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | — | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | — | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 81.9 | — | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GeoCLAP | Median Rank | 159 | — | Unverified |
| 2 | ResNet-18 | Median Rank | 565 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dual Path | Text-to-image Medr | 2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 56.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Text-to-image R@1 | 43 | — | Unverified |