SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving relevant items in one modality given a query in another, across modalities such as image, text, video, and audio. The core challenge of CMR is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To address this, most CMR methods learn a shared latent embedding space into which concepts from different modalities are projected, so that their similarity can be measured with a distance metric.
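As a minimal sketch of this idea, the NumPy snippet below projects pre-extracted image and text features into a shared space and ranks items by cosine similarity. The feature dimensions and the projection matrices `W_img` and `W_txt` are hypothetical placeholders (random here; in practice they would be trained with a contrastive or ranking loss):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted unimodal features (4 paired items)
img_feats = rng.normal(size=(4, 512))   # e.g. CNN image features
txt_feats = rng.normal(size=(4, 300))   # e.g. sentence embeddings

# Untrained stand-ins for learned projections into a 128-d shared space
W_img = rng.normal(size=(512, 128))
W_txt = rng.normal(size=(300, 128))

def embed(x, W):
    """Project into the shared space and L2-normalize,
    so the dot product equals cosine similarity."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

img_emb = embed(img_feats, W_img)
txt_emb = embed(txt_feats, W_txt)

# Text-to-image retrieval: similarity of every text query to every image
sim = txt_emb @ img_emb.T           # (4 queries x 4 gallery images)
ranking = np.argsort(-sim, axis=1)  # image indices, best match first
```

With trained projections, the nearest image in this space to a text query would be its semantic match; here the ranking is arbitrary but the mechanics are the same.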

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 351–400 of 522 papers

Title | Status | Hype
Learning Program Representations for Food Images and Cooking Recipes | - | 0
Cross-Media Scientific Research Achievements Retrieval Based on Deep Language Model | - | 0
LILE: Look In-Depth before Looking Elsewhere -- A Dual Attention Network using Transformers for Cross-Modal Information Retrieval in Histopathology Archives | - | 0
Efficient Cross-Modal Retrieval via Deep Binary Hashing and Quantization | Code | 0
Discriminative Supervised Subspace Learning for Cross-modal Retrieval | - | 0
Deep Unsupervised Contrastive Hashing for Large-Scale Cross-Modal Text-Image Retrieval in Remote Sensing | - | 0
A Text-Image Pair Is not Enough: Language-Vision Relation Inference with Auxiliary Modality Translation | - | 0
EI-CLIP: Entity-Aware Interventional Contrastive Learning for E-Commerce Cross-Modal Retrieval | - | 0
CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising | - | 0
Multi-Modal Mutual Information Maximization: A Novel Approach for Unsupervised Deep Cross-Modal Hashing | - | 0
Variational Autoencoder with CCA for Audio-Visual Cross-Modal Retrieval | - | 0
SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval | - | 0
MURAL: Multimodal, Multitask Representations Across Languages | - | 0
Inflate and Shrink: Enriching and Reducing Interactions for Fast Text-Image Retrieval | - | 0
Learning Text-Image Joint Embedding for Efficient Cross-Modal Retrieval with Deep Feature Engineering | Code | 0
VLDeformer: Vision-Language Decomposed Transformer for Fast Cross-Modal Retrieval | - | 0
Learning Structural Representations for Recipe Generation and Food Retrieval | - | 0
Self-Supervised Modality-Invariant and Modality-Specific Feature Learning for 3D Objects | - | 0
Calibrating Probabilistic Embeddings for Cross-Modal Retrieval | - | 0
EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling | - | 0
MURAL: Multimodal, Multitask Retrieval Across Languages | - | 0
Learning Joint Embedding with Modality Alignments for Cross-Modal Retrieval of Recipes and Food Images | - | 0
Learning TFIDF Enhanced Joint Embedding for Recipe-Image Cross-Modal Retrieval Service | Code | 0
Evaluation of Audio-Visual Alignments in Visually Grounded Speech Models | Code | 0
OPT: Omni-Perception Pre-Trainer for Cross-Modal Understanding and Generation | Code | 0
Graph Pattern Loss based Diversified Attention Network for Cross-Modal Retrieval | - | 0
Cross-Modal Center Loss for 3D Cross-Modal Retrieval | - | 0
Multi-Modal Relational Graph for Cross-Modal Video Moment Retrieval | - | 0
Cross-Modal Discrete Representation Learning | - | 0
Exploring modality-agnostic representations for music classification | Code | 0
Cross-lingual Cross-modal Pretraining for Multimodal Retrieval | - | 0
Towards Efficient Cross-Modal Visual Textual Retrieval using Transformer-Encoder Deep Features | - | 0
More Than Just Attention: Improving Cross-Modal Attentions with Contrastive Constraints for Image-Text Matching | - | 0
Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching | - | 0
FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval | Code | 0
Cross-Modal and Multimodal Data Analysis Based on Functional Mapping of Spectral Descriptors and Manifold Regularization | - | 0
T-EMDE: Sketching-based global similarity for cross-modal retrieval | - | 0
Multimodal Contrastive Training for Visual Representation Learning | - | 0
Cross-Modal Retrieval Augmentation for Multi-Modal Classification | - | 0
Continual learning in cross-modal retrieval | - | 0
Integrating Information Theory and Adversarial Learning for Cross-modal Retrieval | - | 0
Discriminative Semantic Transitive Consistency for Cross-Modal Learning | - | 0
Cross-modal Image Retrieval with Deep Mutual Information Maximization | - | 0
CHEF: Cross-modal Hierarchical Embeddings for Food Domain Retrieval | Code | 0
COOKIE: Contrastive Cross-Modal Knowledge Sharing Pre-Training for Vision-Language Representation | Code | 0
Adversarial Attack on Deep Cross-Modal Hamming Retrieval | - | 0
Wasserstein Coupled Graph Learning for Cross-Modal Retrieval | - | 0
UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning | Code | 0
Cross-Modal Retrieval and Synthesis (X-MRS): Closing the Modality Gap in Shared Representation Learning | - | 0
Learning Disentangled Latent Factors from Paired Data in Cross-Modal Retrieval: An Implicit Identifiable VAE Approach | - | 0
Page 8 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | - | Unverified
2 | VAST | Text-to-image R@1 | 68 | - | Unverified
3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | - | Unverified
4 | BEiT-3 | Text-to-image R@1 | 67.2 | - | Unverified
5 | XFM (base) | Text-to-image R@1 | 67 | - | Unverified
6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | - | Unverified
7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | - | Unverified
8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | - | Unverified
9 | VSE-Gradient | Text-to-image R@1 | 63.6 | - | Unverified
10 | X-VLM (base) | Text-to-image R@1 | 63.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | - | Unverified
2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | - | Unverified
3 | BEiT-3 | Image-to-text R@1 | 98 | - | Unverified
4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | - | Unverified
5 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | - | Unverified
6 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | - | Unverified
7 | X-VLM (base) | Image-to-text R@1 | 97.1 | - | Unverified
8 | VSE-Gradient | Image-to-text R@1 | 97 | - | Unverified
9 | ALIGN | Image-to-text R@1 | 95.3 | - | Unverified
10 | VAST | Text-to-image R@1 | 91 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | - | Unverified
2 | VLPCook | Image-to-text R@1 | 73.6 | - | Unverified
3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | - | Unverified
4 | T-Food | Image-to-text R@1 | 68.2 | - | Unverified
5 | X-MRS | Image-to-text R@1 | 64 | - | Unverified
6 | H-T | Image-to-text R@1 | 60 | - | Unverified
7 | SCAN | Image-to-text R@1 | 54 | - | Unverified
8 | ACME | Image-to-text R@1 | 51.8 | - | Unverified
9 | VLPCook | Image-to-text R@1 | 45.2 | - | Unverified
10 | AdaMine | Image-to-text R@1 | 39.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | - | Unverified
2 | GeoRSCLIP-FT | Mean Recall | 38.87 | - | Unverified
3 | GLISA | Mean Recall | 37.69 | - | Unverified
4 | RemoteCLIP | Mean Recall | 36.35 | - | Unverified
5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | - | Unverified
6 | PIR | Mean Recall | 24.46 | - | Unverified
7 | DOVE | Mean Recall | 22.72 | - | Unverified
8 | SWAN | Mean Recall | 20.61 | - | Unverified
9 | GaLR | Mean Recall | 18.96 | - | Unverified
10 | AMFMN | Mean Recall | 15.53 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | - | Unverified
2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | - | Unverified
3 | GLISA | Image-to-text R@1 | 32.08 | - | Unverified
4 | RemoteCLIP | Image-to-text R@1 | 28.76 | - | Unverified
5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | - | Unverified
6 | PIR | Image-to-text R@1 | 18.14 | - | Unverified
7 | DOVE | Image-to-text R@1 | 16.81 | - | Unverified
8 | GaLR | Image-to-text R@1 | 14.82 | - | Unverified
9 | SWAN | Image-to-text R@1 | 13.35 | - | Unverified
10 | AMFMN | Image-to-text R@1 | 10.63 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLASS (ORMA) | Hits@1 | 67.4 | - | Unverified
2 | ORMA | Hits@1 | 66.5 | - | Unverified
3 | Song et al. | Hits@1 | 56.5 | - | Unverified
4 | CLASS (AMAN) | Hits@1 | 51.1 | - | Unverified
5 | DSOKR | Hits@1 | 51 | - | Unverified
6 | AMAN | Hits@1 | 49.4 | - | Unverified
7 | All-Ensemble | Hits@1 | 34.4 | - | Unverified
8 | MLP1 | Hits@1 | 22.4 | - | Unverified
9 | GCN2 | Hits@1 | 22.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 81.9 | - | Unverified
2 | Dual-path CNN | Image-to-text R@1 | 41.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-18 | Median Rank | 565 | - | Unverified
2 | GeoCLAP | Median Rank | 159 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dual Path | Text-to-image MedR | 2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 56.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3SHNet | Image-to-text R@1 | 85.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Text-to-image R@1 | 43 | - | Unverified
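As a rough illustration of how the retrieval metrics in the tables above (Recall@1, median rank) are computed, the sketch below ranks each query's true match in a toy similarity matrix, assuming the ground-truth pairing lies on the diagonal; the matrix values are invented for the example:

```python
import numpy as np

def match_ranks(sim):
    """Rank (1-based) of the true match for each query, assuming the
    ground-truth pairing is the diagonal of the similarity matrix."""
    order = np.argsort(-sim, axis=1)  # gallery indices, best first
    gt = np.arange(sim.shape[0])
    return np.array([int(np.where(order[i] == gt[i])[0][0]) + 1
                     for i in range(sim.shape[0])])

def recall_at_k(sim, k):
    """Fraction of queries whose true match appears in the top k results."""
    return float(np.mean(match_ranks(sim) <= k))

# Toy text-to-image similarity matrix (3 queries x 3 gallery items)
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.3],
                [0.6, 0.5, 0.4]])

ranks = match_ranks(sim)               # -> [1, 1, 3]
r1 = recall_at_k(sim, 1)               # -> 2/3: two matches rank first
median_rank = float(np.median(ranks))  # -> 1.0
```

"Mean Recall," as reported on several of these leaderboards, is typically the average of R@1, R@5, and R@10 over both retrieval directions, which this sketch would compute by calling `recall_at_k` on `sim` and on `sim.T`.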