SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items across different modalities, such as images, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To bridge this gap, most CMR methods learn a shared latent embedding space into which concepts from every modality are projected, so that cross-modal similarity can be measured with a single distance metric.
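A minimal sketch of the shared-embedding idea, using NumPy with random matrices standing in for learned projection weights (the feature dimensions, matrix names, and weights here are illustrative assumptions, not taken from any particular method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: 512-d image vectors, 300-d text vectors.
image_feats = rng.normal(size=(4, 512))
text_feats = rng.normal(size=(4, 300))

# Projections into a shared 128-d space (random stand-ins for trained weights).
W_img = rng.normal(size=(512, 128))
W_txt = rng.normal(size=(300, 128))

def embed(x, W):
    """Project features into the shared space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

img_emb = embed(image_feats, W_img)
txt_emb = embed(text_feats, W_txt)

# With unit-norm embeddings, the dot product is cosine similarity, so one
# distance metric compares items regardless of their source modality.
sim = txt_emb @ img_emb.T          # shape (4, 4): text queries x image gallery
best_image = sim.argmax(axis=1)    # retrieved image index for each text query
```

In a trained system the projections would be deep encoders optimized with a ranking or contrastive loss, but the retrieval step stays the same: embed, normalize, compare.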

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 401–450 of 522 papers

Title | Status | Hype
Generalized Semantic Preserving Hashing for N-Label Cross-Modal Retrieval | — | 0
Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond | — | 0
GleanVec: Accelerating vector search with minimalist nonlinear dimensionality reduction | — | 0
Global–Local Information Soft-Alignment for Cross-Modal Remote-Sensing Image–Text Retrieval | — | 0
GMM-Based Comprehensive Feature Extraction and Relative Distance Preservation For Few-Shot Cross-Modal Retrieval | — | 0
Bridging Information Asymmetry in Text-video Retrieval: A Data-centric Approach | — | 0
Graph Pattern Loss based Diversified Attention Network for Cross-Modal Retrieval | — | 0
HashGAN:Attention-aware Deep Adversarial Hashing for Cross Modal Retrieval | — | 0
Hindi as a Second Language: Improving Visually Grounded Speech with Semantically Similar Samples | — | 0
HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval | — | 0
Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks | — | 0
Improved Text-Image Matching by Mitigating Visual Semantic Hubs | — | 0
Improving Factuality of 3D Brain MRI Report Generation with Paired Image-domain Retrieval and Text-domain Augmentation | — | 0
Improving Sound Source Localization with Joint Slot Attention on Image and Audio | — | 0
Incorporating Dense Knowledge Alignment into Unified Multimodal Representation Models | — | 0
Inflate and Shrink:Enriching and Reducing Interactions for Fast Text-Image Retrieval | — | 0
Information-Theoretic Hashing for Zero-Shot Cross-Modal Retrieval | — | 0
Ink Marker Segmentation in Histopathology Images Using Deep Learning | — | 0
Instance-Variant Loss with Gaussian RBF Kernel for 3D Cross-modal Retriveal | — | 0
Integrating Information Theory and Adversarial Learning for Cross-modal Retrieval | — | 0
Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings | — | 0
Label Prediction Framework for Semi-Supervised Cross-Modal Retrieval | — | 0
Large Language Models for Captioning and Retrieving Remote Sensing Images | — | 0
Learning by Hallucinating: Vision-Language Pre-training with Weak Supervision | — | 0
Learning Concordant Attention via Target-aware Alignment for Visible-Infrared Person Re-identification | — | 0
Learning Discriminative Hashing Codes for Cross-Modal Retrieval based on Multi-view Features | — | 0
Learning Disentangled Latent Factors from Paired Data in Cross-Modal Retrieval: An Implicit Identifiable VAE Approach | — | 0
Learning Embodied Semantics via Music and Dance Semiotic Correlations | — | 0
Learning Joint Embedding with Modality Alignments for Cross-Modal Retrieval of Recipes and Food Images | — | 0
Learning Program Representations for Food Images and Cooking Recipes | — | 0
Learning Semantic Concepts and Order for Image and Sentence Matching | — | 0
Learning Similarity Preserving Binary Codes for Recommender Systems | — | 0
Learning Soft-Attention Models for Tempo-invariant Audio-Sheet Music Retrieval | — | 0
Learning Sparse Disentangled Representations for Multimodal Exclusion Retrieval | — | 0
Learning Structural Representations for Recipe Generation and Food Retrieval | — | 0
Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays | — | 0
Learning with Noisy Correspondence | — | 0
Leveraging Chemistry Foundation Models to Facilitate Structure Focused Retrieval Augmented Generation in Multi-Agent Workflows for Catalyst and Materials Design | — | 0
Lightweight Contrastive Distilled Hashing for Online Cross-modal Retrieval | — | 0
LILE: Look In-Depth before Looking Elsewhere -- A Dual Attention Network using Transformers for Cross-Modal Information Retrieval in Histopathology Archives | — | 0
Limitations in Employing Natural Language Supervision for Sensor-Based Human Activity Recognition -- And Ways to Overcome Them | — | 0
Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models | — | 0
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval | — | 0
Masked Contrastive Reconstruction for Cross-modal Medical Image-Report Retrieval | — | 0
MATE: Meet At The Embedding -- Connecting Images with Long Texts | — | 0
Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval | — | 0
Maximum Covariance Unfolding : Manifold Learning for Bimodal Data | — | 0
Maybe you are looking for CroQS: Cross-modal Query Suggestion for Text-to-Image Retrieval | — | 0
MCEN: Bridging Cross-Modal Gap between Cooking Recipes and Dish Images with Latent Variable Model | — | 0
MEDIAPI-SKEL - A 2D-Skeleton Video Database of French Sign Language With Aligned French Subtitles | — | 0
Page 9 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | — | Unverified
2 | VAST | Text-to-image R@1 | 68 | — | Unverified
3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | — | Unverified
4 | BEiT-3 | Text-to-image R@1 | 67.2 | — | Unverified
5 | XFM (base) | Text-to-image R@1 | 67 | — | Unverified
6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | — | Unverified
7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | — | Unverified
8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | — | Unverified
9 | VSE-Gradient | Text-to-image R@1 | 63.6 | — | Unverified
10 | X-VLM (base) | Text-to-image R@1 | 63.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | — | Unverified
2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | — | Unverified
3 | BEiT-3 | Image-to-text R@1 | 98 | — | Unverified
4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | — | Unverified
5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | — | Unverified
6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | — | Unverified
7 | X-VLM (base) | Image-to-text R@1 | 97.1 | — | Unverified
8 | VSE-Gradient | Image-to-text R@1 | 97 | — | Unverified
9 | ALIGN | Image-to-text R@1 | 95.3 | — | Unverified
10 | VAST | Text-to-image R@1 | 91 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | — | Unverified
2 | VLPCook | Image-to-text R@1 | 73.6 | — | Unverified
3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | — | Unverified
4 | T-Food | Image-to-text R@1 | 68.2 | — | Unverified
5 | X-MRS | Image-to-text R@1 | 64 | — | Unverified
6 | H-T | Image-to-text R@1 | 60 | — | Unverified
7 | SCAN | Image-to-text R@1 | 54 | — | Unverified
8 | ACME | Image-to-text R@1 | 51.8 | — | Unverified
9 | VLPCook | Image-to-text R@1 | 45.2 | — | Unverified
10 | AdaMine | Image-to-text R@1 | 39.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | — | Unverified
2 | GeoRSCLIP-FT | Mean Recall | 38.87 | — | Unverified
3 | GLISA | Mean Recall | 37.69 | — | Unverified
4 | RemoteCLIP | Mean Recall | 36.35 | — | Unverified
5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | — | Unverified
6 | PIR | Mean Recall | 24.46 | — | Unverified
7 | DOVE | Mean Recall | 22.72 | — | Unverified
8 | SWAN | Mean Recall | 20.61 | — | Unverified
9 | GaLR | Mean Recall | 18.96 | — | Unverified
10 | AMFMN | Mean Recall | 15.53 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | — | Unverified
2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | — | Unverified
3 | GLISA | Image-to-text R@1 | 32.08 | — | Unverified
4 | RemoteCLIP | Image-to-text R@1 | 28.76 | — | Unverified
5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | — | Unverified
6 | PIR | Image-to-text R@1 | 18.14 | — | Unverified
7 | DOVE | Image-to-text R@1 | 16.81 | — | Unverified
8 | GaLR | Image-to-text R@1 | 14.82 | — | Unverified
9 | SWAN | Image-to-text R@1 | 13.35 | — | Unverified
10 | AMFMN | Image-to-text R@1 | 10.63 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLASS (ORMA) | Hits@1 | 67.4 | — | Unverified
2 | ORMA | Hits@1 | 66.5 | — | Unverified
3 | Song et al. | Hits@1 | 56.5 | — | Unverified
4 | CLASS (AMAN) | Hits@1 | 51.1 | — | Unverified
5 | DSOKR | Hits@1 | 51 | — | Unverified
6 | AMAN | Hits@1 | 49.4 | — | Unverified
7 | All-Ensemble | Hits@1 | 34.4 | — | Unverified
8 | MLP1 | Hits@1 | 22.4 | — | Unverified
9 | GCN2 | Hits@1 | 22.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 81.9 | — | Unverified
2 | Dual-path CNN | Image-to-text R@1 | 41.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-18 | Median Rank | 565 | — | Unverified
2 | GeoCLAP | Median Rank | 159 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dual Path | Text-to-image Medr | 2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 56.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3SHNet | Image-to-text R@1 | 85.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Text-to-image R@1 | 43 | — | Unverified
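The R@1 and median-rank figures reported above can be computed from a query-candidate similarity matrix. A minimal sketch, assuming (as in standard paired benchmarks) that the ground-truth match for query i is candidate i; the function name and the toy matrix are illustrative:

```python
import numpy as np

def retrieval_metrics(sim):
    """Given sim[i, j] = similarity of query i to candidate j, with the
    ground-truth match for query i being candidate i, return
    (R@1 as a percentage, median rank of the true match; rank 1 = best)."""
    order = np.argsort(-sim, axis=1)  # candidates sorted best-first per query
    ranks = np.array([np.where(order[i] == i)[0][0] + 1
                      for i in range(sim.shape[0])])
    r_at_1 = 100.0 * np.mean(ranks == 1)
    med_r = float(np.median(ranks))
    return r_at_1, med_r

# Toy 3x3 similarity matrix: queries 0 and 2 rank their match first,
# query 1 ranks its match second, so R@1 = 66.7 and median rank = 1.
sim = np.array([[0.9, 0.1, 0.0],
                [0.8, 0.3, 0.1],
                [0.2, 0.1, 0.7]])
r1, medr = retrieval_metrics(sim)
```

Higher is better for R@1 (and for Mean Recall, typically an average of R@1/R@5/R@10 over both retrieval directions), while lower is better for median rank, which is why a Median Rank of 159 beats 565 in the table above.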