SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items in one modality using a query from another, across modalities such as image, text, video, and audio. The core challenge of CMR is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To bridge this gap, most CMR methods learn a shared latent embedding space into which concepts from every modality are projected, so that cross-modal similarity can be measured with a common distance metric.
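As a minimal sketch of this idea, the snippet below projects toy text and image features into a common space and ranks a gallery by cosine similarity. The random projection matrices stand in for learned encoders (in practice, deep networks trained on matching pairs); all names and dimensions here are illustrative assumptions, not any particular method's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: modality-specific feature sizes and the shared space.
D_IMG, D_TXT, D_SHARED = 512, 300, 128

# Random stand-ins for trained projection heads (purely illustrative).
W_img = rng.standard_normal((D_IMG, D_SHARED))
W_txt = rng.standard_normal((D_TXT, D_SHARED))

def embed(features, W):
    """Project modality-specific features into the shared space, L2-normalized."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One text query against a gallery of 100 images.
query_txt = embed(rng.standard_normal((1, D_TXT)), W_txt)
gallery_img = embed(rng.standard_normal((100, D_IMG)), W_img)

# With unit-norm embeddings, cosine similarity reduces to a dot product.
scores = (query_txt @ gallery_img.T).ravel()
ranking = np.argsort(-scores)  # indices of best-matching images first
print(ranking[:5])
```

Metrics like R@1 (used in the benchmark tables below) then ask whether the ground-truth match appears at the top of this ranking.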

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 401–450 of 522 papers

| Title | Status | Hype |
|---|---|---|
| Learning Joint Embedding for Cross-Modal Retrieval | | 0 |
| Test-time Adaptation for Cross-modal Retrieval with Query Shift | | 0 |
| Text-Adaptive Multiple Visual Prototype Matching for Video-Text Retrieval | | 0 |
| The 1st EReL@MIR Workshop on Efficient Representation Learning for Multimodal Information Retrieval | | 0 |
| TimbreCLIP: Connecting Timbre to Text and Images | | 0 |
| Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models | | 0 |
| Towards Deep Modeling of Music Semantics using EEG Regularizers | | 0 |
| Towards Efficient Cross-Modal Visual Textual Retrieval using Transformer-Encoder Deep Features | | 0 |
| Tri-Modal Motion Retrieval by Learning a Joint Embedding Space | | 0 |
| Triplet-Based Deep Hashing Network for Cross-Modal Retrieval | | 0 |
| TSVC: Tripartite Learning with Semantic Variation Consistency for Robust Image-Text Retrieval | | 0 |
| Two-Stage Triplet Loss Training with Curriculum Augmentation for Audio-Visual Retrieval | | 0 |
| Uncertainty-based Cross-Modal Retrieval with Probabilistic Representations | | 0 |
| Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models | | 0 |
| Uni3DL: Unified Model for 3D and Language Understanding | | 0 |
| Unsupervised Multi-modal Hashing for Cross-modal Retrieval | | 0 |
| Unsupervised Contrastive Hashing for Cross-Modal Retrieval in Remote Sensing | | 0 |
| Unsupervised Deep Cross-modality Spectral Hashing | | 0 |
| Unsupervised Generative Adversarial Cross-modal Hashing | | 0 |
| Using Multiple Instance Learning to Build Multimodal Representations | | 0 |
| Variational Autoencoder with CCA for Audio-Visual Cross-Modal Retrieval | | 0 |
| Video and Audio are Images: A Cross-Modal Mixer for Original Data on Video-Audio Retrieval | | 0 |
| ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval | | 0 |
| VLDeformer: Vision-Language Decomposed Transformer for Fast Cross-Modal Retrieval | | 0 |
| Wasserstein Coupled Graph Learning for Cross-Modal Retrieval | | 0 |
| Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching | | 0 |
| Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval | | 0 |
| What If We Recaption Billions of Web Images with LLaMA-3? | | 0 |
| WikiMuTe: A web-sourced dataset of semantic descriptions for music audio | | 0 |
| Wills Aligner: Multi-Subject Collaborative Brain Visual Decoding | | 0 |
| X2CT-CLIP: Enable Multi-Abnormality Detection in Computed Tomography from Chest Radiography via Tri-Modal Contrastive Learning | | 0 |
| X-TRA: Improving Chest X-ray Tasks with Cross-Modal Retrieval Augmentation | | 0 |
| Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences | | 0 |
| Zero-Shot Interactive Text-to-Image Retrieval via Diffusion-Augmented Representations | | 0 |
| Show, Translate and Tell | Code | 0 |
| Dynamic Adapter with Semantics Disentangling for Cross-lingual Cross-modal Retrieval | Code | 0 |
| Zero-shot sketch-based remote sensing image retrieval based on multi-level and attention-guided tokenization | Code | 0 |
| Dual-Path Convolutional Image-Text Embeddings with Instance Loss | Code | 0 |
| DocMMIR: A Framework for Document Multi-modal Information Retrieval | Code | 0 |
| Dissecting Deep Metric Learning Losses for Image-Text Retrieval | Code | 0 |
| DIME: An Online Tool for the Visual Comparison of Cross-Modal Retrieval Models | Code | 0 |
| Deep Visual-Semantic Alignments for Generating Image Descriptions | Code | 0 |
| NeighborRetr: Balancing Hub Centrality in Cross-Modal Retrieval | Code | 0 |
| SMOTExT: SMOTE meets Large Language Models | Code | 0 |
| NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings | Code | 0 |
| Unified Lexical Representation for Interpretable Visual-Language Alignment | Code | 0 |
| Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search | Code | 0 |
| Stacked Capsule Autoencoders | Code | 0 |
| Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-modal Retrieval | Code | 0 |
Page 9 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified |
| 2 | VAST | Text-to-image R@1 | 68 | | Unverified |
| 3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified |
| 4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified |
| 5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified |
| 6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified |
| 7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified |
| 8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified |
| 9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified |
| 10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified |
| 2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified |
| 3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified |
| 4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified |
| 5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified |
| 6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified |
| 7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified |
| 8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified |
| 9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified |
| 10 | VAST | Text-to-image R@1 | 91 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified |
| 2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified |
| 3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified |
| 4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified |
| 5 | X-MRS | Image-to-text R@1 | 64 | | Unverified |
| 6 | H-T | Image-to-text R@1 | 60 | | Unverified |
| 7 | SCAN | Image-to-text R@1 | 54 | | Unverified |
| 8 | ACME | Image-to-text R@1 | 51.8 | | Unverified |
| 9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified |
| 10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified |
| 2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified |
| 3 | GLISA | Mean Recall | 37.69 | | Unverified |
| 4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified |
| 6 | PIR | Mean Recall | 24.46 | | Unverified |
| 7 | DOVE | Mean Recall | 22.72 | | Unverified |
| 8 | SWAN | Mean Recall | 20.61 | | Unverified |
| 9 | GaLR | Mean Recall | 18.96 | | Unverified |
| 10 | AMFMN | Mean Recall | 15.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified |
| 2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified |
| 3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified |
| 4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified |
| 5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified |
| 6 | PIR | Image-to-text R@1 | 18.14 | | Unverified |
| 7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified |
| 8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified |
| 9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified |
| 10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified |
| 2 | ORMA | Hits@1 | 66.5 | | Unverified |
| 3 | Song et al. | Hits@1 | 56.5 | | Unverified |
| 4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified |
| 5 | DSOKR | Hits@1 | 51 | | Unverified |
| 6 | AMAN | Hits@1 | 49.4 | | Unverified |
| 7 | All-Ensemble | Hits@1 | 34.4 | | Unverified |
| 8 | MLP1 | Hits@1 | 22.4 | | Unverified |
| 9 | GCN2 | Hits@1 | 22.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified |
| 2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-18 | Median Rank | 565 | | Unverified |
| 2 | GeoCLAP | Median Rank | 159 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Dual Path | Text-to-image MedR | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NAPReg | Text-to-image R@1 | 43 | | Unverified |