SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving items across different modalities, such as image, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, which makes direct comparison difficult. To address this, most CMR methods learn a shared latent embedding space into which concepts from all modalities are projected, so that their similarity can be measured with a common distance metric.
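As an illustration of the shared-embedding idea (not any specific method listed below), the sketch here uses random linear projections as stand-ins for trained image and text encoders: both modalities are mapped into one space, L2-normalized, and compared by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned encoders: each modality has its own raw
# feature dimensionality, and a projection maps both into the same
# shared 64-d space. W_img / W_txt are random placeholders for
# trained weights.
D_IMG, D_TXT, D_SHARED = 512, 300, 64
W_img = rng.normal(size=(D_IMG, D_SHARED))
W_txt = rng.normal(size=(D_TXT, D_SHARED))

def embed(features, W):
    """Project raw features into the shared space and L2-normalize,
    so cosine similarity reduces to a plain dot product."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Five image/caption pairs; row i of each matrix is one item.
images = rng.normal(size=(5, D_IMG))
texts = rng.normal(size=(5, D_TXT))

z_img = embed(images, W_img)
z_txt = embed(texts, W_txt)

# Text-to-image retrieval: rank all gallery images by similarity
# to each query caption.
sims = z_txt @ z_img.T            # (5 queries) x (5 gallery items)
ranking = np.argsort(-sims, axis=1)
print(ranking[0])                 # gallery indices, best match first
```

In a real system the projections are trained (e.g., with a contrastive objective) so that matching pairs land close together; the retrieval step itself is unchanged.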

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 101–150 of 522 papers

Title | Status | Hype
Rethinking Benchmarks for Cross-modal Image-text Retrieval | Code | 1
Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval | Code | 1
Learning Modal-Invariant and Temporal-Memory for Video-based Visible-Infrared Person Re-Identification | Code | 1
Cross-Modal Fusion Distillation for Fine-Grained Sketch-Based Image Retrieval | Code | 1
More Photos are All You Need: Semi-Supervised Learning for Fine-Grained Sketch Based Image Retrieval | Code | 1
IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval | Code | 1
IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1
Integrating multi-label contrastive learning with dual adversarial graph neural networks for cross-modal retrieval | Code | 1
CaLa: Complementary Association Learning for Augmenting Composed Image Retrieval | Code | 1
Similarity Reasoning and Filtration for Image-Text Matching | Code | 1
Graph Structured Network for Image-Text Matching | Code | 1
Fusion and Orthogonal Projection for Improved Face-Voice Association | Code | 1
Cross-modal Retrieval for Knowledge-based Visual Question Answering | Code | 1
Cross-Modal Retrieval for Motion and Text via DopTriple Loss | Code | 1
M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework | Code | 1
Stacked Cross Attention for Image-Text Matching | Code | 1
Enhancing Recipe Retrieval with Foundation Models: A Data Augmentation Perspective | Code | 1
Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval | Code | 1
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1
Cross-Modal Retrieval with Partially Mismatched Pairs | Code | 1
Cross Modal Retrieval with Querybank Normalisation | Code | 1
Text-Based Person Search with Limited Data | Code | 1
Knowledge-enhanced Visual-Language Pretraining for Computational Pathology | Code | 1
Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning | Code | 1
UGNCL: Uncertainty-Guided Noisy Correspondence Learning for Efficient Cross-Modal Matching | Code | 1
UniVSE: Robust Visual Semantic Embeddings via Structured Semantic Representations | Code | 1
FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks | Code | 1
An Empirical Study of CLIP for Text-based Person Search | Code | 1
FashionBERT: Text and Image Matching with Adaptive Loss for Cross-modal Retrieval | Code | 1
Fine-grained Visual Textual Alignment for Cross-Modal Retrieval using Transformer Encoders | Code | 1
End-to-end Knowledge Retrieval with Multi-modal Queries | Code | 1
CLIP-KD: An Empirical Study of CLIP Model Distillation | Code | 1
Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models | Code | 1
FedCMR: Federated Cross-Modal Retrieval | Code | 1
An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1
Deep Evidential Learning with Noisy Correspondence for Cross-Modal Retrieval | Code | 1
Fuzzy Multimodal Learning for Trusted Cross-modal Retrieval | Code | 1
GAIA: A Global, Multi-modal, Multi-scale Vision-Language Dataset for Remote Sensing Image Analysis | Code | 1
A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval | Code | 1
Image-text Retrieval via Preserving Main Semantics of Vision | Code | 1
COBRA: Contrastive Bi-Modal Representation Algorithm | Code | 1
Improving Cross-Modal Retrieval with Set of Diverse Embeddings | Code | 1
Dynamic Modality Interaction Modeling for Image-Text Retrieval | Code | 1
Florence: A New Foundation Model for Computer Vision | Code | 1
CodeCMR: Cross-Modal Retrieval For Function-Level Binary Source Code Matching | Code | 1
Learning Cross-Modal Retrieval With Noisy Labels | Code | 1
Learning Relation Alignment for Calibrated Cross-modal Retrieval | Code | 1
Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1
Learning to Rematch Mismatched Pairs for Robust Cross-Modal Retrieval | Code | 1
Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts | Code | 1
Page 3 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | | Unverified
2 | VAST | Text-to-image R@1 | 68 | | Unverified
3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | | Unverified
4 | BEiT-3 | Text-to-image R@1 | 67.2 | | Unverified
5 | XFM (base) | Text-to-image R@1 | 67 | | Unverified
6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | | Unverified
7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | | Unverified
8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | | Unverified
9 | VSE-Gradient | Text-to-image R@1 | 63.6 | | Unverified
10 | X-VLM (base) | Text-to-image R@1 | 63.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | | Unverified
2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | | Unverified
3 | BEiT-3 | Image-to-text R@1 | 98 | | Unverified
4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | | Unverified
5 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | | Unverified
6 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | | Unverified
7 | X-VLM (base) | Image-to-text R@1 | 97.1 | | Unverified
8 | VSE-Gradient | Image-to-text R@1 | 97 | | Unverified
9 | ALIGN | Image-to-text R@1 | 95.3 | | Unverified
10 | VAST | Text-to-image R@1 | 91 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | | Unverified
2 | VLPCook | Image-to-text R@1 | 73.6 | | Unverified
3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | | Unverified
4 | T-Food | Image-to-text R@1 | 68.2 | | Unverified
5 | X-MRS | Image-to-text R@1 | 64 | | Unverified
6 | H-T | Image-to-text R@1 | 60 | | Unverified
7 | SCAN | Image-to-text R@1 | 54 | | Unverified
8 | ACME | Image-to-text R@1 | 51.8 | | Unverified
9 | VLPCook | Image-to-text R@1 | 45.2 | | Unverified
10 | AdaMine | Image-to-text R@1 | 39.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | | Unverified
2 | GeoRSCLIP-FT | Mean Recall | 38.87 | | Unverified
3 | GLISA | Mean Recall | 37.69 | | Unverified
4 | RemoteCLIP | Mean Recall | 36.35 | | Unverified
5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | | Unverified
6 | PIR | Mean Recall | 24.46 | | Unverified
7 | DOVE | Mean Recall | 22.72 | | Unverified
8 | SWAN | Mean Recall | 20.61 | | Unverified
9 | GaLR | Mean Recall | 18.96 | | Unverified
10 | AMFMN | Mean Recall | 15.53 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | | Unverified
2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | | Unverified
3 | GLISA | Image-to-text R@1 | 32.08 | | Unverified
4 | RemoteCLIP | Image-to-text R@1 | 28.76 | | Unverified
5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | | Unverified
6 | PIR | Image-to-text R@1 | 18.14 | | Unverified
7 | DOVE | Image-to-text R@1 | 16.81 | | Unverified
8 | GaLR | Image-to-text R@1 | 14.82 | | Unverified
9 | SWAN | Image-to-text R@1 | 13.35 | | Unverified
10 | AMFMN | Image-to-text R@1 | 10.63 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CLASS (ORMA) | Hits@1 | 67.4 | | Unverified
2 | ORMA | Hits@1 | 66.5 | | Unverified
3 | Song et al. | Hits@1 | 56.5 | | Unverified
4 | CLASS (AMAN) | Hits@1 | 51.1 | | Unverified
5 | DSOKR | Hits@1 | 51 | | Unverified
6 | AMAN | Hits@1 | 49.4 | | Unverified
7 | All-Ensemble | Hits@1 | 34.4 | | Unverified
8 | MLP1 | Hits@1 | 22.4 | | Unverified
9 | GCN2 | Hits@1 | 22.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 81.9 | | Unverified
2 | Dual-path CNN | Image-to-text R@1 | 41.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-18 | Median Rank | 565 | | Unverified
2 | GeoCLAP | Median Rank | 159 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Dual Path | Text-to-image MedR | 2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 56.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3SHNet | Image-to-text R@1 | 85.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Text-to-image R@1 | 43 | | Unverified
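The leaderboards above report R@1 (percentage of queries whose ground-truth item is ranked first), Mean Recall (commonly the average of R@1/R@5/R@10, often over both retrieval directions; the sketch below averages a single direction), Median Rank (MedR), and Hits@1 (equivalent to R@1). A minimal sketch of how these fall out of a query-by-gallery similarity matrix, assuming query i's ground truth is gallery item i:

```python
import numpy as np

def retrieval_metrics(sims):
    """Compute common CMR metrics from a (queries x gallery) similarity
    matrix, assuming the correct gallery item for query i is index i."""
    n = sims.shape[0]
    order = np.argsort(-sims, axis=1)           # gallery indices, best first
    # Rank of the ground-truth item for each query (1 = retrieved first).
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])

    def recall_at(k):
        return float(np.mean(ranks <= k) * 100)

    return {
        "R@1": recall_at(1),
        "R@5": recall_at(5),
        "R@10": recall_at(10),
        # Single-direction mean recall; leaderboards often average
        # both image-to-text and text-to-image.
        "Mean Recall": (recall_at(1) + recall_at(5) + recall_at(10)) / 3,
        "MedR": float(np.median(ranks)),
    }

# Identity similarity: every query's true match scores highest,
# so all recalls are 100.0 and MedR is 1.0.
print(retrieval_metrics(np.eye(4)))
```

Higher is better for the recall-style metrics; lower is better for MedR, which is why the Median Rank tables above rank 159 ahead of 565.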