SOTAVerified

Cross-Modal Retrieval

Cross-Modal Retrieval (CMR) is the task of retrieving relevant items in one modality given a query from another, across modalities such as image, text, video, and audio. Its core challenge is the heterogeneity gap: data from different modalities have distinct representations, so they cannot be compared directly. Most CMR methods therefore learn a shared latent embedding space into which concepts from every modality are projected, so that cross-modal similarity can be measured with an ordinary distance metric.
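As a minimal sketch (toy values, not taken from any of the papers below), the shared-space idea reduces retrieval to nearest-neighbour search: encode the query in one modality, encode the gallery in the other, and rank gallery items by a distance metric such as cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors in the shared embedding space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_emb, gallery_embs, k=2):
    """Rank gallery items by similarity to the query; return the top-k indices."""
    sims = [cosine_similarity(query_emb, g) for g in gallery_embs]
    return sorted(range(len(gallery_embs)), key=sims.__getitem__, reverse=True)[:k]

# Hypothetical 4-d embeddings standing in for real encoder outputs.
image_emb = [0.9, 0.1, 0.0, 0.4]          # query image
caption_embs = [
    [0.8, 0.2, 0.1, 0.5],                 # "a dog on a beach"
    [0.0, 0.9, 0.8, 0.1],                 # "a stock market chart"
    [0.7, 0.0, 0.2, 0.3],                 # "a puppy running"
]
print(retrieve(image_emb, caption_embs))  # -> [0, 2]
```

In practice the embeddings come from trained modality-specific encoders, and the ranking metric (cosine, dot product, learned similarity) is the "distance metric" the methods below refine; the search itself works the same way.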

Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

Papers

Showing 201–250 of 522 papers

Title | Status | Hype
Instance-Variant Loss with Gaussian RBF Kernel for 3D Cross-modal Retrieval | — | 0
Category-Oriented Representation Learning for Image to Multi-Modal Retrieval | — | 0
Deep Lifelong Cross-modal Hashing | — | 0
Sample-Specific Debiasing for Better Image-Text Models | — | 0
Rethinking Benchmarks for Cross-modal Image-text Retrieval | Code | 1
RoCOCO: Robustness Benchmark of MS-COCO to Stress-test Image-Text Matching Models | Code | 0
Image-text Retrieval via Preserving Main Semantics of Vision | Code | 1
VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset | Code | 2
CoVLR: Coordinating Cross-Modal Consistency and Intra-Modal Structure for Vision-Language Retrieval | — | 0
Noisy Correspondence Learning with Meta Similarity Correction | Code | 1
Exposing and Mitigating Spurious Correlations for Cross-Modal Retrieval | Code | 0
AToMiC: An Image/Text Retrieval Test Collection to Support Multimedia Content Creation | Code | 3
Hindi as a Second Language: Improving Visually Grounded Speech with Semantically Similar Samples | — | 0
MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0
Plug-and-Play Regulators for Image-Text Matching | Code | 1
MXM-CLR: A Unified Framework for Contrastive Learning of Multifold Cross-Modal Representations | Code | 0
Single-branch Network for Multimodal Training | Code | 1
Adversarial Modality Alignment Network for Cross-Modal Molecule Retrieval | Code | 0
Cross-modal Retrieval with Improved Graph Convolution | — | 0
FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks | Code | 1
Data leakage in cross-modal retrieval training: A case study | — | 0
Cross-Modal Retrieval with Partially Mismatched Pairs | Code | 1
X-TRA: Improving Chest X-ray Tasks with Cross-Modal Retrieval Augmentation | — | 0
VITR: Augmenting Vision Transformers with Relation-Focused Learning for Cross-Modal Information Retrieval | — | 0
Distribution Aligned Feature Clustering for Zero-Shot Sketch-Based Image Retrieval | — | 0
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks | Code | 0
Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study | Code | 0
Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images | — | 0
NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings | Code | 0
Learning Concordant Attention via Target-aware Alignment for Visible-Infrared Person Re-identification | — | 0
Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks | — | 0
Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1
RONO: Robust Discriminative Learning With Noisy Labels for 2D-3D Cross-Modal Retrieval | Code | 1
BagFormer: Better Cross-Modal Retrieval via bag-wise interaction | — | 0
Position-guided Text Prompt for Vision-Language Pre-training | Code | 1
Retrieval-based Disentangled Representation Learning with Natural Language Supervision | — | 0
Scale-Semantic Joint Decoupling Network for Image-text Retrieval in Remote Sensing | — | 0
Using Multiple Instance Learning to Build Multimodal Representations | — | 0
Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval | Code | 1
A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval | Code | 1
Semantic-Conditional Diffusion Networks for Image Captioning | Code | 2
Normalized Contrastive Learning for Text-Video Retrieval | Code | 1
Improving Cross-Modal Retrieval with Set of Diverse Embeddings | Code | 1
VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval | Code | 1
X^2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks | Code | 2
TimbreCLIP: Connecting Timbre to Text and Images | — | 0
Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention | Code | 1
AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities | Code | 4
Complete Cross-triplet Loss in Label Space for Audio-visual Cross-modal Retrieval | — | 0
3D Shape Knowledge Graph for Cross-domain 3D Shape Retrieval | — | 0
Page 5 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MaMMUT (ours) | Image-to-text R@1 | 70.7 | — | Unverified
2 | VAST | Text-to-image R@1 | 68 | — | Unverified
3 | X2-VLM (large) | Text-to-image R@1 | 67.7 | — | Unverified
4 | BEiT-3 | Text-to-image R@1 | 67.2 | — | Unverified
5 | XFM (base) | Text-to-image R@1 | 67 | — | Unverified
6 | X2-VLM (base) | Text-to-image R@1 | 66.2 | — | Unverified
7 | PTP-BLIP (14M) | Text-to-image R@1 | 64.9 | — | Unverified
8 | OmniVL (14M) | Text-to-image R@1 | 64.8 | — | Unverified
9 | VSE-Gradient | Text-to-image R@1 | 63.6 | — | Unverified
10 | X-VLM (base) | Text-to-image R@1 | 63.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | X2-VLM (large) | Image-to-text R@1 | 98.8 | — | Unverified
2 | X2-VLM (base) | Image-to-text R@1 | 98.5 | — | Unverified
3 | BEiT-3 | Image-to-text R@1 | 98 | — | Unverified
4 | OmniVL (14M) | Image-to-text R@1 | 97.3 | — | Unverified
5 | ERNIE-ViL 2.0 | Image-to-text R@1 | 97.2 | — | Unverified
6 | Aurora (ours, r=128) | Image-to-text R@1 | 97.2 | — | Unverified
7 | X-VLM (base) | Image-to-text R@1 | 97.1 | — | Unverified
8 | VSE-Gradient | Image-to-text R@1 | 97 | — | Unverified
9 | ALIGN | Image-to-text R@1 | 95.3 | — | Unverified
10 | VAST | Text-to-image R@1 | 91 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VLPCook (R1M+) | Image-to-text R@1 | 74.9 | — | Unverified
2 | VLPCook | Image-to-text R@1 | 73.6 | — | Unverified
3 | T-Food (CLIP) | Image-to-text R@1 | 72.3 | — | Unverified
4 | T-Food | Image-to-text R@1 | 68.2 | — | Unverified
5 | X-MRS | Image-to-text R@1 | 64 | — | Unverified
6 | H-T | Image-to-text R@1 | 60 | — | Unverified
7 | SCAN | Image-to-text R@1 | 54 | — | Unverified
8 | ACME | Image-to-text R@1 | 51.8 | — | Unverified
9 | VLPCook | Image-to-text R@1 | 45.2 | — | Unverified
10 | AdaMine | Image-to-text R@1 | 39.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Mean Recall | 38.95 | — | Unverified
2 | GeoRSCLIP-FT | Mean Recall | 38.87 | — | Unverified
3 | GLISA | Mean Recall | 37.69 | — | Unverified
4 | RemoteCLIP | Mean Recall | 36.35 | — | Unverified
5 | PE-RSITR (MRS-Adapter) | Mean Recall | 31.12 | — | Unverified
6 | PIR | Mean Recall | 24.46 | — | Unverified
7 | DOVE | Mean Recall | 22.72 | — | Unverified
8 | SWAN | Mean Recall | 20.61 | — | Unverified
9 | GaLR | Mean Recall | 18.96 | — | Unverified
10 | AMFMN | Mean Recall | 15.53 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HarMA (w/ GeoRSCLIP) | Image-to-text R@1 | 32.74 | — | Unverified
2 | GeoRSCLIP-FT | Image-to-text R@1 | 32.3 | — | Unverified
3 | GLISA | Image-to-text R@1 | 32.08 | — | Unverified
4 | RemoteCLIP | Image-to-text R@1 | 28.76 | — | Unverified
5 | PE-RSITR (MRS-Adapter) | Image-to-text R@1 | 23.67 | — | Unverified
6 | PIR | Image-to-text R@1 | 18.14 | — | Unverified
7 | DOVE | Image-to-text R@1 | 16.81 | — | Unverified
8 | GaLR | Image-to-text R@1 | 14.82 | — | Unverified
9 | SWAN | Image-to-text R@1 | 13.35 | — | Unverified
10 | AMFMN | Image-to-text R@1 | 10.63 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLASS (ORMA) | Hits@1 | 67.4 | — | Unverified
2 | ORMA | Hits@1 | 66.5 | — | Unverified
3 | Song et al. | Hits@1 | 56.5 | — | Unverified
4 | CLASS (AMAN) | Hits@1 | 51.1 | — | Unverified
5 | DSOKR | Hits@1 | 51 | — | Unverified
6 | AMAN | Hits@1 | 49.4 | — | Unverified
7 | All-Ensemble | Hits@1 | 34.4 | — | Unverified
8 | MLP1 | Hits@1 | 22.4 | — | Unverified
9 | GCN2 | Hits@1 | 22.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 81.9 | — | Unverified
2 | Dual-path CNN | Image-to-text R@1 | 41.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-18 | Median Rank | 565 | — | Unverified
2 | GeoCLAP | Median Rank | 159 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Dual Path | Text-to-image MedR | 2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Image-to-text R@1 | 56.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3SHNet | Image-to-text R@1 | 85.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | NAPReg | Text-to-image R@1 | 43 | — | Unverified