SOTAVerified

Cross-Modal Information Retrieval

Cross-Modal Information Retrieval (CMIR) is the task of finding relevant items across different modalities: for example, given a query image, retrieving relevant text descriptions, or vice versa. The main challenge in CMIR is the heterogeneity gap: because items from different modalities have different data types, similarity between them cannot be measured directly. Most CMIR methods published to date therefore attempt to bridge this gap by learning a shared latent representation space in which the similarity between items from different modalities can be measured.

Source: Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study
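The shared-latent-space idea above can be sketched in a few lines: each modality gets its own encoder that projects modality-specific features into a common space, where a simple dot product ranks cross-modal matches. This is a minimal illustration, not any specific paper's method; the feature dimensions and the random stand-in projection matrices (`W_img`, `W_txt`, which a real system would learn with, e.g., a contrastive or ranking loss) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: 4 images (512-d) and 4 texts (300-d).
image_feats = rng.normal(size=(4, 512))
text_feats = rng.normal(size=(4, 300))

# In a trained model these projections are learned; random matrices stand in here.
W_img = rng.normal(size=(512, 128))
W_txt = rng.normal(size=(300, 128))

def embed(x, W):
    """Project features into the shared latent space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

img_emb = embed(image_feats, W_img)
txt_emb = embed(text_feats, W_txt)

# With both modalities in the same space, similarity is a dot product
# (cosine similarity, since rows are unit-norm): row i scores every
# text against image i.
sim = img_emb @ txt_emb.T
ranked = np.argsort(-sim, axis=1)  # best-matching text index first per image
```

Whatever the encoder architecture (GANs, graph convolutions, etc., as in the papers listed below), retrieval at query time reduces to this pattern: embed the query, then rank the other modality's items by similarity in the shared space.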

Papers

Showing 11–16 of 16 papers

Title | Status | Hype
ZSCRGAN: A GAN-based Expectation Maximization Model for Zero-Shot Retrieval of Images from Textual Descriptions | Code | 0
Cross-modal representation alignment of molecular structure and perturbation-induced transcriptional profiles | Code | 0
CMIR-NET : A Deep Learning Based Model For Cross-Modal Retrieval In Remote Sensing | Code | 0
Scene Graph Reasoning with Prior Visual Relationship for Visual Question Answering | — | 0
Modeling Text with Graph Convolutional Network for Cross-Modal Information Retrieval | — | 0
Picture It In Your Mind: Generating High Level Visual Representations From Textual Descriptions | Code | 0
Page 2 of 2

No leaderboard results yet.