Lightweight Cross-Modal Representation Learning

2024-03-07 · Code Available

Bilal Faye, Hanane Azzag, Mustapha Lebbah, Djamel Bouchaffra


Abstract

Low-cost cross-modal representation learning is crucial for deriving semantic representations across diverse modalities such as text, audio, images, and video. Traditional approaches typically depend on large specialized models trained from scratch, requiring extensive datasets and incurring high resource and time costs. To overcome these challenges, we introduce a novel approach named Lightweight Cross-Modal Representation Learning (LightCRL). This method uses a single neural network, the Deep Fusion Encoder (DFE), which projects data from multiple modalities into a shared latent representation space. This reduces the overall parameter count while still delivering robust performance comparable to more complex systems.
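The core idea described in the abstract — one shared encoder serving every modality, instead of one large model per modality — can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the adapter sizes, dimensions, and function names (`encode`, `adapters`, `W_shared`) are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch: small per-modality adapters map each input to a
# common width, then a single shared encoder (the only large component)
# projects everything into one latent space. All sizes are made up.
rng = np.random.default_rng(0)

COMMON_DIM, LATENT_DIM = 64, 32

# Lightweight per-modality adapters (the only modality-specific parameters).
adapters = {
    "text":  rng.standard_normal((300, COMMON_DIM)) * 0.01,
    "image": rng.standard_normal((512, COMMON_DIM)) * 0.01,
    "audio": rng.standard_normal((128, COMMON_DIM)) * 0.01,
}

# One shared encoder reused across all modalities.
W_shared = rng.standard_normal((COMMON_DIM, LATENT_DIM)) * 0.01

def encode(modality: str, x: np.ndarray) -> np.ndarray:
    """Project a feature vector from any modality into the shared latent space."""
    h = np.maximum(x @ adapters[modality], 0.0)  # adapter + ReLU
    z = h @ W_shared                             # shared projection
    return z / np.linalg.norm(z)                 # unit-normalize for comparison

z_text = encode("text", rng.standard_normal(300))
z_img = encode("image", rng.standard_normal(512))
```

Because both `z_text` and `z_img` land in the same 32-dimensional space, cross-modal similarity is a simple dot product, and the parameter count grows only by a small adapter per new modality rather than by a full model.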
