SOTAVerified

Cross-Lingual Representation Alignment Through Contrastive Image-Caption Tuning

2025-05-19

Nathaniel Krasner, Nicholas Lanuzo, Antonios Anastasopoulos

Abstract

Multilingual alignment of sentence representations has mostly required bitexts to bridge the gap between languages. We investigate whether visual information can bridge this gap instead. Image-caption datasets are easy to create without multilingual expertise, offering a more efficient alternative for low-resource languages. We find that (1) multilingual image-caption alignment can implicitly align the text representations between languages, (2) languages unseen by the encoder in pretraining can be incorporated into this alignment post-hoc, and (3) these aligned representations are usable for cross-lingual Natural Language Understanding (NLU) and bitext retrieval.
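The contrastive image-caption alignment the abstract refers to is typically a CLIP-style symmetric InfoNCE objective: matched image-caption pairs are pulled together in the shared embedding space while mismatched pairs in the batch are pushed apart. The sketch below is an illustrative NumPy implementation of that generic loss (the function name, temperature value, and use of NumPy are assumptions for illustration, not the paper's actual training code):

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE (CLIP-style) contrastive loss between a batch of
    image embeddings and their caption embeddings.

    Matched pairs sit on the diagonal of the cosine-similarity matrix;
    the loss treats each row (and column) as a classification problem
    whose correct class is that diagonal entry.

    NOTE: illustrative sketch only -- not the paper's implementation.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # shape: (batch, batch)
    labels = np.arange(logits.shape[0])         # diagonal = matched pair

    def xent(l):
        # numerically stable log-softmax cross-entropy on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2
```

Because captions in different languages describing the same image are pulled toward the same image embedding, the text representations of those languages are implicitly pulled toward each other, which is the alignment effect the paper studies.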
