
xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation

2025-03-12

Elio Musacchio, Lucia Siciliani, Pierpaolo Basile, Giovanni Semeraro

Abstract

In the current literature, most embedding models are based on the encoder-only Transformer architecture and extract a dense, meaningful representation of a given input, which can be text, an image, or other modalities. With recent advances in language modeling driven by the introduction of Large Language Models, the possibility of extracting embeddings from these large, extensively trained models has been explored. However, current studies focus on textual embeddings in English, which is also the main language on which these models have been trained. Furthermore, very few models handle input that is both multimodal and multilingual. In light of this, we propose an adaptation methodology for Large Vision-Language Models trained on English data to improve their performance in extracting multilingual and multimodal embeddings. Finally, we design and introduce a benchmark to evaluate the effectiveness of multilingual and multimodal embedding models.
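The self-knowledge distillation named in the title can be sketched as follows. This is an illustrative assumption, not the paper's exact loss: the same model plays both roles, producing a "teacher" embedding from the English input and a "student" embedding from the translated input, and training pulls the student's embedding toward the teacher's. The function names and toy vectors below are hypothetical.

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def self_distillation_loss(teacher_emb: np.ndarray, student_emb: np.ndarray) -> float:
    """Mean-squared distance between the normalized teacher embedding
    (English input) and student embedding (translated input).
    Zero when the two representations coincide."""
    t = l2_normalize(teacher_emb)
    s = l2_normalize(student_emb)
    return float(np.mean((t - s) ** 2))

# Toy vectors standing in for LVLM embedding outputs (hypothetical values).
teacher = np.array([0.2, -1.3, 0.7, 0.1])   # embedding of an English caption
student = np.array([0.25, -1.1, 0.8, 0.0])  # embedding of its translation
loss = self_distillation_loss(teacher, student)
print(loss)
```

Minimizing such a loss over translated pairs would align the multilingual embedding space with the model's original English one, without requiring a separate teacher model.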
