On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning

2019-08-21 · LREC 2020

Yerai Doval, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert

Abstract

Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language. Recent developments which construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision. However, the focus has been on a particular controlled scenario for evaluation, and there is no strong evidence on how current state-of-the-art systems would fare with noisy text or for language pairs with major linguistic differences. In this paper we present an extensive evaluation over multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions put in doubt the view that high-quality cross-lingual embeddings can always be learned without much supervision.
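The alignment-based approaches evaluated in the paper typically learn a linear (often orthogonal) mapping between two pre-trained monolingual spaces from a seed dictionary. A common baseline for this step is orthogonal Procrustes; the sketch below illustrates the idea (it is not the authors' exact method, and the variable names are illustrative):

```python
import numpy as np

def procrustes_align(X, Y):
    """Solve min ||X W - Y||_F over orthogonal W.

    X: (n, d) source-language vectors for seed dictionary pairs.
    Y: (n, d) target-language vectors for the same pairs.
    Closed-form solution: W = U V^T, where U S V^T = svd(X^T Y).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Synthetic check: recover a known rotation from noise-free pairs.
rng = np.random.default_rng(0)
W_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal map
X = rng.normal(size=(100, 5))
Y = X @ W_true
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y))  # True: the rotation is recovered exactly
```

With real embeddings and a noisy or induced dictionary, the recovery is only approximate; the paper's point is precisely that this approximation degrades under noisy corpora and distant language pairs.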
