
Cross-lingual Language Model Pretraining

2019-01-22 · NeurIPS 2019 · Code Available

Guillaume Lample, Alexis Conneau


Abstract

Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
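As a rough illustration of the two objectives the abstract mentions, the sketch below shows the masking step for masked language modeling (MLM) on monolingual text and for translation language modeling (TLM), which concatenates a parallel sentence pair so the model can use the translation as additional context when predicting a masked token. This is a simplified sketch, not the authors' implementation: the mask probability, the special symbols, and the plain-Python token format are illustrative assumptions.

```python
# Minimal sketch of the masking behind the MLM and TLM objectives.
# Not the authors' code; mask_prob and special tokens are assumptions.
import random

MASK, PAD = "[MASK]", "[PAD]"

def mask_tokens(tokens, mask_prob=0.15):
    """Randomly replace tokens with [MASK]; return corrupted input and targets."""
    corrupted, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            corrupted.append(MASK)
            targets.append(tok)   # token the model must recover
        else:
            corrupted.append(tok)
            targets.append(PAD)   # position ignored by the loss
    return corrupted, targets

def tlm_example(src_tokens, tgt_tokens):
    """TLM: concatenate an aligned sentence pair and mask words in both languages."""
    return mask_tokens(src_tokens + tgt_tokens)

# MLM uses a single monolingual sentence:
print(mask_tokens("the cat sat on the mat".split()))
# TLM uses a parallel English-French pair:
print(tlm_example("the cat sat".split(), "le chat était assis".split()))
```

In the TLM case, a masked English word can be predicted from the surrounding English context or from its French translation, which is what encourages the aligned cross-lingual representations the paper reports.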

Tasks

Benchmark Results

Dataset                  | Model           | Metric     | Claimed | Verified | Status
WMT2016 Romanian-English | MLM pretraining | BLEU score | 35.3    |          | Unverified
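If a reproduction were attempted, the claimed BLEU above could be checked with a standard scorer such as sacrebleu, as in the sketch below. The paper's own evaluation setup (tokenization and BLEU script) may differ, and the hypothesis and reference file names here are hypothetical.

```python
# Hedged sketch: score detokenized model translations against the WMT'16
# Romanian-English references with sacrebleu and compare with the claimed 35.3.
import sacrebleu

# Hypothetical files: one sentence per line, aligned with the test set.
with open("hypotheses.roen.en") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("newstest2016.roen.ref.en") as f:
    references = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```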

Reproductions