SOTAVerified

Bridging the domain gap in cross-lingual document classification

2019-09-16 · Code Available

Guokun Lai, Barlas Oguz, Yiming Yang, Veselin Stoyanov



Abstract

The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. Recent developments in cross-lingual understanding (XLU) have made progress in this area, attempting to bridge the language barrier using language-universal representations. However, even if the language problem were resolved, models trained in one language would not transfer perfectly to another due to the natural domain drift across languages and cultures. We consider the setting of semi-supervised cross-lingual understanding, where labeled data is available in a source language (English), but only unlabeled data is available in the target language. We combine state-of-the-art cross-lingual methods with recently proposed methods for weakly supervised learning, such as unsupervised pre-training and unsupervised data augmentation, to simultaneously close both the language gap and the domain gap in XLU. We show that addressing the domain gap is crucial. We improve over strong baselines and achieve a new state-of-the-art for cross-lingual document classification.
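The approach described combines a supervised loss on labeled English data with an unsupervised consistency loss on unlabeled target-language text, in the spirit of unsupervised data augmentation (UDA): the model is encouraged to predict the same distribution for an unlabeled example and its augmented version. Below is a minimal numeric sketch of that combined objective in plain Python; it is not the authors' implementation, and the function names and the weighting parameter `lam` are illustrative.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # Standard supervised cross-entropy for a single example.
    return -math.log(softmax(logits)[label])

def kl_divergence(p, q):
    # KL(p || q) between two probability distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def uda_loss(sup_logits, sup_labels, unsup_logits_orig, unsup_logits_aug, lam=1.0):
    """Supervised loss on the labeled (source-language) batch plus
    lam-weighted consistency loss on the unlabeled (target-language) batch."""
    sup = sum(cross_entropy(l, y)
              for l, y in zip(sup_logits, sup_labels)) / len(sup_labels)
    # Consistency term: predictions on the original unlabeled text should
    # match predictions on its augmented (e.g. back-translated) version.
    cons = sum(kl_divergence(softmax(o), softmax(a))
               for o, a in zip(unsup_logits_orig, unsup_logits_aug)) / len(unsup_logits_orig)
    return sup + lam * cons
```

When the model already predicts identically on original and augmented text, the consistency term vanishes and the objective reduces to the supervised loss; divergent predictions on unlabeled target-language text are penalized, which is how the method attacks the domain gap without target labels.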

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MLDoc Zero-Shot English-to-Chinese | XLMft UDA | Accuracy | 93.32 | | Unverified |
| MLDoc Zero-Shot English-to-French | XLMft UDA | Accuracy | 96.05 | | Unverified |
| MLDoc Zero-Shot English-to-German | XLMft UDA | Accuracy | 96.95 | | Unverified |
| MLDoc Zero-Shot English-to-Russian | XLMft UDA | Accuracy | 89.7 | | Unverified |
| MLDoc Zero-Shot English-to-Spanish | XLMft UDA | Accuracy | 96.8 | | Unverified |

Reproductions