MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
Julian Martin Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, Jeremy Howard
Abstract
Pretrained language models are particularly promising for low-resource languages, as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets, where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.
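The zero-shot method mentioned in the abstract bootstraps a monolingual model from a cross-lingual model's predictions: a classifier trained only on labeled source-language (e.g. English) data pseudo-labels unlabeled target-language documents, and the target-language model is then fine-tuned on those pseudo-labels. The sketch below illustrates that pipeline under stated assumptions; the `Classifier` and `FineTunableLM` interfaces and all function names are placeholders for illustration, not the authors' API. (In the paper itself, the cross-lingual teacher is a LASER-based classifier and the student is a monolingual MultiFiT model.)

```python
from typing import Protocol, Sequence


class Classifier(Protocol):
    """Placeholder interface for any trained text classifier."""

    def predict(self, texts: Sequence[str]) -> list[int]: ...


class FineTunableLM(Protocol):
    """Placeholder interface for a pretrained monolingual language model
    that can be fine-tuned into a classifier."""

    def fit_classifier(
        self, texts: Sequence[str], labels: Sequence[int]
    ) -> Classifier: ...


def zero_shot_multifit(
    cross_lingual_clf: Classifier,
    target_lm: FineTunableLM,
    unlabeled_target_docs: Sequence[str],
) -> Classifier:
    """Sketch of the zero-shot pseudo-labeling pipeline from the abstract."""
    # Step 1: the cross-lingual teacher, trained only on labeled
    # source-language data, predicts labels for unlabeled target-language
    # documents (pseudo-labels).
    pseudo_labels = cross_lingual_clf.predict(unlabeled_target_docs)

    # Step 2: the pretrained target-language LM is fine-tuned on the
    # pseudo-labeled documents, exactly as in ordinary supervised
    # fine-tuning; no target-language gold labels are ever used.
    return target_lm.fit_classifier(unlabeled_target_docs, pseudo_labels)
```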
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MLDoc Zero-Shot English-to-Chinese | MultiFiT (pseudo-labels) | Accuracy (%) | 82.48 | — | Unverified |
| MLDoc Zero-Shot English-to-French | MultiFiT (pseudo-labels) | Accuracy (%) | 89.42 | — | Unverified |
| MLDoc Zero-Shot English-to-German | MultiFiT (pseudo-labels) | Accuracy (%) | 91.62 | — | Unverified |
| MLDoc Zero-Shot English-to-Italian | MultiFiT (pseudo-labels) | Accuracy (%) | 76.02 | — | Unverified |
| MLDoc Zero-Shot English-to-Japanese | MultiFiT (pseudo-labels) | Accuracy (%) | 69.57 | — | Unverified |
| MLDoc Zero-Shot English-to-Russian | MultiFiT (pseudo-labels) | Accuracy (%) | 67.83 | — | Unverified |
| MLDoc Zero-Shot English-to-Spanish | MultiFiT (pseudo-labels) | Accuracy (%) | 79.10 | — | Unverified |
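Every row above is still unverified. Verifying a claimed number amounts to scoring a trained classifier's predictions against the gold labels of the corresponding MLDoc test split. Below is a minimal sketch of that scoring step; the tab-separated `label<TAB>text` file layout is an assumption about the MLDoc release, so check the actual data format before use.

```python
def load_mldoc(path: str) -> tuple[list[str], list[str]]:
    """Load an MLDoc split, assuming one tab-separated "label<TAB>text"
    record per line (an assumption, not confirmed by this page)."""
    labels, texts = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            label, text = line.rstrip("\n").split("\t", 1)
            labels.append(label)
            texts.append(text)
    return labels, texts


def accuracy(gold: list[str], pred: list[str]) -> float:
    """Percentage of predictions that match the gold labels, matching the
    scale of the Claimed column above."""
    assert len(gold) == len(pred), "prediction/label count mismatch"
    return 100.0 * sum(g == p for g, p in zip(gold, pred)) / len(gold)
```

A verified accuracy within a small tolerance of the claimed value (differences can arise from tokenization, seeds, or preprocessing) would move a row from Unverified to Verified.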