mT5: A massively multilingual pre-trained text-to-text transformer
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel
Code
- github.com/google-research/multilingual-t5 (official, TensorFlow) ★ 1,297
- github.com/huggingface/transformers (PyTorch) ★ 158,292
- github.com/google-research/byt5 (TensorFlow) ★ 539
- github.com/MorenoLaQuatra/bart-it (PyTorch) ★ 16
- github.com/manshri/tesum (PyTorch) ★ 1
- github.com/pwc-1/Paper-5/tree/main/mt5 (MindSpore) ★ 0
- github.com/KoshiroSato/Simple_Transformers_mT5_finetuning ★ 0
- github.com/KoshiroSato/Flask_NLP_App ★ 0
Abstract
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.
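The "unified text-to-text format" mentioned above refers to T5's span-corruption pre-training objective, which mT5 inherits: random token spans are replaced with sentinel tokens, and the model learns to generate the masked spans. Below is a minimal illustrative sketch, not the actual T5/mT5 implementation; the `span_corrupt` helper and the hand-picked spans are hypothetical, and real pre-training samples spans randomly over SentencePiece tokens rather than words.

```python
SENTINEL = "<extra_id_{}>"  # T5/mT5-style sentinel token format


def span_corrupt(tokens, spans):
    """Replace the given (start, length) spans with sentinel tokens and
    build the matching target sequence, T5-style. `spans` must be sorted
    and non-overlapping; random span sampling is omitted for clarity."""
    inputs, targets = [], []
    pos, sid = 0, 0
    for start, length in spans:
        inputs.extend(tokens[pos:start])          # keep unmasked prefix
        inputs.append(SENTINEL.format(sid))       # sentinel marks the gap
        targets.append(SENTINEL.format(sid))      # target echoes the sentinel
        targets.extend(tokens[start:start + length])  # ...then the masked span
        pos = start + length
        sid += 1
    inputs.extend(tokens[pos:])                   # trailing unmasked tokens
    targets.append(SENTINEL.format(sid))          # final sentinel ends the target
    return inputs, targets


toks = "Thank you for inviting me to your party last week".split()
inp, tgt = span_corrupt(toks, [(2, 2), (8, 1)])
# inp: Thank you <extra_id_0> me to your party <extra_id_1> week
# tgt: <extra_id_0> for inviting <extra_id_1> last <extra_id_2>
```

The encoder reads `inp` and the decoder is trained to emit `tgt`; mT5 applies the same objective to text drawn from 101 languages in the mC4 corpus.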
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| PARus | mT5-Large | Accuracy | 0.5 | — | Unverified |
| RuCoS | mT5-Large | Average F1 | 0.57 | — | Unverified |
| RWSD | mT5-Large | Accuracy | 0.67 | — | Unverified |