Very Deep Transformers for Neural Machine Translation

2020-08-18 · Code Available

Xiaodong Liu, Kevin Duh, Liyuan Liu, Jianfeng Gao


Abstract

We explore the application of very deep Transformer models for Neural Machine Translation (NMT). Using a simple yet effective initialization technique that stabilizes training, we show that it is feasible to build standard Transformer-based models with up to 60 encoder layers and 12 decoder layers. These deep models outperform their baseline 6-layer counterparts by as much as 2.5 BLEU, and achieve new state-of-the-art benchmark results on WMT14 English-French (43.8 BLEU and 46.4 BLEU with back-translation) and WMT14 English-German (30.1 BLEU). The code and trained models will be publicly available at: https://github.com/namisan/exdeep-nmt.
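The initialization the abstract refers to is ADMIN (adaptive model initialization), which rescales each residual connection before layer normalization so that no single layer's output dominates the variance as depth grows. Below is a minimal PyTorch sketch of that idea, not the released implementation: the names (AdminResidual, profile_init) and the exact variance bookkeeping are illustrative assumptions; see the linked repository for the actual recipe.

```python
import torch
import torch.nn as nn


class AdminResidual(nn.Module):
    """Post-LN residual connection with a per-channel scale omega on the
    skip path:  out = LayerNorm(omega * x + f(x)).  Choosing omega so that
    every layer contributes a comparable share of the output variance is
    what keeps very deep stacks stable at initialization."""

    def __init__(self, d_model: int):
        super().__init__()
        self.omega = nn.Parameter(torch.ones(d_model))  # profiled, then trained
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, sublayer_out: torch.Tensor) -> torch.Tensor:
        return self.norm(self.omega * x + sublayer_out)


@torch.no_grad()
def profile_init(residuals, branch_variances, input_variance=1.0):
    """Profiling stage (illustrative): run one forward pass with omega = 1,
    record the output variance of each sub-layer branch f_i, then set
    omega_i = sqrt(variance accumulated by all earlier layers)."""
    acc = input_variance  # assumed variance of the embedding output
    for res, var in zip(residuals, branch_variances):
        res.omega.fill_(acc ** 0.5)
        acc += var
```

Per the abstract, this kind of stabilization is what makes the deepest configuration (60 encoder layers, 12 decoder layers) trainable with an otherwise standard Transformer.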

Tasks

Machine Translation

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
WMT2014 English-French | Transformer+BT (ADMIN init) | BLEU score | 46.4 | - | Unverified
WMT2014 English-French | Transformer (ADMIN init) | BLEU score | 43.8 | - | Unverified
WMT2014 English-German | Transformer (ADMIN init) | BLEU score | 30.1 | - | Unverified
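All three results are still unverified, so a reproduction needs to regenerate translations and score them. A minimal scoring sketch using sacrebleu is shown below; the file names are placeholders, and the claimed numbers may have been computed with tokenized BLEU (multi-bleu.perl style), so matching them exactly may require the evaluation script from the linked repository.

```python
import sacrebleu

# Placeholder file names: detokenized system output and reference,
# one sentence per line, in matching order.
with open("wmt14.en-fr.hyp.txt", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("wmt14.en-fr.ref.txt", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

# corpus_bleu takes the system stream and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")  # compare against the claimed 43.8 / 46.4 / 30.1
```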

Reproductions

No reproductions have been submitted yet.