
Glancing Transformer for Non-Autoregressive Neural Machine Translation

2020-08-18 · ACL 2021 · Code Available

Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei Li

Abstract

Recent work on non-autoregressive neural machine translation (NAT) aims at improving efficiency through parallel decoding without sacrificing quality. However, existing NAT methods are either inferior to Transformer or require multiple decoding passes, leading to reduced speedup. We propose the Glancing Language Model (GLM), a method to learn word interdependency for single-pass parallel generation models. With GLM, we develop Glancing Transformer (GLAT) for machine translation. With only single-pass parallel decoding, GLAT is able to generate high-quality translation with 8-15 times speedup. Experiments on multiple WMT language directions show that GLAT outperforms all previous single-pass non-autoregressive methods, and is nearly comparable to Transformer, reducing the gap to 0.25-0.9 BLEU points.
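The released code linked above contains the authors' implementation. As a rough illustration of the glancing training idea summarized in the abstract (reveal part of the reference to the decoder in proportion to the first-pass prediction error, and train on the remaining positions), here is a minimal PyTorch-style sketch. All names (`glancing_sample`, `ratio`, `pad_id`) and tensor shapes are assumptions made for illustration, not the released implementation.

```python
import torch

def glancing_sample(decoder_inputs, target_tokens, target_embeds,
                    first_pass_logits, ratio=0.5, pad_id=0):
    """Sketch of glancing sampling at training time.

    decoder_inputs / target_embeds: (batch, length, dim)
    target_tokens: (batch, length), first_pass_logits: (batch, length, vocab)
    """
    pred = first_pass_logits.argmax(-1)                  # first-pass prediction
    non_pad = target_tokens.ne(pad_id)
    # Hamming distance between first-pass prediction and the reference
    dist = ((pred != target_tokens) & non_pad).sum(-1)   # (batch,)
    num_glance = (dist.float() * ratio).long()           # tokens to reveal per sentence

    glance_mask = torch.zeros_like(target_tokens, dtype=torch.bool)
    for b in range(target_tokens.size(0)):
        candidates = non_pad[b].nonzero(as_tuple=True)[0]
        k = min(int(num_glance[b]), candidates.numel())
        if k > 0:
            chosen = candidates[torch.randperm(candidates.numel())[:k]]
            glance_mask[b, chosen] = True

    # Feed reference embeddings at glanced positions; compute the loss on the rest
    mixed_inputs = torch.where(glance_mask.unsqueeze(-1), target_embeds, decoder_inputs)
    loss_mask = non_pad & ~glance_mask
    return mixed_inputs, loss_mask
```

At inference time no glancing is performed: the decoder generates the whole target in a single parallel pass, which is where the reported 8-15x speedup comes from.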

Tasks

Machine Translation

Benchmark Results

Dataset                  Model  Metric      Claimed  Verified  Status
WMT2014 English-German   GLAT   BLEU score  25.21    -         Unverified

Reproductions

No reproductions have been submitted yet.