Faster Transformer Decoding: N-gram Masked Self-Attention

2020-01-14

Ciprian Chelba, Mia Chen, Ankur Bapna, Noam Shazeer


Abstract

Motivated by the fact that most of the information relevant to the prediction of target tokens is drawn from the source sentence S = s_1, …, s_S, we propose truncating the target-side window used for computing self-attention by making an N-gram assumption. Experiments on WMT EnDe and EnFr data sets show that the N-gram masked self-attention model loses very little in BLEU score for N values in the range 4, …, 8, depending on the task.
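The abstract sketches the core idea: restrict each decoder position's self-attention to the previous N target tokens instead of the full target prefix. Below is a minimal NumPy sketch of such an N-gram (windowed, causal) self-attention; the function names and shapes are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def ngram_causal_mask(seq_len: int, n: int) -> np.ndarray:
    """Boolean mask: position i may attend only to positions i-n+1 .. i
    (an N-gram window), and never to future positions."""
    idx = np.arange(seq_len)
    # allowed[i, j] is True iff j <= i and i - j < n
    return (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < n)

def ngram_masked_self_attention(q, k, v, n):
    """Scaled dot-product self-attention restricted to an N-gram window.
    q, k, v: arrays of shape (seq_len, d). Illustrative sketch only."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)                        # (seq_len, seq_len)
    scores = np.where(ngram_causal_mask(seq_len, n), scores, -np.inf)
    # Softmax over the allowed positions (each row contains at least
    # the diagonal entry, so the max is always finite).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

For N large enough to cover the whole target prefix this reduces to ordinary causal self-attention; smaller N limits each position to a fixed-size window, which is what allows the faster decoding the title refers to.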
