
Speeding Up Neural Machine Translation Decoding by Cube Pruning

2018-09-09 · EMNLP 2018

Wen Zhang, Liang Huang, Yang Feng, Lei Shen, Qun Liu


Abstract

Although neural machine translation has achieved promising results, it suffers from slow translation speed. The direct consequence is that a trade-off has to be made between translation quality and speed, so its performance cannot come into full play. We apply cube pruning, a popular technique for speeding up dynamic programming, to neural machine translation to accelerate decoding. To construct the equivalence classes, similar target hidden states are combined, leading to fewer RNN expansion operations on the target side and fewer softmax operations over the large target vocabulary. The experiments show that, at the same or even better translation quality, our method translates 3.3× faster than naive beam search on GPUs and 3.5× faster on CPUs.
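As a rough illustration of the idea the abstract describes, here is a minimal Python sketch of one step of the generic cube-pruning loop that the method builds on (not the authors' code). The `expand` callback and the `(score, state)` beam representation are assumptions made for this example; the paper's extra ingredient, merging similar hidden states into equivalence classes so that one expansion serves a whole class, is not shown.

```python
import heapq
import itertools

def cube_pruning_step(beam, expand, k):
    """One decoding step of cube pruning over the beam x vocabulary grid.

    beam:   list of (score, state) pairs, sorted best-first.
    expand: hypothetical callback standing in for one RNN step + softmax;
            returns the top-k (logprob, word, state) extensions of a state,
            sorted best-first.
    """
    cache = {}  # beam row -> its candidate list, computed lazily

    def candidates(i):
        if i not in cache:
            cache[i] = expand(beam[i][1])  # the expensive RNN/softmax call
        return cache[i]

    heap, seen, tie = [], set(), itertools.count()

    def push(i, j):
        # Push grid cell (i, j) = (beam row i, its j-th extension) if it exists.
        if (i, j) in seen or i >= len(beam):
            return
        cands = candidates(i)
        if j < len(cands):
            logp, word, state = cands[j]
            score = beam[i][0] + logp  # prefix score + extension score
            heapq.heappush(heap, (-score, next(tie), i, j, word, state))
            seen.add((i, j))

    push(0, 0)  # start from the best corner of the grid
    new_beam = []
    while heap and len(new_beam) < k:
        neg_score, _, i, j, word, state = heapq.heappop(heap)
        new_beam.append((-neg_score, state))  # a full decoder would also keep `word`
        push(i + 1, j)  # explore the two neighbouring cells
        push(i, j + 1)
    return new_beam
```

Because cells are popped best-first starting from the (0, 0) corner, beam rows whose candidates never reach the frontier are never expanded, which is where the savings in RNN and softmax calls come from. The enumeration is approximate in the standard cube-pruning sense, since combined scores are not strictly monotone across rows of the grid.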
