Speeding Up Neural Machine Translation Decoding by Shrinking Run-time Vocabulary

2017-07-01 · ACL 2017

Xing Shi, Kevin Knight

Abstract

We speed up Neural Machine Translation (NMT) decoding by shrinking the run-time target vocabulary. We experiment with two shrinking approaches: Locality Sensitive Hashing (LSH) and word alignments. Using the latter method, we get a 2x overall speed-up over a highly-optimized GPU implementation, without hurting BLEU. On certain low-resource language pairs, the same methods improve BLEU by 0.5 points. We also report a negative result for LSH on GPUs, due to its relatively large overhead, though it was successful on CPUs. Compared with LSH, decoding with word alignments is GPU-friendly, orthogonal to existing speed-up methods, and more robust across language pairs.
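The word-alignment idea can be sketched roughly as follows: from a precomputed alignment table, collect the top-k candidate target words for each source word, take the union over the source sentence (plus special tokens), and restrict the output softmax to that shrunken vocabulary so the final matrix multiply scales with the candidate count rather than the full vocabulary size. This is a minimal illustrative sketch with hypothetical names and a toy alignment table, not the paper's actual implementation.

```python
import numpy as np

def shrink_vocab(source_ids, align_table, top_k=10, always_keep=(0, 1, 2)):
    """Union of the top-k aligned target word ids for each source word,
    plus special tokens (e.g. PAD/BOS/EOS). Hypothetical helper."""
    candidates = set(always_keep)
    for s in source_ids:
        candidates.update(align_table.get(s, [])[:top_k])
    return sorted(candidates)

def restricted_softmax(hidden, W_out, b_out, cand_ids):
    """Softmax over the shrunken vocabulary only: only the candidate
    rows of the full output matrix are gathered, so the matmul cost
    scales with |candidates| instead of |V|."""
    W_sub = W_out[cand_ids]                      # (|cand|, d)
    logits = W_sub @ hidden + b_out[cand_ids]    # (|cand|,)
    e = np.exp(logits - logits.max())            # stable softmax
    return e / e.sum()                           # probs aligned with cand_ids

# Toy example: alignment table maps source word id -> likely target ids,
# sorted by alignment probability (values here are made up).
align_table = {10: [5, 7, 9], 11: [7, 8], 12: [9, 13]}
cand = shrink_vocab([10, 11, 12], align_table, top_k=2)
```

At decode time the softmax is computed only over `cand`, and the chosen index is mapped back to the full-vocabulary id via `cand_ids`.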