Incorporating Discrete Translation Lexicons into Neural Machine Translation

2016-06-07 · EMNLP 2016 · Code Available

Philip Arthur, Graham Neubig, Satoshi Nakamura

Abstract

Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source-word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show improvements of 2.0-2.3 BLEU and 0.13-0.44 NIST, as well as faster convergence.
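The combination step the abstract describes can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: `lex_table` stands in for a precomputed lexicon giving p_lex(e | f_j) for each source word f_j, and `epsilon` and `lam` are illustrative hyperparameters (the paper tunes such values rather than fixing them).

```python
import numpy as np

def lexicon_probability(attention, lex_table):
    """Attention-weighted lexicon distribution over the target vocabulary.

    attention: (src_len,) attention weights at the current decoder step.
    lex_table: (src_len, vocab) rows holding p_lex(e | f_j) for each source
        word f_j (assumed precomputed from an external lexicon).
    """
    # p_lex(e) = sum_j a_j * p_lex(e | f_j)
    return attention @ lex_table

def combine_bias(nmt_logits, p_lex, epsilon=1e-6):
    """Bias method: add log lexicon probabilities to the pre-softmax scores."""
    scores = nmt_logits + np.log(p_lex + epsilon)
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

def combine_interpolate(p_nmt, p_lex, lam=0.5):
    """Linear interpolation between the NMT and lexicon distributions."""
    return (1.0 - lam) * p_nmt + lam * p_lex

# Toy usage: 3 source words, a 5-word target vocabulary.
attention = np.array([0.7, 0.2, 0.1])
lex_table = np.full((3, 5), 0.2)          # uniform stand-in lexicon
p_lex = lexicon_probability(attention, lex_table)
p_bias = combine_bias(np.zeros(5), p_lex)
p_interp = combine_interpolate(np.full(5, 0.2), p_lex)
```

Both combinations yield a valid probability distribution: the bias variant lets the lexicon reshape the softmax before normalization, while interpolation mixes the two distributions directly.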
