
Recurrent Positional Embedding for Neural Machine Translation

2019-11-01 · IJCNLP 2019

Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita


Abstract

In the Transformer network architecture, positional embeddings are used to encode order dependencies into the input representation. However, this input representation only involves static order dependencies based on discrete numerical information; that is, they are independent of word content. To address this issue, this work proposes a recurrent positional embedding approach based on word vectors. In this approach, the recurrent positional embeddings are learned by a recurrent neural network, encoding word content-based order dependencies into the input representation. They are then integrated into the existing multi-head self-attention model as independent heads or as part of each head. The experimental results show that the proposed approach improves translation performance over the strong Transformer baseline on the WMT'14 English-to-German and NIST Chinese-to-English translation tasks.
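A minimal sketch of the idea described in the abstract is given below: a recurrent network reads the word embeddings themselves, so the resulting "positional" signal depends on word content rather than only on discrete position indices. The choice of a GRU, the dimensions, and the additive way the recurrent output is combined with the token embeddings are assumptions for illustration, not the authors' implementation (the paper integrates these embeddings into multi-head self-attention as independent heads or as part of each head).

```python
# Hypothetical PyTorch sketch of content-dependent recurrent positional
# embeddings; module names and integration strategy are assumptions.
import torch
import torch.nn as nn

class RecurrentPositionalEmbedding(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        # A single-layer GRU reads the word embeddings left to right; its
        # hidden state at step t acts as a content-based positional
        # embedding for position t (assumption: GRU, same dimensionality).
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, word_emb: torch.Tensor) -> torch.Tensor:
        # word_emb: (batch, seq_len, d_model) token embeddings
        rec_pos, _ = self.rnn(word_emb)  # (batch, seq_len, d_model)
        # Simplest integration: add the recurrent positional embeddings to
        # the input representation before self-attention.  The paper instead
        # feeds them into multi-head attention as separate heads or as part
        # of each head.
        return word_emb + rec_pos

# Usage sketch:
# x = RecurrentPositionalEmbedding(512)(token_embeddings)
```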
