Concept Equalization to Guide Correct Training of Neural Machine Translation

2017-11-01 · IJCNLP 2017

Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung

Abstract

Neural machine translation decoders are usually conditional language models that sequentially generate the words of target sentences. This approach is limited in finding the best word composition and requires the help of explicit methods such as beam search. To help NMT models learn correct compositional mechanisms, we propose concept equalization, which directly maps distributed representations of source and target sentences onto each other. In a translation experiment from English to French, concept equalization significantly improved translation quality, by 3.00 BLEU points over a state-of-the-art NMT model.
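The abstract does not spell out how the "direct mapping" between source and target representations is enforced. A minimal sketch, assuming (this is an illustration, not the paper's actual formulation) that each sentence is summarized by mean-pooling its hidden states and that equalization is a squared-distance penalty added to the usual cross-entropy loss:

```python
import numpy as np

def concept_vector(hidden_states):
    """Mean-pool a (timesteps, dim) matrix of hidden states
    into a single sentence-level 'concept' vector."""
    return hidden_states.mean(axis=0)

def concept_equalization_loss(src_hidden, tgt_hidden):
    """Squared L2 distance between the pooled source and target
    concepts. Hypothetical auxiliary term: the total training
    objective would be cross_entropy + lam * this loss."""
    diff = concept_vector(src_hidden) - concept_vector(tgt_hidden)
    return float(np.dot(diff, diff))

# Identical concepts incur no penalty; divergent ones are pulled together.
src = np.ones((3, 4))   # 3 source timesteps, dim 4
tgt = np.ones((5, 4))   # 5 target timesteps, same dim
print(concept_equalization_loss(src, tgt))  # → 0.0
```

The pooling function and the choice of squared L2 distance are assumptions; the intent is only to show how an equalization term can guide the encoder and decoder toward a shared sentence-level representation alongside the standard translation loss.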
