Visualizing and Understanding Neural Machine Translation

2017-07-01 · ACL 2017

Yanzhuo Ding, Yang Liu, Huanbo Luan, Maosong Sun



Abstract

While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoder-decoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors.
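The core of LRP is redistributing an output's relevance back onto its inputs in proportion to each input's contribution. The sketch below illustrates this for a single linear layer using the epsilon stabilization rule; it is a toy example, not the paper's attention-based encoder-decoder setup, and the function name `lrp_linear` is our own.

```python
import numpy as np

def lrp_linear(x, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for a linear layer z = x @ W.

    Redistributes output relevance R_out onto the inputs x in
    proportion to each input's contribution z_jk = x_j * W[j, k].
    Illustrative sketch only; the paper applies LRP to hidden
    states of an attention-based NMT model, not this toy layer.
    """
    z = x @ W                          # pre-activations, shape (k,)
    denom = z + eps * np.sign(z)       # stabilized denominator
    contrib = x[:, None] * W           # per-input contributions z_jk
    return contrib @ (R_out / denom)   # relevance per input, shape (j,)

# Relevance is (approximately) conserved across the layer:
x = np.array([1.0, 2.0, -1.0])
W = np.array([[0.5, -0.2], [0.1, 0.3], [0.4, 0.2]])
R_out = x @ W                          # use outputs as top-level relevance
R_in = lrp_linear(x, W, R_out)
print(np.allclose(R_in.sum(), R_out.sum(), atol=1e-4))  # True
```

Because the rule splits each output's relevance by contribution ratios, the total relevance entering a layer equals (up to the epsilon term) the total leaving it, which is what makes per-word contribution scores comparable.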
