
Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models

2018-11-01 · WS 2018

Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata


Abstract

Developing methods for understanding the inner workings of black-box neural models is an important research endeavor. Conventionally, many studies have used the attention matrix to interpret how Encoder-Decoder-based models translate a given source sentence into the corresponding target sentence. However, recent studies have empirically revealed that an attention matrix is not optimal for token-wise translation analyses. We propose a method that explicitly models the token-wise alignment between the source and target sequences to provide a better analysis. Experiments show that our method can acquire token-wise alignments that are superior to those of an attention mechanism.
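The abstract refers to the conventional practice of reading a token-wise alignment off the attention matrix, typically by taking the source token with the highest attention weight for each target token. The sketch below illustrates that conventional extraction (it is not the authors' proposed method, and the function name and example tokens are illustrative assumptions):

```python
# Sketch of conventional attention-based alignment extraction: for each
# target token, pick the source token with the highest attention weight.
# This is the baseline interpretation method the paper argues is suboptimal.
import numpy as np

def hard_alignment(attention, src_tokens, tgt_tokens):
    """attention: (tgt_len, src_len) array of attention weights,
    one row per target token, summing to 1 over source tokens."""
    pairs = []
    for t, row in enumerate(attention):
        s = int(np.argmax(row))  # most-attended source position for target t
        pairs.append((src_tokens[s], tgt_tokens[t]))
    return pairs

# Hypothetical attention weights for a 3-token translation
attn = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
])
print(hard_alignment(attn, ["ich", "liebe", "Katzen"], ["i", "love", "cats"]))
# → [('ich', 'i'), ('liebe', 'love'), ('Katzen', 'cats')]
```

Because attention weights are optimized for translation quality rather than alignment quality, this argmax reading can disagree with true word alignments, which motivates modeling the alignment explicitly.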
