
Surprisingly Easy Hard-Attention for Sequence to Sequence Learning

2018-10-01 · EMNLP 2018 · Code Available

Shiv Shankar, Siddhant Garg, Sunita Sarawagi


Abstract

In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the sharp focus of hard attention with the implementation ease of soft attention. On five translation tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms.
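The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' released code) of the beam approximation: rather than forming a soft weighted average of encoder states, keep only the top-k attention positions and mix the output distributions each one induces, weighted by its attention probability. All names (beam_joint_attention, attn_scores, enc_states, W_out, k) are illustrative assumptions.

import torch
import torch.nn.functional as F

def beam_joint_attention(attn_scores, enc_states, dec_state, W_out, k=3):
    # attn_scores: (batch, src_len) unnormalized attention logits
    # enc_states:  (batch, src_len, hidden) encoder states
    # dec_state:   (batch, hidden) current decoder state
    # W_out:       (2*hidden, vocab) output projection
    attn_probs = F.softmax(attn_scores, dim=-1)                  # p(a | x, history)
    topk_probs, topk_idx = attn_probs.topk(k, dim=-1)            # beam over source positions
    topk_probs = topk_probs / topk_probs.sum(-1, keepdim=True)   # renormalize within the beam

    p_y = torch.zeros(dec_state.size(0), W_out.size(1))
    for j in range(k):
        # hard context: the single encoder state at the j-th beam position
        idx = topk_idx[:, j:j+1].unsqueeze(-1).expand(-1, 1, enc_states.size(-1))
        ctx = enc_states.gather(1, idx).squeeze(1)               # (batch, hidden)
        logits = torch.cat([dec_state, ctx], dim=-1) @ W_out
        # accumulate p(y | a=j, x) weighted by p(a=j | x) over the beam
        p_y = p_y + topk_probs[:, j:j+1] * F.softmax(logits, dim=-1)
    return p_y                                                   # approx. p(y | x)

Under this framing, soft attention corresponds to averaging encoder states before a single output softmax, hard attention to sampling one position (k=1), and the beam mixture sits in between at roughly k times the output-layer cost.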
