Surprisingly Easy Hard-Attention for Sequence to Sequence Learning
2018-10-01 · EMNLP 2018 · Code Available
Shiv Shankar, Siddhant Garg, Sunita Sarawagi
- github.com/sid7954/beam-joint-attention (official, in paper, TensorFlow)
Abstract
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention. On five translation tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms.
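The core idea, as the abstract describes it, is to approximate the joint distribution between the attention position and the output by keeping only a small beam of the sharpest attention positions, and to mix output distributions rather than context vectors. The following is a minimal sketch of that idea under our own assumptions; the function names, shapes, and the choice of top-k selection are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def beam_joint_attention(scores, enc_states, output_fn, k=3):
    """Approximate p(y) = sum_a p(a) p(y | a) over a beam of the
    top-k attention positions, instead of either soft-averaging
    context vectors or sampling a single hard position.

    scores:     unnormalised attention scores, shape (T,)
    enc_states: encoder states, shape (T, d)
    output_fn:  maps one encoder state to an output distribution
    """
    probs = softmax(scores)                  # attention p(a), shape (T,)
    beam = np.argsort(probs)[-k:]            # keep the k sharpest positions
    w = probs[beam] / probs[beam].sum()      # renormalise over the beam
    # Mix the *output distributions* conditioned on each hard position,
    # weighted by the renormalised attention mass (joint, not soft).
    return sum(wi * output_fn(enc_states[i]) for wi, i in zip(w, beam))
```

Because each `output_fn(...)` is itself a distribution and the beam weights sum to one, the mixture is a valid output distribution; for `k = 1` this reduces to greedy hard attention, and for `k = T` it recovers the full joint marginalisation.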