
Attention Strategies for Multi-Source Sequence-to-Sequence Learning

2017-07-01 · ACL 2017

Jindřich Libovický, Jindřich Helcl

Abstract

Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches, flat and hierarchical, to combining the outputs of the attention mechanisms over each source sequence. We compare the proposed methods with existing techniques and present the results of a systematic evaluation of these methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
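The abstract names the two combination strategies but gives no formulas. The sketch below illustrates the general idea in NumPy under simplifying assumptions: dot-product attention scores stand in for the learned energy functions a real model would use, and all function names are illustrative, not the authors' implementation. Flat combination applies one joint softmax over the concatenated states of all sources; hierarchical combination first attends within each source, then attends over the resulting per-source context vectors.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def flat_combination(query, sources):
    """Flat strategy: a single attention distribution over ALL
    encoder states of ALL sources (dot-product scores, a simplification)."""
    all_states = np.concatenate(sources, axis=0)   # (sum_i T_i, d)
    weights = softmax(all_states @ query)          # joint softmax
    return weights @ all_states                    # one context vector, (d,)

def hierarchical_combination(query, sources):
    """Hierarchical strategy: per-source attention produces one context
    vector per source; a second attention combines those contexts."""
    contexts = []
    for states in sources:                         # each (T_i, d)
        w = softmax(states @ query)                # attention within a source
        contexts.append(w @ states)
    contexts = np.stack(contexts)                  # (n_sources, d)
    top = softmax(contexts @ query)                # attention over sources
    return top @ contexts                          # combined context, (d,)

# Hypothetical usage: a decoder query and two source encoders
# (e.g. text states and image-region features projected to the same dim).
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
srcs = [rng.standard_normal((3, 4)), rng.standard_normal((5, 4))]
c_flat = flat_combination(q, srcs)
c_hier = hierarchical_combination(q, srcs)
```

The structural difference is where normalization happens: the flat variant lets states from different sources compete in one distribution, while the hierarchical variant normalizes within each source first and learns a separate source-level weighting.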
