
Supervised Attention for Sequence-to-Sequence Constituency Parsing

2017-11-01 · IJCNLP 2017

Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, Masaaki Nagata


Abstract

The sequence-to-sequence (Seq2Seq) model has been successfully applied to machine translation (MT). Recently, MT performance has been improved by incorporating supervised attention into the model. In this paper, we introduce supervised attention to constituency parsing, which can be regarded as another translation task. Evaluation results on the PTB corpus show that the bracketing F-measure is improved by supervised attention.
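The core idea the abstract describes — training the attention weights against gold alignments in addition to the usual output loss — can be sketched as a cross-entropy term on the attention distribution. This is an illustrative sketch only, not the authors' implementation; the function name, loss formulation, and toy shapes below are assumptions.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax over attention scores."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def supervised_attention_loss(scores, gold_alignments, eps=1e-9):
    """Hypothetical supervised-attention term: negative log-probability
    that each target step attends to its gold-aligned source position.
    The paper's exact loss may differ."""
    attn = softmax(scores)  # (tgt_len, src_len) attention distribution
    picked = attn[np.arange(len(gold_alignments)), gold_alignments]
    return -np.mean(np.log(picked + eps))

# Toy example: 3 target steps attending over 4 source tokens,
# with gold alignments [0, 1, 3].
scores = np.array([[2.0, 0.1, 0.1, 0.1],
                   [0.1, 2.0, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 2.0]])
gold = np.array([0, 1, 3])
loss = supervised_attention_loss(scores, gold)
```

In training, a term like this would typically be added (with a weighting coefficient) to the standard decoder cross-entropy, so the attention mechanism is pushed toward externally supplied alignments rather than learned purely from the translation signal.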
