Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning
ACL 2019 · 2019-05-31
Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, Miguel Ballesteros
Abstract
Our work enriches the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with policy learning, using the Smatch score of sampled graphs as the reward. In addition, we combine several AMR-to-text alignments with an attention mechanism, and we supplement the parser with pre-processed concept identification, named entities, and contextualized embeddings. We achieve highly competitive performance that is comparable to the best published results, and we present an in-depth study ablating each of the new components of the parser.
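The policy-learning idea in the abstract can be sketched with a minimal REINFORCE loop: sample a transition sequence from the current policy, score the resulting graph, and scale the log-probability gradient of the sampled actions by that score. The sketch below is an illustration under assumptions, not the paper's implementation: `toy_reward` is a hypothetical stand-in for Smatch (the real reward would compare the parsed AMR graph against the gold graph), and the tabular softmax policy stands in for the Stack-LSTM parser.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def toy_reward(actions, gold):
    # Hypothetical stand-in for the Smatch score of a sampled graph:
    # here, the fraction of sampled transitions matching a reference.
    return sum(a == g for a, g in zip(actions, gold)) / len(gold)

def reinforce_step(logits, gold, lr, rng):
    """One REINFORCE update on a toy transition policy."""
    probs = softmax(logits)
    n = len(logits)
    # Sample a transition sequence from the current policy.
    actions = [rng.choices(range(n), weights=probs)[0] for _ in gold]
    r = toy_reward(actions, gold)
    # Gradient of sum_t log pi(a_t) w.r.t. the logits, scaled by the reward.
    grad = [0.0] * n
    for a in actions:
        for i in range(n):
            grad[i] += (1.0 if i == a else 0.0) - probs[i]
    return [x + lr * r * g for x, g in zip(logits, grad)], r

rng = random.Random(0)
logits = [0.0, 0.0, 0.0]   # uniform policy over 3 toy transitions
gold = [0, 0, 0]           # reference transition sequence
for _ in range(200):
    logits, _ = reinforce_step(logits, gold, lr=0.5, rng=rng)
probs = softmax(logits)    # probability mass shifts toward the rewarded action
```

Because the reward multiplies the whole-sequence gradient, samples whose graphs score higher pull the policy more strongly, which is what lets a non-decomposable metric like Smatch supervise individual transitions.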
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LDC2017T10 | Rewarding Smatch (IBM) | Smatch | 73.4 | — | Unverified |