SOTAVerified

Pushing the Limits of AMR Parsing with Self-Learning

2020-10-20 · Findings of the Association for Computational Linguistics · Code Available

Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos


Abstract

Abstract Meaning Representation (AMR) parsing has experienced notable growth in performance over the last two years, due both to the impact of transfer learning and to the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation and question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance, including the generation of synthetic text and AMR annotations as well as refinement of the actions oracle. We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results on AMR 1.0 and AMR 2.0.
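As a rough illustration of the self-learning idea the abstract describes, the sketch below runs one round of self-training: a parser trained on gold data annotates unlabeled text, and the resulting silver pairs augment the training set. The `train`, `parse`, and `smatch_f1` callables are hypothetical placeholders, not the authors' released code, and the sketch omits the paper's other directions (synthetic text generation and oracle refinement).

```python
# Hypothetical sketch of one self-learning round. A trained parser
# produces synthetic (silver) AMR annotations for unlabeled sentences,
# and the model is retrained on gold + silver data. All helpers are
# placeholders passed in by the caller.

def self_learning_round(gold_pairs, unlabeled_sentences, dev_pairs,
                        train, parse, smatch_f1):
    # 1. Train a baseline parser on human-annotated (gold) data.
    parser = train(gold_pairs)
    baseline = smatch_f1(parser, dev_pairs)

    # 2. Annotate unlabeled text with the trained parser to obtain
    #    synthetic (sentence, AMR) training pairs.
    silver_pairs = [(sent, parse(parser, sent)) for sent in unlabeled_sentences]

    # 3. Retrain on the union of gold and silver data; keep the new
    #    model only if it improves dev-set Smatch.
    candidate = train(gold_pairs + silver_pairs)
    return candidate if smatch_f1(candidate, dev_pairs) > baseline else parser
```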

Tasks

AMR Parsing

Benchmark Results

Dataset    | Model                                    | Metric  | Claimed | Verified | Status
LDC2014T12 | stack-Transformer + self-learning (IBM)  | F1 Full | 78.2    | n/a      | Unverified
LDC2017T10 | stack-Transformer + self-learning (IBM)  | Smatch  | 81.3    | n/a      | Unverified
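For reference, both metrics above are F1 scores over matched triples between predicted and gold AMR graphs (Smatch). Below is a minimal sketch of the final score computation, assuming the triple match counts have already been produced by a Smatch aligner; the function name and inputs are illustrative, not taken from the authors' code.

```python
def smatch_f1(matched: int, predicted: int, gold: int) -> float:
    """F1 over matched AMR triples: harmonic mean of precision and recall.

    matched:   triples shared by the best alignment of predicted and gold graphs
    predicted: total triples in the predicted graph
    gold:      total triples in the gold graph
    """
    precision = matched / predicted if predicted else 0.0
    recall = matched / gold if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 90 matched triples out of 105 predicted and 110 gold gives
# precision ~0.857, recall ~0.818, and F1 ~0.837.
print(round(smatch_f1(90, 105, 110), 3))  # 0.837
```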

Reproductions

No reproductions yet. Be the first to reproduce this paper.