Incorporating Graph Information in Transformer-based AMR Parsing

2023-06-23 · Code Available

Pavlo Vasylenko, Pere-Lluís Huguet Cabot, Abelardo Carlos Martínez Lorenzo, Roberto Navigli

Abstract

Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and a method that explore a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at http://www.github.com/sapienzanlp/LeakDistill.
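The abstract names two ingredients: structural adapters that inject a word-to-node alignment graph into the encoder, and self-knowledge distillation from the graph-informed pass to a plain-text pass. The PyTorch sketch below illustrates that general recipe only; `StructuralAdapter`, `self_distillation_loss`, the single GCN-style aggregation step, and the `alpha`/`temperature` mixing parameters are all illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructuralAdapter(nn.Module):
    """Illustrative adapter: mixes each token state with its neighbours in a
    word-to-node alignment graph via one GCN-style step, passed through a
    small bottleneck and added residually to the encoder states."""

    def __init__(self, hidden: int, bottleneck: int = 128):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, seq, hidden) encoder hidden states
        # adj: (batch, seq, seq) row-normalised adjacency from the alignment graph
        msg = torch.bmm(adj, h)                     # aggregate neighbour states
        return h + self.up(F.relu(self.down(msg)))  # residual bottleneck update


def self_distillation_loss(student_logits, teacher_logits, labels,
                           alpha: float = 0.5, temperature: float = 1.0):
    """Cross-entropy on the gold linearized AMR plus a KL term pulling the
    plain (student) pass toward the graph-informed (teacher) pass."""
    ce = F.cross_entropy(student_logits.transpose(1, 2), labels, ignore_index=-100)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kl


# Toy shapes only, to show the tensors involved.
B, S, H, V = 2, 7, 16, 50
adapter = StructuralAdapter(H)
h = torch.randn(B, S, H)
adj = torch.softmax(torch.randn(B, S, S), dim=-1)   # stand-in normalised graph
assert adapter(h, adj).shape == (B, S, H)

student = torch.randn(B, S, V)
teacher = torch.randn(B, S, V)
labels = torch.randint(0, V, (B, S))
loss = self_distillation_loss(student, teacher, labels)
```

Under this reading, the teacher pass sees the alignment graph at training time while the student pass does not, so the KL term transfers ("leaks") the structural signal into the plain parser, which is what allows graph-free inference.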

Tasks

AMR Parsing

Benchmark Results

Dataset    | Model              | Metric | Claimed | Verified | Status
-----------|--------------------|--------|---------|----------|-----------
LDC2017T10 | LeakDistill        | Smatch | 86.1    |          | Unverified
LDC2017T10 | LeakDistill (base) | Smatch | 84.7    |          | Unverified
LDC2020T02 | LeakDistill        | Smatch | 84.6    |          | Unverified
LDC2020T02 | LeakDistill (base) | Smatch | 83.5    |          | Unverified

Reproductions