SOTAVerified

AMR Parsing

Each AMR is a single rooted, directed, labeled graph. AMRs encode PropBank semantic roles, within-sentence coreference, named entities and their types, modality, negation, questions, quantities, and more.
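As a concrete illustration of the graph structure described above, here is a hand-written AMR for "The boy wants to go" in the conventional PENMAN notation, together with its triple view. The triple expansion and the toy scorer are a minimal sketch written for this page (a real toolkit such as the `penman` and `smatch` packages would do this properly); note that real Smatch also searches over variable alignments, which is omitted here.

```python
# Illustrative example: an AMR for "The boy wants to go" in PENMAN
# notation, and the equivalent triples that Smatch-style metrics compare.
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

# Hand-expanded triples: instance assignments plus role edges.
triples = [
    ("instance", "w", "want-01"),
    ("instance", "b", "boy"),
    ("instance", "g", "go-02"),
    ("ARG0", "w", "b"),
    ("ARG1", "w", "g"),
    ("ARG0", "g", "b"),  # reentrancy: b (the boy) is both wanter and goer
]

def smatch_f1(gold, pred):
    """Toy Smatch F1 under a fixed (identity) variable mapping.
    Real Smatch additionally searches over variable alignments."""
    gold_set, pred_set = set(gold), set(pred)
    overlap = len(gold_set & pred_set)
    precision = overlap / len(pred_set)
    recall = overlap / len(gold_set)
    return 2 * precision * recall / (precision + recall)

# A parser that misses the reentrant ARG0 edge is penalized on recall:
pred = [t for t in triples if t != ("ARG0", "g", "b")]
print(round(smatch_f1(triples, pred), 3))  # → 0.909
```

Reentrancies like the shared variable `b` are exactly what makes AMRs graphs rather than trees, and dropping them is a common parser error that Smatch captures.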

Papers

Showing 1–10 of 117 papers

| Title | Status | Hype |
|---|---|---|
| Reassessing Graph Linearization for Sequence-to-sequence AMR Parsing: On the Advantages and Limitations of Triple-Based Encoding | Code | 0 |
| Should Cross-Lingual AMR Parsing go Meta? An Empirical Assessment of Meta-Learning and Joint Learning AMR Parsing | Code | 0 |
| Adapting Abstract Meaning Representation Parsing to the Clinical Narrative -- the SPRING THYME parser | — | 0 |
| AMR Parsing is Far from Solved: GrAPES, the Granular AMR Parsing Evaluation Suite | Code | 0 |
| AMR Parsing with Causal Hierarchical Attention and Pointers | Code | 0 |
| Guiding AMR Parsing with Reverse Graph Linearization | Code | 0 |
| Incorporating Graph Information in Transformer-based AMR Parsing | Code | 0 |
| AMRs Assemble! Learning to Ensemble with Autoregressive Models for AMR Parsing | Code | 0 |
| Slide, Constrain, Parse, Repeat: Synchronous SlidingWindows for Document AMR Parsing | — | 0 |
| AMR Parsing with Instruction Fine-tuned Pre-trained Language Models | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Graphene Smatch (MBSE paper) (IBM) | Smatch | 85.4 | — | Unverified |
| 2 | Graphene Smatch (IBM) | Smatch | 84.87 | — | Unverified |
| 3 | LeakDistill | Smatch | 84.6 | — | Unverified |
| 4 | Graphene Support (IBM) | Smatch | 84.41 | — | Unverified |
| 5 | StructBART + MBSE (IBM) | Smatch | 84.3 | — | Unverified |
| 6 | AMRBART large | Smatch | 84.2 | — | Unverified |
| 7 | ATP-SRL | Smatch | 84 | — | Unverified |
| 8 | BiBL | Smatch | 83.9 | — | Unverified |
| 9 | BiBL+Silver | Smatch | 83.5 | — | Unverified |
| 10 | LeakDistill (base) | Smatch | 83.5 | — | Unverified |