
Revisiting Higher-Order Dependency Parsers

2020-07-01 · ACL 2020

Erick Fonseca, André F. T. Martins


Abstract

Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers. This has led to a belief that neural encoders can implicitly encode structural constraints, such as siblings and grandparents in a tree. We tested this hypothesis and found that neural parsers may benefit from higher-order features, even when employing a powerful pre-trained encoder, such as BERT. While the gains of higher-order features are small in the presence of a powerful encoder, they are consistent for long-range dependencies and long sentences. In particular, higher-order models are more accurate on full sentence parses and on the exact match of modifier lists, indicating that they deal better with larger, more complex structures.
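To make the higher-order structures named in the abstract concrete, the following is a minimal sketch of how grandparent and sibling factors can be enumerated from a first-order head assignment. The `heads` encoding (`heads[i]` = index of token `i`'s head, with 0 standing for the artificial root) is an illustrative assumption for this sketch, not the paper's actual data format or scoring model.

```python
def grandparent_arcs(heads):
    """Yield (grandparent, head, modifier) triples: the grandparent
    structures a second-order parser scores jointly."""
    for mod in range(1, len(heads)):
        head = heads[mod]
        if head != 0:  # skip arcs whose head is the artificial root
            yield (heads[head], head, mod)

def sibling_pairs(heads):
    """Yield (head, modifier, sibling) triples for modifier pairs
    attached to the same head: the sibling structures."""
    for m1 in range(1, len(heads)):
        for m2 in range(m1 + 1, len(heads)):
            if heads[m1] == heads[m2]:
                yield (heads[m1], m1, m2)

# Toy 5-token sentence; heads[0] is an unused ROOT placeholder.
heads = [0, 2, 0, 2, 5, 3]
print(list(grandparent_arcs(heads)))  # grandparent factors
print(list(sibling_pairs(heads)))     # sibling factors
```

A first-order parser scores each arc `(head, modifier)` independently; a higher-order parser additionally scores triples like the ones enumerated above, which is what lets it capture interactions among arcs in larger structures.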
