Constituency Parsing
Constituency parsing aims to extract a constituency-based parse tree from a sentence, representing the sentence's syntactic structure according to a phrase structure grammar.
Example parse tree for the sentence "John sees Bill":

```
             Sentence (S)
                  |
      +-----------+-----------+
      |                       |
  Noun (N)           Verb Phrase (VP)
      |                       |
    John              +-------+-------+
                      |               |
                  Verb (V)        Noun (N)
                      |               |
                    sees            Bill
```
Recent approaches convert the parse tree into a sequence following a depth-first traversal, which makes it possible to apply sequence-to-sequence models. The linearized version of the above parse tree is: (S (N John) (VP (V sees) (N Bill))).
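A depth-first linearization can be sketched as a simple recursive traversal. The `Node` class and tree construction below are illustrative, not tied to any particular parsing library.

```python
# Minimal sketch: linearize a constituency tree by depth-first traversal,
# producing the bracketed sequence used as a seq2seq target.

class Node:
    def __init__(self, label, children=None, word=None):
        self.label = label              # constituent label, e.g. "S", "VP"
        self.children = children or []  # child constituents
        self.word = word                # terminal token for leaf nodes

def linearize(node):
    """Depth-first traversal emitting a bracketed sequence."""
    if node.word is not None:           # leaf: emit label with its token
        return f"({node.label} {node.word})"
    inner = " ".join(linearize(child) for child in node.children)
    return f"({node.label} {inner})"

# The example tree for "John sees Bill":
tree = Node("S", [
    Node("N", word="John"),
    Node("VP", [
        Node("V", word="sees"),
        Node("N", word="Bill"),
    ]),
])

print(linearize(tree))  # (S (N John) (VP (V sees) (N Bill)))
```

The inverse operation (parsing the bracketed string back into a tree) is equally mechanical, which is what lets a sequence-to-sequence model's output be interpreted as a parse.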
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hashing + XLNet | F1 score | 96.43 | — | Unverified |
| 2 | SAPar + XLNet | F1 score | 96.4 | — | Unverified |
| 3 | Label Attention Layer + HPSG + XLNet | F1 score | 96.38 | — | Unverified |
| 4 | Attach-Juxtapose Parser + XLNet | F1 score | 96.34 | — | Unverified |
| 5 | Head-Driven Phrase Structure Grammar Parsing (Joint) + XLNet | F1 score | 96.33 | — | Unverified |
| 6 | CRF Parser + RoBERTa | F1 score | 96.32 | — | Unverified |
| 7 | Hashing + BERT | F1 score | 96.03 | — | Unverified |
| 8 | N-ary Semi-Markov + BERT-large | F1 score | 95.92 | — | Unverified |
| 9 | NFC + BERT-large | F1 score | 95.92 | — | Unverified |
| 10 | Head-Driven Phrase Structure Grammar Parsing (Joint) + BERT | F1 score | 95.84 | — | Unverified |