Syntactic Multi-view Learning for Open Information Extraction
Kuicai Dong, Aixin Sun, Jung-jae Kim, Xiaoli Li
Code: github.com/daviddongkc/smile_oie (official)
Abstract
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based and statistical models extract tuples based on the syntactic structures of sentences identified by syntactic parsers, but previous neural OpenIE models under-explore this useful syntactic information. In this paper, we model both constituency and dependency trees as word-level graphs, enabling neural OpenIE to learn from syntactic structures. To better fuse the heterogeneous information from the two graphs, we adopt multi-view learning to capture multiple relationships from them. Finally, the finetuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both constituency and dependency information, as well as multi-view learning, are effective.
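The abstract's pipeline (two syntactic views encoded as word-level graphs, then fused with semantic features) can be sketched as follows. This is a minimal illustration with hypothetical toy graphs and random stand-in embeddings, not the paper's actual implementation: a single GCN layer per view, with plain concatenation standing in for the paper's multi-view fusion.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN step: row-normalized adjacency @ features @ weight, then ReLU."""
    norm_adj = adj / adj.sum(axis=1, keepdims=True)
    return np.maximum(norm_adj @ feats @ weight, 0.0)

rng = np.random.default_rng(0)
n_words, dim = 5, 8                      # toy sentence length and feature size
feats = rng.normal(size=(n_words, dim))  # stand-in for BERT word embeddings

# Word-level graphs with self-loops. Dependency view: head-modifier edges;
# constituency view: words connected when they share a low-level constituent.
dep_adj = np.eye(n_words)
for head, mod in [(1, 0), (1, 2), (3, 2), (3, 4)]:  # toy dependency arcs
    dep_adj[head, mod] = dep_adj[mod, head] = 1.0
const_adj = np.eye(n_words)
for group in [(0, 1), (2, 3, 4)]:                   # toy constituent spans
    for i in group:
        for j in group:
            const_adj[i, j] = 1.0

w_dep = rng.normal(size=(dim, dim))
w_const = rng.normal(size=(dim, dim))

dep_view = gcn_layer(dep_adj, feats, w_dep)        # dependency representation
const_view = gcn_layer(const_adj, feats, w_const)  # constituency representation

# Aggregate the syntactic views with the semantic representation; the paper
# fuses views with multi-view learning objectives rather than concatenation.
fused = np.concatenate([feats, dep_view, const_view], axis=1)
print(fused.shape)  # (5, 24)
```

Each word then carries one fused vector that downstream tuple generation (e.g. sequence labeling of subject/relation/object spans) can consume.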
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LSOIE-wiki | SMiLe-OIE | F1 | 51.73 | — | Unverified |
| LSOIE-wiki | BERT + Dep-GCN - Const-GCN | F1 | 50.21 | — | Unverified |
| LSOIE-wiki | BERT + Dep-GCN [?] Const-GCN | F1 | 49.89 | — | Unverified |
| LSOIE-wiki | BERT + Const-GCN | F1 | 49.71 | — | Unverified |
| LSOIE-wiki | IMoJIE Kolluru et al. (2020) | F1 | 49.24 | — | Unverified |
| LSOIE-wiki | BERT + Dep-GCN | F1 | 48.71 | — | Unverified |
| LSOIE-wiki | BERT Solawetz and Larson (2021) | F1 | 47.54 | — | Unverified |
| LSOIE-wiki | CIGL-OIE + IGL-CA Kolluru et al. (2020) | F1 | 44.75 | — | Unverified |
| LSOIE-wiki | GloVe + bi-LSTM + CRF | F1 | 44.48 | — | Unverified |
| LSOIE-wiki | GloVe + bi-LSTM Stanovsky et al. (2018) | F1 | 43.90 | — | Unverified |
| LSOIE-wiki | CopyAttention Cui et al. (2018) | F1 | 39.52 | — | Unverified |