Syntactic Multi-view Learning for Open Information Extraction

Published: 2022-12-05

Kuicai Dong, Aixin Sun, Jung-jae Kim, Xiaoli Li

Abstract

Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based and statistical models were built on the syntactic structures of sentences, as identified by syntactic parsers. However, previous neural OpenIE models have under-explored this useful syntactic information. In this paper, we model both constituency and dependency trees as word-level graphs, enabling neural OpenIE to learn from syntactic structures. To better fuse the heterogeneous information from the two graphs, we adopt multi-view learning to capture multiple relationships between them. Finally, the fine-tuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both constituency and dependency information, as well as the multi-view learning, are effective.
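The core idea in the abstract, turning a syntactic tree into a word-level graph and encoding it with a graph network before fusing it with semantic representations, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy dependency edges, the single hand-rolled GCN layer, and the concatenation-based fusion are all simplifying assumptions.

```python
import numpy as np

# Hypothetical dependency edges (head -> dependent) for a 5-word sentence;
# these indices are illustrative, not the output of a real parser.
dep_edges = [(1, 0), (1, 2), (2, 3), (3, 4)]
num_words, dim = 5, 8

def edges_to_adjacency(edges, n):
    """Word-level graph: symmetric adjacency matrix with self-loops."""
    A = np.eye(n)
    for head, dep in edges:
        A[head, dep] = A[dep, head] = 1.0
    return A

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 A D^-1/2 H W)."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A @ d_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
H = rng.normal(size=(num_words, dim))   # stand-in for BERT word embeddings
W_dep = rng.normal(size=(dim, dim))
W_const = rng.normal(size=(dim, dim))

A_dep = edges_to_adjacency(dep_edges, num_words)
# A constituency tree can be linearised into word-level edges the same way;
# this second toy edge set stands in for that view.
A_const = edges_to_adjacency([(0, 1), (1, 2), (2, 3), (3, 4)], num_words)

H_dep = gcn_layer(A_dep, H, W_dep)       # dependency view
H_const = gcn_layer(A_const, H, W_const) # constituency view

# Simplest possible fusion: concatenate semantic + two syntactic views.
fused = np.concatenate([H, H_dep, H_const], axis=1)
print(fused.shape)  # (5, 24)
```

The paper's multi-view learning replaces the plain concatenation above with a learned fusion across views; the sketch only shows how tree structure enters the model as word-level adjacency.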

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| LSOIE-wiki | SMiLe-OIE | F1 | 51.73 | | Unverified |
| LSOIE-wiki | BERT + Dep-GCN - Const-GCN | F1 | 50.21 | | Unverified |
| LSOIE-wiki | BERT + Dep-GCN [?] Const-GCN | F1 | 49.89 | | Unverified |
| LSOIE-wiki | BERT + Const-GCN | F1 | 49.71 | | Unverified |
| LSOIE-wiki | IMoJIE (Kolluru et al., 2020) | F1 | 49.24 | | Unverified |
| LSOIE-wiki | BERT + Dep-GCN | F1 | 48.71 | | Unverified |
| LSOIE-wiki | BERT (Solawetz and Larson, 2021) | F1 | 47.54 | | Unverified |
| LSOIE-wiki | CIGL-OIE + IGL-CA (Kolluru et al., 2020) | F1 | 44.75 | | Unverified |
| LSOIE-wiki | GloVe + bi-LSTM + CRF | F1 | 44.48 | | Unverified |
| LSOIE-wiki | GloVe + bi-LSTM (Stanovsky et al., 2018) | F1 | 43.90 | | Unverified |
| LSOIE-wiki | CopyAttention (Cui et al., 2018) | F1 | 39.52 | | Unverified |
