Seq2RDF: An end-to-end application for deriving Triples from Natural Language Text
Yue Liu, Tongtao Zhang, Zhicheng Liang, Heng Ji, Deborah L. McGuinness
Code
- github.com/YueLiu/NeuralTripleTranslation (official, TensorFlow)
- github.com/webnlg/webnlg-text-to-triples
- github.com/abhinavnagpal/KNOWLEDGE-GRAPH-PAPERS
Abstract
We present an end-to-end approach that takes unstructured textual input and generates structured output compliant with a given vocabulary. Inspired by recent successes in neural machine translation, we treat the triples within a given knowledge graph as an independent graph language and propose an encoder-decoder framework with an attention mechanism that leverages knowledge graph embeddings. Our model learns the mapping from natural language text to triple representations of the form subject-predicate-object using the selected knowledge graph vocabulary. Experiments on three different data sets show that this simple yet effective approach achieves competitive F1 scores against the baselines. A demo video is included.
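The abstract describes an encoder-decoder with attention that maps input tokens to knowledge-graph vocabulary terms. The sketch below is a minimal, hypothetical illustration of the two core steps only (dot-product attention over encoder states, then selecting a vocabulary term via knowledge-graph embeddings); it is not the authors' implementation, and all names, dimensions, and the example predicate URIs are invented for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(enc_states, dec_state):
    """One dot-product attention step: score each encoder state
    against the decoder state, then form a weighted context vector."""
    scores = enc_states @ dec_state          # (T,) one score per input token
    weights = softmax(scores)                # attention distribution over tokens
    context = weights @ enc_states           # (d,) attended context vector
    return weights, context

def decode_symbol(context, vocab_embeddings, vocab):
    """Pick the vocabulary term (e.g. a predicate URI) whose
    embedding scores highest against the attended context."""
    logits = vocab_embeddings @ context
    return vocab[int(np.argmax(logits))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 5, 8                              # 5 input tokens, hidden size 8 (toy sizes)
    enc = rng.normal(size=(T, d))            # stand-in for encoder outputs
    dec = rng.normal(size=d)                 # stand-in for the current decoder state
    weights, ctx = attention_step(enc, dec)
    # Hypothetical predicate vocabulary with random stand-in KG embeddings.
    vocab = ["dbo:birthPlace", "dbo:spouse", "dbo:author"]
    emb = rng.normal(size=(len(vocab), d))
    print(decode_symbol(ctx, emb, vocab))
```

In the paper's setting, the decoder would emit subject, predicate, and object terms in turn, constrained to the chosen knowledge-graph vocabulary; here a single step is shown with random vectors standing in for learned states and embeddings.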