
On the Use of Entity Embeddings from Pre-Trained Language Models for Knowledge Graph Completion

2021-11-16 · ACL ARR November 2021 · Unverified

Anonymous


Abstract

Recent work has found that entity representations can be extracted from pre-trained language models to develop knowledge graph completion models that are more robust to the naturally occurring sparsity found in knowledge graphs. In this work, we investigate how best to extract and incorporate those embeddings. We first assess the suitability of the extracted embeddings for direct use in entity ranking and introduce both unsupervised and supervised processing methods that can improve downstream performance. We then introduce supervised embedding extraction methods and demonstrate that they yield more informative representations. We also examine the effect of language model selection and find that the choice of model can have a significant impact. Finally, we synthesize our findings into a knowledge graph completion model that significantly outperforms recent neural models.
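To make the idea concrete, the following is a minimal, illustrative sketch of the extract-then-rank pipeline the abstract describes. It is not the paper's method: a toy embedding table stands in for a pre-trained language model (e.g. BERT), the mean-pooling strategy and all names (`extract_entity_embedding`, `rank_entities`, the vocabulary) are assumptions for illustration only.

```python
import numpy as np

# Hypothetical toy vocabulary and embedding table standing in for a
# pre-trained language model's input embeddings (assumption: the paper
# would use a real model such as BERT; everything here is illustrative).
vocab = {"new": 0, "york": 1, "city": 2, "paris": 3}
rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((len(vocab), 8))


def extract_entity_embedding(entity_name: str) -> np.ndarray:
    """Mean-pool the token embeddings of an entity's surface form.

    Mean-pooling is one simple, unsupervised way to turn a language
    model's token-level vectors into a fixed-size entity embedding.
    """
    token_ids = [vocab[t] for t in entity_name.lower().split()]
    return embedding_table[token_ids].mean(axis=0)


def rank_entities(query_vec: np.ndarray, candidates: list[str]) -> list[str]:
    """Rank candidate entities by cosine similarity to a query vector,
    as in the entity-ranking setup used to evaluate the embeddings."""

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    return sorted(
        candidates,
        key=lambda e: cosine(query_vec, extract_entity_embedding(e)),
        reverse=True,
    )
```

The supervised extraction methods the abstract mentions would replace the simple mean-pooling above with a learned aggregation trained against the knowledge graph.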
