
Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models

2020-11-01 · Findings of the Association for Computational Linguistics

Zhiyuan Zhang, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, Bin He


Abstract

Conventional knowledge graph embedding (KGE) often suffers from limited knowledge representation, leading to performance degradation, especially in low-resource settings. To remedy this, we propose to enrich knowledge representation by leveraging the world knowledge captured in pretrained language models. Specifically, we present a universal training framework named Pretrain-KGE consisting of three phases: a semantic-based fine-tuning phase, a knowledge extracting phase, and a KGE training phase. Extensive experiments show that Pretrain-KGE improves results over conventional KGE models, especially on the low-resource problem.
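To make the three-phase pipeline concrete, below is a minimal sketch in PyTorch with Hugging Face Transformers: an off-the-shelf BERT encoder stands in for the fine-tuned language model (the semantic-based fine-tuning phase itself is omitted for brevity), its [CLS] vectors serve as the extracted knowledge representations, and a TransE-style scorer is initialized from them as the KGE training phase. The choice of BERT, the TransE scorer, and the helper names (`encode_text`, `TransE`) are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of the Pretrain-KGE pipeline under the assumptions stated above.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_text(text: str) -> torch.Tensor:
    """Knowledge extracting phase: embed an entity/relation description with the LM."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=32)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]  # [CLS] vector as the representation

class TransE(nn.Module):
    """KGE training phase: a conventional KGE model whose embeddings are
    initialized from the LM-derived vectors instead of at random."""
    def __init__(self, entity_init: torch.Tensor, relation_init: torch.Tensor):
        super().__init__()
        self.ent = nn.Embedding.from_pretrained(entity_init, freeze=False)
        self.rel = nn.Embedding.from_pretrained(relation_init, freeze=False)

    def score(self, h, r, t):
        # TransE score: smaller ||h + r - t|| means a more plausible triple.
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

# Usage on a toy graph: extract LM-based vectors, then train the KGE model on top.
entities = ["Paris", "France"]
relations = ["capital of"]
ent_init = torch.cat([encode_text(e) for e in entities])   # shape (2, 768)
rel_init = torch.cat([encode_text(r) for r in relations])  # shape (1, 768)
kge = TransE(ent_init, rel_init)
print(kge.score(torch.tensor([0]), torch.tensor([0]), torch.tensor([1])))
```

In this reading, the pretrained-model vectors act only as a better starting point; the downstream KGE objective (here a standard translational score, trained with the usual negative-sampling loss) is unchanged, which is what makes the framework applicable to different KGE models.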
