Are Negative Samples Necessary in Entity Alignment? An Approach with High Performance, Scalability and Robustness

2021-08-11

Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan


Abstract

Entity alignment (EA) aims to find the equivalent entities in different KGs, which is a crucial step in integrating multiple KGs. However, most existing EA methods have poor scalability and are unable to cope with large-scale datasets. We summarize three issues leading to such high time-space complexity in existing EA methods: (1) Inefficient graph encoders, (2) Dilemma of negative sampling, and (3) "Catastrophic forgetting" in semi-supervised learning. To address these challenges, we propose a novel EA method with three new components to enable high Performance, high Scalability, and high Robustness (PSR): (1) Simplified graph encoder with relational graph sampling, (2) Symmetric negative-free alignment loss, and (3) Incremental semi-supervised learning. Furthermore, we conduct detailed experiments on several public datasets to examine the effectiveness and efficiency of our proposed method. The experimental results show that PSR not only surpasses the previous SOTA in performance but also has impressive scalability and robustness.
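The second component, the symmetric negative-free alignment loss, pulls pre-aligned entity pairs together without sampling negative pairs at all. The sketch below is an illustrative reading of that idea, not the paper's exact loss: it averages a per-pair cosine distance over both directions of the alignment (the function name and the symmetric averaging are assumptions for illustration; in a real training framework one side of each direction would typically be detached from the gradient).

```python
import numpy as np

def symmetric_alignment_loss(E1, E2):
    """Hypothetical sketch of a negative-sample-free alignment loss.
    E1[i] and E2[i] are embeddings of a pre-aligned entity pair from
    two KGs; the loss pulls each pair together via cosine distance,
    applied symmetrically. Illustrative only -- not the paper's exact loss."""
    def cos_dist(A, B):
        # row-normalize, then 1 - cosine similarity per aligned pair
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return 1.0 - np.sum(A * B, axis=1)
    # average both directions; the symmetry matters once a stop-gradient
    # is applied to one side of each term during actual training
    return 0.5 * (cos_dist(E1, E2) + cos_dist(E2, E1)).mean()

# No negatives anywhere: identical embeddings give (near-)zero loss,
# and nothing pushes non-aligned entities apart in this term alone.
E = np.random.default_rng(0).normal(size=(15, 8))
loss_same = symmetric_alignment_loss(E, E)
loss_shifted = symmetric_alignment_loss(E, E + 1.0)
print(loss_same, loss_shifted)
```

Note that with only positive pairs, collapse to a constant embedding is a known risk; negative-free methods avoid it through architectural asymmetries (such as stop-gradient), which is why the directionality of the two terms above is kept explicit.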

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DBP15k fr-en | PSR | Hits@10 | 0.96 | — | Unverified |
| DBP15k ja-en | PSR | Hits@10 | 0.91 | — | Unverified |
| DBP15k zh-en | PSR | Hits@10 | 0.88 | — | Unverified |
