LightEA: A Scalable, Robust, and Interpretable Entity Alignment Framework via Three-view Label Propagation
Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan
Code
- github.com/THU-KEG/Entity_Alignment_Papers (official, in paper, TensorFlow, ★ 574)
- github.com/maoxinn/lightea (official, in paper, TensorFlow, ★ 9)
Abstract
Entity Alignment (EA) aims to find equivalent entity pairs between knowledge graphs (KGs), which is the core step in bridging and integrating multi-source KGs. In this paper, we argue that existing GNN-based EA methods inherit inborn defects from their neural network lineage: weak scalability and poor interpretability. Inspired by recent studies, we reinvent the Label Propagation algorithm to run effectively on KGs and propose a non-neural EA framework, LightEA, consisting of three efficient components: (i) Random Orthogonal Label Generation, (ii) Three-view Label Propagation, and (iii) Sparse Sinkhorn Iteration. Extensive experiments on public datasets show that LightEA has impressive scalability, robustness, and interpretability. With a mere tenth of the time consumption, LightEA achieves results comparable to state-of-the-art methods across all datasets and even surpasses them on many.
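Of the three components named above, Sinkhorn iteration is the best-known: it turns an entity-to-entity similarity matrix into an approximately doubly-stochastic soft assignment by alternating row and column normalization. The sketch below illustrates the general (dense) technique; the function name, temperature, and iteration count are illustrative assumptions, not the paper's implementation, which additionally sparsifies the matrix for scalability.

```python
import numpy as np

def sinkhorn(sim, n_iter=50, temperature=0.05):
    """Approximate doubly-stochastic normalization of a similarity matrix.

    sim: (n, n) entity-similarity matrix.
    Returns a matrix whose rows and columns each sum to ~1, interpretable
    as a soft one-to-one alignment between the two entity sets.
    """
    # Exponentiate with a temperature to sharpen the assignment.
    m = np.exp(sim / temperature)
    for _ in range(n_iter):
        m = m / m.sum(axis=1, keepdims=True)  # normalize rows
        m = m / m.sum(axis=0, keepdims=True)  # normalize columns
    return m

# Toy usage on a random similarity matrix.
rng = np.random.default_rng(0)
sim = rng.random((4, 4))
p = sinkhorn(sim)
```

After convergence, taking the argmax of each row of `p` yields a near one-to-one entity matching; lowering the temperature pushes the soft assignment toward a hard permutation.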
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DBP1M DE-EN | LightEA-I | Hits@1 | 0.29 | — | Unverified |
| DBP1M DE-EN | LightEA-B | Hits@1 | 0.26 | — | Unverified |
| DBP1M FR-EN | LightEA | Hits@1 | 0.29 | — | Unverified |