PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion
Jianhao Shen, Chenguang Wang, Ye Yuan, Jiawei Han, Heng Ji, Koushik Sen, Ming Zhang, Dawn Song
Code: https://github.com/yuanyehome/PALT (official PyTorch implementation)
Abstract
This paper presents a parameter-lite transfer learning approach for pretrained language models (LMs) applied to knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we tune only a few new parameters while keeping the original LM parameters fixed. We achieve this by reformulating KG completion as a "fill-in-the-blank" task and introducing a parameter-lite encoder on top of the original LM. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and are competitive with prior state-of-the-art approaches. For instance, we outperform fully finetuned approaches on a KG completion benchmark while tuning only 1% of the parameters. The code and datasets are available at https://github.com/yuanyehome/PALT.
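To make the setup concrete, below is a minimal sketch of the general recipe the abstract describes: freeze a pretrained masked LM, add a small trainable encoder on top of its hidden states, and score KG triples as a "fill-in-the-blank" prompt. This is not the authors' exact PALT architecture; the model choice (`bert-base-uncased`), the `LiteEncoder` bottleneck design, and all dimensions are illustrative assumptions.

```python
# Sketch: parameter-lite transfer of a frozen masked LM for KG completion.
# Assumptions: a BERT-style masked LM from HuggingFace transformers; the
# LiteEncoder below is an illustrative stand-in for the paper's encoder.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Keep all original LM parameters fixed.
for p in lm.parameters():
    p.requires_grad = False


class LiteEncoder(nn.Module):
    """A small residual bottleneck over the frozen LM's hidden states
    (illustrative; not the paper's exact parameter-lite encoder)."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))


lite_encoder = LiteEncoder(lm.config.hidden_size)

# Reformulate a KG triple (head, relation, ?) as a fill-in-the-blank prompt.
prompt = f"Paris is the capital of {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    hidden_states = lm.bert(**inputs).last_hidden_state  # frozen backbone

# Only the lite encoder transforms the representations; the frozen MLM head
# then scores candidate entities for the masked position.
adapted = lite_encoder(hidden_states)
logits = lm.cls(adapted)

mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```

In training, only the new parameters would be updated, e.g. `torch.optim.AdamW(lite_encoder.parameters(), lr=1e-4)`, while the LM backbone stays frozen, which is what keeps the tuned parameter count small relative to full finetuning.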
Tasks
- Knowledge Graph Completion
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| FB15k-237 | PALT | Hits@10 | 0.44 | — | Unverified |
| UMLS | PALT | Hits@10 | 0.99 | — | Unverified |
| WN18RR | PALT | Hits@10 | 0.69 | — | Unverified |
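For reference, Hits@10 is the fraction of test triples whose correct entity is ranked within the top 10 candidates. A minimal sketch of the computation (function and variable names are illustrative, not from the paper's codebase):

```python
def hits_at_k(ranks, k=10):
    """Fraction of test triples whose true entity has 1-based rank <= k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Example: ranks of the correct tail entity for five test triples.
print(hits_at_k([1, 4, 12, 2, 37], k=10))  # 0.6
```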