EntQA: Entity Linking as Question Answering
Wenzheng Zhang, Wenyue Hua, Karl Stratos
Code
- github.com/wenzhengzhang/entqa — official implementation, referenced in the paper (PyTorch), ★ 65
- github.com/epfl-dlab/multilingual-entity-insertion — PyTorch, ★ 0
Abstract
A conventional approach to entity linking is to first find mentions in a given document and then infer their underlying entities in the knowledge base. A well-known limitation of this approach is that it requires finding mentions without knowing their entities, which is unnatural and difficult. We present EntQA (Entity linking as Question Answering), a new model that does not suffer from this limitation. EntQA first proposes candidate entities with a fast retrieval module, and then scrutinizes the document to find mentions of each candidate with a powerful reader module. Our approach combines progress in entity linking with that in open-domain question answering and capitalizes on pretrained models for dense entity retrieval and reading comprehension. Unlike previous works, we do not rely on a mention-candidates dictionary or large-scale weak supervision. EntQA achieves strong results on the GERBIL benchmarking platform.
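The two-stage design described in the abstract — a retriever that proposes candidate entities for the whole document, followed by a reader that locates mention spans for each candidate — can be illustrated with a toy sketch. All names, the bag-of-words "encoder", and the exact-match span logic below are invented stand-ins for the paper's dense encoders and reading-comprehension reader; this is only a shape-of-the-pipeline illustration, not the actual method.

```python
from collections import Counter

def embed(text):
    # Stand-in for a dense encoder: sparse bag-of-words counts.
    return Counter(text.lower().split())

def score(a, b):
    # Dot product of two sparse bag-of-words vectors.
    return sum(a[t] * b.get(t, 0) for t in a)

def retrieve(document, entity_descriptions, k=2):
    """Stage 1 (hypothetical): rank knowledge-base entities against the
    whole document and return the top-k candidates."""
    d = embed(document)
    ranked = sorted(entity_descriptions,
                    key=lambda e: score(d, embed(entity_descriptions[e])),
                    reverse=True)
    return ranked[:k]

def read(document, entity):
    """Stage 2 (hypothetical): treat the candidate entity as a 'question'
    and return (start, end) token spans mentioning it. Here: trivial
    exact-match on the entity's first name token."""
    tokens = document.split()
    name = entity.split()[0].lower()
    return [(i, i + 1) for i, t in enumerate(tokens)
            if t.lower().strip(".,") == name]

# Toy knowledge base of entity descriptions (invented).
kb = {
    "Paris (city)": "Paris capital city of France",
    "Paris Hilton": "Paris Hilton American media personality",
    "Berlin (city)": "Berlin capital city of Germany",
}
doc = "Paris is the capital of France ."
candidates = retrieve(doc, kb, k=1)          # → ["Paris (city)"]
links = {e: read(doc, e) for e in candidates}  # → {"Paris (city)": [(0, 1)]}
```

Note how the retriever never has to commit to mention boundaries; the reader only searches for mentions of entities the retriever already proposed, which is the ordering inversion EntQA argues for.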
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| AIDA-CoNLL | EntQA (Zhang et al., 2021) | Micro-F1 (strong matching) | 85.8 | — | Unverified |