Improving Biomedical Pretrained Language Models with Knowledge

2021-04-21 · NAACL (BioNLP) 2021 · Code Available

Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, Fei Huang

Abstract

Pretrained language models have shown success in many natural language processing tasks. Many works explore incorporating knowledge into language models. In the biomedical domain, experts have spent decades building large-scale knowledge bases. For example, the Unified Medical Language System (UMLS) contains millions of entities with their synonyms and defines hundreds of relations among entities. Leveraging this knowledge can benefit a variety of downstream tasks such as named entity recognition and relation extraction. To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge base. Specifically, we extract entities from PubMed abstracts and link them to UMLS. We then train a knowledge-aware language model that first applies a text-only encoding layer to learn entity representations, and then a text-entity fusion encoding layer to aggregate them. In addition, we add two training objectives: entity detection and entity linking. Experiments on named entity recognition and relation extraction tasks from the BLURB benchmark demonstrate the effectiveness of our approach. Further analysis on a collected probing dataset shows that our model better captures medical knowledge.
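To make the two-stage design concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: a text-only encoder, per-token entity detection and entity linking heads, and a fusion encoder that mixes linked entity embeddings back into the token representations. This is an illustrative reconstruction, not the authors' released code; every module name, dimension, and the soft-linking step are assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeAwareLM(nn.Module):
    """Sketch of a KeBioLM-style model (illustrative, not the authors' code):
    text-only encoding -> entity detection/linking -> text-entity fusion."""

    def __init__(self, vocab_size=30522, num_entities=1000, dim=256,
                 heads=4, text_layers=2, fusion_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        # nn.TransformerEncoder deep-copies `layer`, so reusing it is safe.
        self.text_encoder = nn.TransformerEncoder(layer, text_layers)
        self.fusion_encoder = nn.TransformerEncoder(layer, fusion_layers)
        self.entity_emb = nn.Embedding(num_entities, dim)  # stand-in for UMLS entity table
        self.detect_head = nn.Linear(dim, 3)               # BIO tags for entity detection

    def forward(self, token_ids):
        h = self.text_encoder(self.tok_emb(token_ids))     # text-only encoding layer
        detect_logits = self.detect_head(h)                # objective 1: entity detection
        link_scores = h @ self.entity_emb.weight.T         # objective 2: entity linking
        # Soft entity lookup: mix linked entity embeddings into the token
        # states, then pass the result through the fusion encoder.
        ent = link_scores.softmax(dim=-1) @ self.entity_emb.weight
        fused = self.fusion_encoder(h + ent)               # text-entity fusion layer
        return fused, detect_logits, link_scores

model = KnowledgeAwareLM()
tokens = torch.randint(0, 30522, (2, 16))                  # dummy batch of token ids
fused, detect_logits, link_scores = model(tokens)
print(fused.shape, detect_logits.shape, link_scores.shape)
# torch.Size([2, 16, 256]) torch.Size([2, 16, 3]) torch.Size([2, 16, 1000])
```

In training, masked language modeling would presumably be combined with cross-entropy losses on `detect_logits` and `link_scores`; those losses are omitted here for brevity.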

Tasks

Named Entity Recognition · Relation Extraction

Benchmark Results

| Dataset         | Model   | Metric | Claimed | Verified | Status     |
|-----------------|---------|--------|---------|----------|------------|
| BC2GM           | KeBioLM | F1     | 85.1    | –        | Unverified |
| BC5CDR-chemical | KeBioLM | F1     | 93.3    | –        | Unverified |
| BC5CDR-disease  | KeBioLM | F1     | 86.1    | –        | Unverified |
| JNLPBA          | KeBioLM | F1     | 82.0    | –        | Unverified |
| NCBI Disease    | KeBioLM | F1     | 89.1    | –        | Unverified |
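The claimed numbers above are entity-level F1 scores, as is typical for BLURB's NER datasets: a predicted entity counts only if both its span and type exactly match the gold annotation. A minimal sketch of computing such a score with the seqeval library (the tag sequences below are made-up examples):

```python
from seqeval.metrics import f1_score

# Made-up gold and predicted BIO tag sequences for two sentences.
gold = [["B-Chemical", "I-Chemical", "O"], ["B-Disease", "O", "O"]]
pred = [["B-Chemical", "I-Chemical", "O"], ["O", "O", "O"]]

# One of two gold entities is recovered exactly: precision 1.0, recall 0.5.
print(f1_score(gold, pred))  # ≈ 0.667
```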

Reproductions

No reproductions have been submitted yet.