
Incorporating Word Sense Disambiguation in Neural Language Models

2021-06-15 · Code Available

Jan Philip Wahle, Terry Ruas, Norman Meuschke, Bela Gipp


Abstract

We present two supervised (pre-)training methods to incorporate gloss definitions from lexical resources into neural language models (LMs). The training not only improves our models' performance for Word Sense Disambiguation (WSD) but also benefits general language understanding tasks, while adding almost no parameters. We evaluate our techniques with seven different neural LMs and find that XLNet is more suitable for WSD than BERT. Our best-performing methods exceed state-of-the-art WSD techniques on the SemCor 3.0 dataset by 0.5% F1 and increase BERT's performance on the GLUE benchmark by 1.1% on average.
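
The abstract frames WSD as relating a word's context to gloss definitions from a lexical resource. Below is a minimal, hypothetical sketch of one common way to realize this idea: scoring (context, gloss) pairs with an encoder LM as sentence-pair classification. The model name, example glosses, and binary-relevance framing are illustrative assumptions, not the authors' released implementation, and the classifier head would need supervised fine-tuning on sense-annotated data (e.g., SemCor) before its scores are meaningful.

```python
# Hypothetical sketch: gloss-based WSD as sentence-pair classification.
# Not the paper's exact training objective; model name and glosses are
# illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption: any encoder LM could be used

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Two labels: the gloss does / does not match the target word's sense
# in the given context.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2
)

context = "He sat on the bank of the river and watched the water."
# Candidate WordNet-style glosses for "bank" (shortened, illustrative).
glosses = [
    "sloping land beside a body of water",
    "a financial institution that accepts deposits",
]

# Encode each (context, gloss) pair in one batch and score the fit.
inputs = tokenizer(
    [context] * len(glosses), glosses,
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits       # shape: (num_glosses, 2)
scores = logits.softmax(dim=-1)[:, 1]     # P(gloss matches context)

best = int(scores.argmax())
print(f"Predicted sense: {glosses[best]!r} (score={scores[best]:.3f})")
```

At inference time, the predicted sense is simply the gloss with the highest match probability; swapping BERT for XLNet in this setup only requires changing `MODEL_NAME`, which mirrors the paper's comparison across seven LMs.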
