
Training Word Sense Embeddings With Lexicon-based Regularization

2017-11-01 · IJCNLP 2017

Luis Nieto-Piña, Richard Johansson


Abstract

We propose to improve word sense embeddings by enriching an automatic corpus-based method with lexicographic data. Information from a lexicon is introduced into the learning algorithm's objective function through a regularizer. The incorporation of lexicographic data yields embeddings that are able to reflect expert-defined word senses, while retaining the robustness, high quality, and coverage of automatic corpus-based methods. These properties are observed in a manual inspection of the semantic clusters that different degrees of regularizer strength create in the vector space. Moreover, we evaluate the sense embeddings in two downstream applications, word sense disambiguation and semantic frame prediction, where they outperform simpler approaches. Our results show that a corpus-based model balanced with lexicographic data learns better representations and improves performance in downstream tasks.
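The abstract describes combining a corpus-based objective with a lexicon-derived regularization term. As a rough illustration of this general idea (the exact penalty form, weighting, and names below are assumptions, not taken from the paper), one can add a term that pulls each sense vector toward the vectors of its lexicon-defined related senses:

```python
import numpy as np

def lexicon_regularized_loss(sense_vecs, corpus_loss, lexicon_neighbors, lam=0.1):
    """Toy combined objective: corpus loss plus a lexicon regularizer.

    The regularizer pulls each sense vector toward the mean of the
    vectors of its lexicon neighbours (e.g. synonyms or related senses
    listed in a lexical resource). This squared-distance penalty is an
    illustrative choice, not the paper's actual formulation.
    """
    reg = 0.0
    for sense, neighbors in lexicon_neighbors.items():
        target = np.mean([sense_vecs[n] for n in neighbors], axis=0)
        reg += np.sum((sense_vecs[sense] - target) ** 2)
    # lam balances corpus evidence against lexicographic structure:
    # lam = 0 recovers the purely corpus-based objective.
    return corpus_loss + lam * reg

# Hypothetical two-dimensional sense vectors for demonstration.
vecs = {
    "bank_1": np.array([0.0, 0.0]),   # financial-institution sense
    "money":  np.array([1.0, 0.0]),   # lexicon neighbour of bank_1
}
neighbors = {"bank_1": ["money"]}
total = lexicon_regularized_loss(vecs, corpus_loss=2.0,
                                 lexicon_neighbors=neighbors, lam=0.5)
# squared distance is 1.0, so total = 2.0 + 0.5 * 1.0 = 2.5
```

Varying `lam` corresponds to the "different degrees of regularizer strength" inspected in the paper: larger values tighten the expert-defined semantic clusters, while smaller values let corpus statistics dominate.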
