
Def2Vec: Extensible Word Embeddings from Dictionary Definitions

2023-12-16 · ICNLSP 2023 · Code Available

Irene Morazzoni, Vincenzo Scotti, Roberto Tedesco


Abstract

Def2Vec introduces a novel paradigm for word embeddings, leveraging dictionary definitions to learn semantic representations. By constructing term-document matrices from definitions and applying Latent Semantic Analysis (LSA), Def2Vec generates embeddings that offer both strong performance and extensibility. In evaluations encompassing Part-of-Speech tagging, Named Entity Recognition, chunking, and semantic similarity, Def2Vec often matches or surpasses state-of-the-art models like Word2Vec, GloVe, and fastText. The second factor matrix produced by the LSA decomposition enables efficient embedding extension for out-of-vocabulary words. By effectively reconciling the advantages of dictionary definitions with LSA-based embeddings, Def2Vec yields informative semantic representations, especially considering its reduced data requirements. This paper advances the understanding of word embedding generation by incorporating structured lexical information and efficient embedding extension.
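The abstract's core mechanism can be illustrated with a minimal sketch: build a term-document matrix where each "document" is a word's definition, factorize it with truncated SVD (the linear-algebra core of LSA), and fold a new definition into the learned space to embed an out-of-vocabulary word. The toy dictionary, variable names, and rank choice below are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

# Toy dictionary: each "document" is a word's definition (assumed example data).
definitions = {
    "cat": "small domesticated feline animal",
    "dog": "domesticated canine animal",
    "car": "wheeled motor vehicle",
}

# Term-document count matrix: rows are definition terms, columns are definitions.
vocab = sorted({t for text in definitions.values() for t in text.split()})
t_index = {t: i for i, t in enumerate(vocab)}
M = np.zeros((len(vocab), len(definitions)))
for j, text in enumerate(definitions.values()):
    for t in text.split():
        M[t_index[t], j] += 1.0

# LSA via truncated SVD: M ~ U_k @ diag(S_k) @ Vt_k.
k = 2
U, S, Vt = np.linalg.svd(M, full_matrices=False)
U_k, S_k, Vt_k = U[:, :k], S[:k], Vt[:k, :]

# Embedding of each defined word: the corresponding column of diag(S_k) @ Vt_k.
embeddings = (np.diag(S_k) @ Vt_k).T  # shape: (num_defined_words, k)

def embed_definition(text):
    """Fold a new definition into the LSA space to embed an OOV word.

    Standard LSA folding-in: project the definition's term vector d with the
    term-side factor, U_k.T @ d (equivalent to S_k * (S_k^-1 U_k.T d) in the
    diag(S_k) @ Vt_k embedding convention used above).
    """
    d = np.zeros(len(vocab))
    for t in text.split():
        if t in t_index:  # terms unseen in any definition are dropped
            d[t_index[t]] += 1.0
    return U_k.T @ d
```

Folding in a definition already in the dictionary reproduces that word's embedding exactly, since `U_k.T @ M` equals `diag(S_k) @ Vt_k` for orthonormal `U`; for a genuinely new definition, the same projection gives a compatible embedding without refactorizing the matrix.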
