Overcoming Poor Word Embeddings with Word Definitions

2021-03-05 · Joint Conference on Lexical and Computational Semantics

Christopher Malon

Abstract

Modern natural language understanding models depend on pretrained subword embeddings, but applications may need to reason about words that were never or rarely seen during pretraining. We show that examples that depend critically on a rarer word are more challenging for natural language inference models. Then we explore how a model could learn to use definitions, provided in natural text, to overcome this handicap. Our model's understanding of a definition is usually weaker than a well-modeled word embedding, but it recovers most of the performance gap from using a completely untrained word.
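
The abstract does not specify how a definition "provided in natural text" is fed to the model, but one simple way to illustrate the idea is to prepend the definition to the premise of an NLI pair and score entailment with an off-the-shelf MNLI-finetuned model. The sketch below does exactly that; the model name (roberta-large-mnli), the example sentences, and the prepend-to-premise strategy are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (not the paper's implementation): give an NLI model a
# natural-text definition of a rare word by prepending it to the premise.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # assumption: any MNLI-finetuned model works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hypothetical example with a rare word the model may not represent well.
definition = "A quokka is a small wallaby native to Western Australia."
premise = "The tourist photographed a quokka on Rottnest Island."
hypothesis = "The tourist photographed a marsupial."

# Prepend the definition so the model can read it as part of the premise.
inputs = tokenizer(definition + " " + premise, hypothesis,
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Print the probability of each NLI label (contradiction/neutral/entailment).
probs = logits.softmax(dim=-1).squeeze()
for i, p in enumerate(probs):
    print(f"{model.config.id2label[i]}: {p:.3f}")
```

In this framing the definition competes with the premise for the model's attention rather than being baked into an embedding, which is consistent with the abstract's observation that a model's understanding of a definition is usually weaker than a well-modeled word embedding.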
