
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

2018-08-31 · WS 2018

Jaap Jumelet, Dieuwke Hupkes


Abstract

In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
