Negation, Coordination, and Quantifiers in Contextualized Language Models

2022-09-16 · COLING 2022

Aikaterini-Lida Kalouli, Rita Sevastjanova, Christin Beck, Maribel Romero

Abstract

With the success of contextualized language models, much research explores what these models really learn and in which cases they still fail. Most of this work focuses on specific NLP tasks and on the learning outcome. Little research has attempted to decouple the models' weaknesses from specific tasks and to focus on the embeddings per se and their mode of learning. In this paper, we take up this research opportunity: based on theoretical linguistic insights, we explore whether the semantic constraints of function words are learned and how the surrounding context impacts their embeddings. We create suitable datasets, provide new insights into the inner workings of LMs vis-à-vis function words, and implement a supporting visual web interface for qualitative analysis.
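
The abstract's central question — whether the embedding of a function word such as *not* shifts with its surrounding context — can be illustrated with a short probe. The following is a minimal sketch, not the paper's implementation: the `bert-base-uncased` checkpoint and the example sentences are assumptions chosen purely for illustration.

```python
# Sketch: compare the contextual embedding of a function word ("not")
# across two different sentence contexts. Uses Hugging Face transformers
# and a standard BERT checkpoint; all specifics are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embedding_of(sentence: str, target: str) -> torch.Tensor:
    """Return the last-layer embedding of the first occurrence of `target`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(target)]

# If "not" had a context-independent representation, the similarity
# would be 1.0; contextualized models typically yield something lower.
a = embedding_of("the movie was not good", "not")
b = embedding_of("this is not a pipe", "not")
print(torch.cosine_similarity(a, b, dim=0).item())
```

Scaling such pairwise comparisons over controlled datasets of negation, coordination, and quantifier contexts is the kind of analysis the paper pursues, though its actual datasets and pipeline differ.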
