
Controlling the Imprint of Passivization and Negation in Contextualized Representations

2020-11-01 · EMNLP (BlackboxNLP) 2020

Hande Celikkanat, Sami Virpioja, Jörg Tiedemann, Marianna Apidianaki


Abstract

Contextualized word representations encode rich information about syntax and semantics, alongside specificities of each context of use. While contextual variation does not always reflect actual meaning shifts, it can still reduce the similarity of embeddings for word instances having the same meaning. We explore the imprint of two specific linguistic alternations, namely passivization and negation, on the representations generated by neural models trained with two different objectives: masked language modeling and translation. Our exploration methodology is inspired by an approach previously proposed for removing societal biases from word vectors. We show that passivization and negation leave their traces on the representations, and that neutralizing this information leads to more similar embeddings for words that should preserve their meaning in the transformation. We also find clear differences in how the respective features generalize across datasets.
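The neutralization step the abstract describes is inspired by the hard-debiasing approach of Bolukbasi et al. for removing societal bias directions from word vectors: estimate a direction encoding the feature (here, passivization or negation) and project it out of each embedding. A minimal NumPy sketch of that idea, assuming access to paired embeddings of the same word in, e.g., active vs. passive contexts (the `feature_direction` and `neutralize` helpers are illustrative names, not the paper's code):

```python
import numpy as np

def feature_direction(pairs):
    """Estimate a unit-norm feature direction (e.g., passivization)
    as the mean of difference vectors between paired embeddings of
    the same word in the two constructions."""
    diffs = np.stack([a - b for a, b in pairs])
    d = diffs.mean(axis=0)
    return d / np.linalg.norm(d)

def neutralize(v, direction):
    """Remove the component of v along the feature direction,
    leaving the rest of the embedding untouched."""
    d = direction / np.linalg.norm(direction)
    return v - np.dot(v, d) * d

# Toy example: embeddings in passive contexts are shifted by a
# fixed offset relative to their active counterparts.
rng = np.random.default_rng(0)
offset = np.array([1.0, 0.0, 0.0, 0.0])
actives = rng.normal(size=(5, 4))
pairs = [(a + offset, a) for a in actives]

d = feature_direction(pairs)
v = np.array([2.0, 3.0, 0.0, 0.0])
v_neutral = neutralize(v, d)  # component along the direction is gone
```

After neutralization, embeddings of word instances that keep their meaning across the alternation become more similar, since the component encoding the alternation itself has been projected out.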
