Implicit representations of event properties within contextual language models: Searching for “causativity neurons”

2021-06-01 · IWCS (ACL) 2021

Esther Seyffarth, Younes Samih, Laura Kallmeyer, Hassan Sajjad


Abstract

This paper addresses the question of to what extent neural contextual language models such as BERT implicitly represent complex semantic properties. Concretely, it shows that the neuron activations obtained from processing an English sentence provide discriminative features from which a simple linear classifier can predict the (non-)causativity of the event denoted by the verb. A layer-wise analysis reveals that the relevant properties are mostly learned in the higher layers. Further experiments show that approximately 10% of the neuron activations already suffice to predict causativity with relatively high accuracy.
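The probing setup the abstract describes can be sketched as a linear classifier trained on per-neuron activations, with the most discriminative neurons then selected by probe weight magnitude. The snippet below is a minimal illustration only: it uses synthetic activations in place of real BERT hidden states, and all dimensions, variable names, and the weight-ranking heuristic are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for per-sentence BERT activations:
# 200 sentences x 768 neurons, with a small subset of neurons
# carrying the (binary) causativity signal y.
n_samples, n_neurons, n_informative = 200, 768, 20
y = rng.integers(0, 2, size=n_samples)
X = rng.normal(size=(n_samples, n_neurons))
X[:, :n_informative] += 2.0 * y[:, None]  # inject signal into a few neurons

# Step 1: train a simple linear probe on all neurons.
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: rank neurons by absolute probe weight and keep the top 10%.
k = int(0.10 * n_neurons)
top_neurons = np.argsort(-np.abs(probe.coef_[0]))[:k]

# Step 3: retrain a probe restricted to the selected neuron subset.
probe_small = LogisticRegression(max_iter=1000).fit(X[:, top_neurons], y)

print(f"full-probe accuracy:    {probe.score(X, y):.2f}")
print(f"top-10% probe accuracy: {probe_small.score(X[:, top_neurons], y):.2f}")
```

In a real replication, `X` would come from a contextual model's hidden states at a chosen layer (enabling the layer-wise comparison the abstract mentions), and accuracy would be measured on held-out sentences rather than the training set.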
