Human Sentence Processing: Recurrence or Attention?

2020-05-19 · NAACL (CMCL) 2021 · Code Available

Danny Merkx, Stefan L. Frank

Abstract

Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models' ability to account for measures of human reading effort. Our analysis shows Transformers to outperform RNNs in explaining self-paced reading times and neural activity during reading of English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing, and providing evidence for cue-based retrieval.

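The comparison the abstract describes is typically operationalized by regressing human reading-effort measures on per-word language-model surprisal and asking how much each model's surprisal improves the fit over a baseline regression. The sketch below illustrates that evaluation logic on simulated data; the variable names, toy numbers, and use of ordinary least squares are illustrative assumptions, not necessarily the authors' exact pipeline.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_words = 500

# Toy per-word predictors: log frequency, word length, and simulated
# surprisal values standing in for an RNN and a Transformer language model.
log_freq = rng.normal(10.0, 2.0, n_words)
length = rng.integers(2, 12, n_words).astype(float)
surprisal_rnn = rng.gamma(2.0, 2.0, n_words)
surprisal_transformer = rng.gamma(2.0, 2.0, n_words)

# Simulated self-paced reading times (ms), driven here by the
# "Transformer" surprisal so the comparison has a known answer.
rt = (300.0 - 5.0 * log_freq + 8.0 * length
      + 6.0 * surprisal_transformer + rng.normal(0.0, 30.0, n_words))

def fit_loglik(*predictors):
    # OLS regression of reading time on the given predictors;
    # returns the fitted model's log-likelihood.
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(rt, X).fit().llf

# Baseline regression with word-level covariates only.
ll_base = fit_loglik(log_freq, length)

# A language model "accounts for" reading effort to the extent that
# adding its surprisal improves the fit over the baseline.
for name, surp in [("RNN", surprisal_rnn),
                   ("Transformer", surprisal_transformer)]:
    gain = fit_loglik(log_freq, length, surp) - ll_base
    print(f"{name}: delta log-likelihood = {gain:.1f}")

On this toy data the "Transformer" surprisal yields the larger log-likelihood gain by construction; in the actual study, that kind of gain on real self-paced reading and neural data is what the headline result refers to.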