
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics

2024-04-30

James A. Michaelov, Catherine Arnett, Benjamin K. Bergen


Abstract

Transformers have generally supplanted recurrent neural networks as the dominant architecture both for natural language processing tasks and for modeling the effect of predictability on online human language comprehension. However, two recently developed recurrent model architectures, RWKV and Mamba, appear to perform natural language tasks comparably to or better than transformers of equivalent scale. In this paper, we show that contemporary recurrent models are now also able to match, and in some cases exceed, the performance of comparably sized transformers at modeling online human language comprehension. This suggests that transformer language models are not uniquely suited to this task, and opens up new directions for debates about the extent to which architectural features of language models make them better or worse models of human language comprehension.
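
The comparison described in the abstract rests on the standard predictability pipeline: a language model assigns each word a surprisal (its negative log probability given the preceding context), and those surprisals are then related to human comprehension measures such as reading times or N400 amplitude. The sketch below shows one conventional way to compute per-token surprisal with the Hugging Face transformers library; the model name "gpt2" and the example sentence are illustrative assumptions, not the models or stimuli used in the paper.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# "gpt2" is a stand-in; the paper compares transformer, RWKV, and Mamba
# language models of comparable size (this sketch is not the authors' code).
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal-in-bits) pairs for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits              # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    out = []
    for i in range(1, ids.shape[1]):
        token_id = ids[0, i]
        # Surprisal = -log2 P(token_i | preceding tokens)
        nats = -log_probs[0, i - 1, token_id].item()
        out.append((tokenizer.decode(token_id), nats / math.log(2)))
    return out

print(token_surprisals("The children went outside to play."))
```

Per-word surprisals computed this way are typically entered as predictors in regression models of the human measures; the better the fit, the better the language model accounts for human predictability effects.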
