
Generalizations across filler-gap dependencies in neural language models

2024-10-23

Katherine Howitt, Sathvik Nair, Allison Dods, Robert Melvin Hopkins


Abstract

Humans develop their grammars by making structural generalizations from finite input. We ask how filler-gap dependencies, which share a structural generalization despite diverse surface forms, might arise from the input. We explicitly control the input to a neural language model (NLM) to uncover whether the model posits a shared representation for filler-gap dependencies. We show that while NLMs succeed in differentiating grammatical from ungrammatical filler-gap dependencies, they rely on superficial properties of the input rather than on a shared generalization. Our work highlights the need for specific linguistic inductive biases to model language acquisition.
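A common way this kind of grammaticality contrast is quantified in the filler-gap literature is a 2×2 "wh-licensing interaction" over model surprisal: a filler (e.g. "what") should make a downstream gap less surprising, and a gap should be penalized when no filler is present. The sketch below illustrates only the arithmetic of that measure; the add-one-smoothed bigram model, the training sentences, and the test items are all invented stand-ins, not the paper's actual model, stimuli, or analysis.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over whitespace-tokenized sentences,
    padding each sentence with <s> ... </s> boundary tokens."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def surprisal(sentence, uni, bi, alpha=1.0):
    """Total surprisal in bits under an add-alpha smoothed bigram model."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    vocab = len(uni)
    total = 0.0
    for prev, word in zip(toks, toks[1:]):
        p = (bi[(prev, word)] + alpha) / (uni[prev] + alpha * vocab)
        total += -math.log2(p)
    return total

# Invented toy training data (for illustration only).
corpus = ["i know that the boy ate the cake yesterday",
          "i know what the boy ate yesterday"]
uni, bi = train_bigram(corpus)

# 2x2 design crossing filler presence with gap presence.
items = {("+filler", "+gap"): "i know what the boy ate yesterday",
         ("+filler", "-gap"): "i know what the boy ate the cake yesterday",
         ("-filler", "+gap"): "i know that the boy ate yesterday",
         ("-filler", "-gap"): "i know that the boy ate the cake yesterday"}
s = {cond: surprisal(sent, uni, bi) for cond, sent in items.items()}

# Licensing interaction: a negative value indicates that the filler and
# the gap make each other less surprising, i.e. the dependency is learned.
interaction = ((s[("+filler", "+gap")] - s[("-filler", "+gap")])
               - (s[("+filler", "-gap")] - s[("-filler", "-gap")]))
```

In a study like this one, the bigram model would be replaced by an NLM trained on a controlled corpus, and the interaction would be computed across many filler-gap constructions to test whether they pattern together.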
