Language models align with human judgments on key grammatical constructions

2024-01-19

Jennifer Hu, Kyle Mahowald, Gary Lupyan, Anna Ivanova, Roger Levy


Abstract

Do large language models (LLMs) make human-like linguistic generalizations? Dentella et al. (2023) ("DGL") prompt several LLMs ("Is the following sentence grammatically correct in English?") to elicit grammaticality judgments of 80 English sentences, concluding that LLMs demonstrate a "yes-response bias" and a "failure to distinguish grammatical from ungrammatical sentences". We re-evaluate LLM performance using well-established practices and find that DGL's data in fact provide evidence for just how well LLMs capture human behaviors. Models not only achieve high accuracy overall, but also capture fine-grained variation in human linguistic judgments.
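One well-established practice in this literature is to compare the probability a model assigns to a grammatical sentence against a minimally different ungrammatical counterpart, rather than relying solely on yes/no metalinguistic prompts. The sketch below illustrates that comparison; the model ("gpt2" via Hugging Face transformers) and the example minimal pair are illustrative assumptions, not the paper's actual models or stimuli.

```python
# Minimal sketch (not the authors' released code) of minimal-pair
# evaluation: compare the total log probability a causal LM assigns to a
# grammatical sentence vs. an ungrammatical variant. The model ("gpt2")
# and the example pair below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log likelihood over the predicted
    # (shifted) tokens, so multiply by their count to recover the total.
    return -out.loss.item() * (ids.size(1) - 1)

# Illustrative minimal pair (not from DGL's 80-sentence stimuli): the
# model should assign higher probability to the grammatical variant.
good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
print(sentence_logprob(good) > sentence_logprob(bad))  # expected: True
```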
