
Vocabulary shapes cross-lingual variation of word-order learnability in language models

2026-03-19

Jonas Mayer Martins, Jaap Jumelet, Viola Priesemann, Lisa Beinborn

Abstract

Why do some languages, like Czech, permit free word order, while others, like English, do not? We address this question by pretraining transformer language models on a spectrum of synthetic word-order variants of natural languages. We observe that greater word-order irregularity consistently raises model surprisal, indicating reduced learnability. Sentence reversal, however, affects learnability only weakly. A coarse distinction between free-word-order languages (e.g., Czech and Finnish) and fixed-word-order languages (e.g., English and French) does not explain the cross-lingual variation. Instead, the structure of the word and subword vocabulary strongly predicts model surprisal. Overall, vocabulary structure emerges as a key driver of computational word-order learnability across languages.
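The abstract uses model surprisal as the learnability metric: the average negative log-probability a trained model assigns to each token, where higher surprisal means the model predicts the data less well. A minimal sketch of the quantity (with made-up toy probabilities, not the paper's models or data):

```python
import math

def surprisal_bits(prob: float) -> float:
    """Surprisal of one token: -log2 p(token | context)."""
    return -math.log2(prob)

def mean_surprisal(token_probs):
    """Average per-token surprisal in bits over a sequence."""
    return sum(surprisal_bits(p) for p in token_probs) / len(token_probs)

# Hypothetical per-token probabilities from a model on two corpus variants.
# A more irregular word order makes tokens harder to predict, so the model
# assigns them lower probability and incurs higher surprisal.
probs_regular = [0.5, 0.25, 0.5, 0.25]            # predictable ordering
probs_scrambled = [0.125, 0.0625, 0.125, 0.0625]  # irregular ordering

print(mean_surprisal(probs_regular))    # 1.5 bits/token
print(mean_surprisal(probs_scrambled))  # 3.5 bits/token
```

In practice the probabilities come from a pretrained transformer's softmax output at each position; the toy numbers only illustrate why higher average surprisal is read as reduced learnability.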
