
Implicit In-Context Learning: Evidence from Artificial Language Experiments

2025-03-31

Xiaomeng Ma, Qihui Xu


Abstract

Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While LLMs demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at the inference level. We adapted three classic artificial language learning experiments spanning morphology, morphosyntax, and syntax to systematically evaluate implicit learning at the inference level in two state-of-the-art OpenAI models: gpt-4o and o3-mini. Our results reveal alignment between models and human behaviors that is specific to linguistic domain: o3-mini aligns better with humans in morphology, while both models align in syntax.
