
Do language models plan ahead for future tokens?

2024-04-01

Wilson Wu, John X. Morris, Lionel Levine


Abstract

Do transformers "think ahead" during inference at a given position? It is known that transformers prepare information in the hidden states of the forward pass at time step t that is then used in future forward passes t+τ. We posit two explanations for this phenomenon: pre-caching, in which off-diagonal gradient terms present during training result in the model computing features at t irrelevant to the present inference task but useful for the future, and breadcrumbs, in which the features most relevant to time step t are already the same as those that would most benefit inference at time t+τ. We test these hypotheses by training language models without propagating gradients to past timesteps, a scheme we formalize as myopic training. In a constructed synthetic data setting, we find clear evidence for pre-caching. In the autoregressive language modeling setting, our experiments are more suggestive of the breadcrumbs hypothesis, though pre-caching increases with model scale.
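
The key intervention described in the abstract, myopic training, removes the "off-diagonal" gradient terms so that the loss at position t cannot shape features computed at earlier positions. Below is a minimal PyTorch sketch of one way such a scheme could look for a single causal attention head; the function name and the detach-plus-diagonal-correction trick are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch only (not the paper's code): a "myopic" causal self-attention in which
# gradients from the loss at position t never reach keys/values computed at
# earlier positions t' < t. Each token still gets gradient through attention
# to itself via a zero-valued diagonal correction term.
import math
import torch

def myopic_causal_attention(q, k, v):
    """Single-head causal attention; backward pass sends key/value gradients
    only through the diagonal (a token attending to itself)."""
    T, d = q.shape
    causal = torch.tril(torch.ones(T, T, dtype=torch.bool, device=q.device))
    eye = torch.eye(T, device=q.device)

    # Scores: detached keys everywhere, then restore the gradient path on the
    # diagonal only. Forward values match ordinary causal attention exactly.
    scores_nograd = q @ k.detach().T / math.sqrt(d)
    scores_grad = q @ k.T / math.sqrt(d)
    scores = scores_nograd + eye * (scores_grad - scores_nograd)
    scores = scores.masked_fill(~causal, float("-inf"))
    attn = scores.softmax(dim=-1)

    # Output: detached values everywhere, plus a zero-valued correction that
    # carries gradient to each position's own value vector only.
    return attn @ v.detach() + (attn * eye) @ (v - v.detach())

if __name__ == "__main__":
    T, d = 8, 16
    q = torch.randn(T, d, requires_grad=True)
    k = torch.randn(T, d, requires_grad=True)
    v = torch.randn(T, d, requires_grad=True)

    # Loss at the last position only: under this scheme, earlier positions'
    # keys and values receive no gradient from it.
    loss = myopic_causal_attention(q, k, v)[-1].sum()
    loss.backward()
    print(k.grad[:-1].abs().max())  # 0: no gradient reaches past keys
    print(k.grad[-1].abs().max())   # nonzero: the diagonal path remains
```

Under this (assumed) construction, pre-caching is impossible: nothing in the training signal rewards position t for computing features that only help later positions, so any remaining "think ahead" behavior would have to come from breadcrumbs.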
