How Transformers Get Rich: Approximation and Dynamics Analysis

2024-10-15

Mingze Wang, Ruoxi Yu, Weinan E, Lei Wu

Abstract

Transformers have demonstrated exceptional in-context learning capabilities, yet the theoretical understanding of the underlying mechanisms remains limited. A recent work (Elhage et al., 2021) identified a "rich" in-context mechanism known as the induction head, contrasting with "lazy" n-gram models that overlook long-range dependencies. In this work, we provide both approximation and dynamics analyses of how transformers implement induction heads. In the approximation analysis, we formalize both standard and generalized induction head mechanisms, and examine how transformers can efficiently implement them, with an emphasis on the distinct role of each transformer submodule. For the dynamics analysis, we study the training dynamics on a synthetic mixed target, composed of a 4-gram and an in-context 2-gram component. This controlled setting allows us to precisely characterize the entire training process and uncover an abrupt transition from lazy (4-gram) to rich (induction head) mechanisms as training progresses.
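The in-context 2-gram rule that an induction head implements can be stated very simply: to predict the token following position t, find the most recent earlier occurrence of the current token and copy whatever token came after it (the pattern [A][B] ... [A] → predict [B]). The following minimal Python sketch is illustrative only, an assumption-laden paraphrase of that rule rather than anything from the paper itself; the function name and fallback behavior are our own choices.

```python
def induction_head_predict(tokens):
    """Illustrative induction-head rule: find the most recent earlier
    occurrence of the final token and copy the token that followed it.
    This mimics the in-context 2-gram component ([A][B] ... [A] -> [B]);
    the name and abstain-on-miss behavior are hypothetical choices."""
    current = tokens[-1]
    # Scan backwards over earlier positions for a previous match.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy the successor of the match
    return None  # no earlier occurrence: the rule abstains


# "the" last occurred at position 0 and was followed by "cat".
print(induction_head_predict(["the", "cat", "sat", "the"]))  # -> cat
```

A lazy 4-gram model, by contrast, would condition only on the last few tokens regardless of how far back a matching pattern appeared, which is exactly the long-range dependency the induction head captures.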
