
Intra-Layer Recurrence in Transformers for Language Modeling

2025-05-03 · Code Available

Anthony Nguyen, Wenjun Lin


Abstract

Transformer models have established new benchmarks in natural language processing; however, their increasing depth results in substantial growth in parameter counts. While existing recurrent transformer methods address this issue by reprocessing layers multiple times, they often apply recurrence indiscriminately across entire blocks of layers. In this work, we investigate Intra-Layer Recurrence (ILR), a more targeted approach that applies recurrence selectively to individual layers within a single forward pass. Our experiments show that allocating more iterations to earlier layers yields optimal results. These findings suggest that ILR offers a promising direction for optimizing recurrent structures in transformer architectures.
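The core idea of ILR — iterating individual layers a chosen number of times within one forward pass, with weights shared across a layer's iterations — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `ilr_forward`, the `reuse_map` parameter, and the toy residual blocks are all assumptions for demonstration.

```python
import numpy as np

def ilr_forward(x, layers, reuse_map):
    # Intra-Layer Recurrence: apply layer i `reuse_map[i]` times
    # (reusing its weights) before passing activations to layer i+1.
    for layer, n_iter in zip(layers, reuse_map):
        for _ in range(n_iter):
            x = layer(x)
    return x

# Toy residual blocks standing in for transformer layers.
rng = np.random.default_rng(0)
dim = 8
layers = []
for _ in range(4):
    W = rng.standard_normal((dim, dim)) * 0.1
    layers.append(lambda x, W=W: x + np.tanh(x @ W))

x = rng.standard_normal((2, dim))
# Allocate more iterations to earlier layers, the pattern the
# paper reports as yielding the best results.
out = ilr_forward(x, layers, reuse_map=[3, 2, 1, 1])
print(out.shape)  # (2, 8)
```

Note that with `reuse_map = [1, 1, 1, 1]` this reduces to a standard forward pass, so the recurrence schedule is a pure inference/training-time knob with no extra parameters.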
