SOTAVerified

Loop Neural Networks for Parameter Sharing

2024-09-21

Kei-Sing Ng, Qingchen Wang


Abstract

The success of large-scale language models like GPT can be attributed to their ability to efficiently predict the next token in a sequence. However, these models expend a constant amount of computation per token regardless of how difficult that token is to predict, lacking the capacity for iterative refinement. In this paper, we introduce a novel Loop Neural Network, which achieves better performance by using longer computation time without increasing the model size. Our approach revisits the input multiple times, refining the prediction by iteratively looping over a subset of the model with residual connections. We demonstrate the effectiveness of this method through experiments comparing versions of GPT-2 with our loop models, showing improved performance on language modeling tasks while maintaining similar parameter counts. Importantly, these improvements are achieved without the need for extra training data.
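The core idea in the abstract, reapplying a shared sub-network several times with residual connections so that extra compute does not add parameters, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block here is a two-layer MLP rather than a transformer sub-block, and the loop count `n_loops` is an assumed hyperparameter.

```python
import numpy as np

def mlp_block(x, W1, b1, W2, b2):
    # A stand-in for the shared sub-network (here a 2-layer ReLU MLP).
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

def loop_forward(x, params, n_loops=4):
    # Reuse the SAME parameters n_loops times, refining the representation
    # via residual connections: more compute, no additional parameters.
    for _ in range(n_loops):
        x = x + mlp_block(x, *params)
    return x

rng = np.random.default_rng(0)
d, hidden = 8, 16
params = (rng.standard_normal((d, hidden)) * 0.1,   # W1
          np.zeros(hidden),                          # b1
          rng.standard_normal((hidden, d)) * 0.1,    # W2
          np.zeros(d))                               # b2

x = rng.standard_normal((2, d))
out = loop_forward(x, params, n_loops=4)
print(out.shape)  # (2, 8)
```

Note that increasing `n_loops` lengthens the forward pass but leaves the parameter count of `params` unchanged, which is the trade-off the paper exploits.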

Benchmark Results

| Dataset     | Model          | Metric    | Claimed | Verified | Status     |
|-------------|----------------|-----------|---------|----------|------------|
| OpenWebText | GPT2-81M-LOOP  | eval_loss | 3.11    |          | Unverified |
