
InAttention: Linear Context Scaling for Transformers

2024-10-09

Joseph Eisner


Abstract

VRAM requirements for transformer models scale quadratically with context length due to the self-attention mechanism. In this paper we modify the decoder-only transformer, replacing self-attention with InAttention, which scales linearly with context length during inference by having tokens attend only to initial states. Benchmarks show that InAttention significantly reduces VRAM usage during inference, enabling long sequences to be processed on consumer GPUs. We further show that fine-tuning efficiently extends the context length, improving performance on long sequences without high training costs. InAttention offers a scalable solution for long-range dependencies in transformer models, paving the way for further optimization.
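The abstract states only that tokens attend to initial states rather than to each other's evolving hidden states, so keys and values can be computed once from the initial token states and reused at every layer. A minimal sketch of one causal attention step under that reading (the projection names `w_k`/`w_v` and the exact query source are assumptions, not details from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def in_attention(queries, init_states, w_k, w_v):
    """Hypothetical InAttention step: queries attend only to fixed initial states.

    queries:     (seq_len, d)  per-token query vectors (e.g. from hidden states)
    init_states: (seq_len, d)  initial token states, fixed for the whole pass
    w_k, w_v:    (d, d)        learned key/value projections (names assumed)

    Because keys/values derive only from init_states, they are computed once
    and the per-token memory footprint grows linearly with sequence length.
    """
    d = queries.shape[-1]
    k = init_states @ w_k  # (seq_len, d), never recomputed per layer
    v = init_states @ w_v
    scores = queries @ k.T / np.sqrt(d)
    # causal mask: token i may attend only to initial states at positions <= i
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores[mask] = -np.inf
    return softmax(scores) @ v
```

This is an illustrative reconstruction of the idea in the abstract, not the paper's implementation; the actual architecture may place InAttention differently within the decoder block.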
