Transformer tricks: Precomputing the first layer
2024-02-20
Nils Graef
- github.com/openmachine-ai/transformer-tricks
Abstract
This micro-paper describes a trick to speed up inference of transformers with RoPE (such as LLaMA, Mistral, PaLM, and Gemma). For these models, a large portion of the first transformer layer can be precomputed, which results in slightly lower latency and lower cost-per-token. Because this trick optimizes only one layer, the relative savings depend on the total number of layers: a model with only 4 layers (such as Whisper tiny) can save at most 25%, while a 32-layer model can save at most about 3%. See https://github.com/OpenMachine-ai/transformer-tricks for code and more transformer tricks.
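The idea behind the trick can be sketched as follows: in RoPE models no positional encoding is added to the input embeddings, so the input to the first layer's projections depends only on the token id and can be baked into per-token lookup tables offline. The sketch below is a minimal illustration with made-up dimensions, plain numpy, and RoPE plus per-head details omitted; the weight names (`W_q`, etc.) and the unscaled RMSNorm are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical toy dimensions for illustration (not from the paper).
vocab, d = 100, 16

rng = np.random.default_rng(0)
E = rng.standard_normal((vocab, d))   # token embedding table
W_q = rng.standard_normal((d, d))     # first-layer query projection (assumed name)
W_k = rng.standard_normal((d, d))     # first-layer key projection
W_v = rng.standard_normal((d, d))     # first-layer value projection

def rms_norm(x, eps=1e-6):
    # LLaMA-style RMSNorm; learned scale omitted for brevity.
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)

# Precompute: since the layer-1 input is just the embedding of the token
# (no positional term), the pre-RoPE Q/K/V depend only on the token id
# and can be computed once for the whole vocabulary.
Q_tab = rms_norm(E) @ W_q
K_tab = rms_norm(E) @ W_k
V_tab = rms_norm(E) @ W_v

# At inference, layer 1's projections become a table lookup per token:
token_id = 42
q = Q_tab[token_id]
assert np.allclose(q, rms_norm(E[token_id]) @ W_q)
```

In a real model, RoPE is then applied to the looked-up q and k as usual, since the rotation depends on position rather than on the token; only the position-independent matrix multiplies are moved offline.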