
The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation

2024-11-06

Lawrence Stewart, Matthew Trager, Sujan Kumar Gonugondla, Stefano Soatto


Abstract

Speculative decoding aims to speed up autoregressive generation of a language model by verifying in parallel the tokens generated by a smaller draft model. In this work, we explore the effectiveness of learning-free, negligible-cost draft strategies, namely N-grams obtained from the model weights and the context. While the predicted next token of the base model is rarely the top prediction of these simple strategies, we observe that it is often within their top-k predictions for small k. Based on this, we show that combinations of simple strategies can achieve significant inference speedups across different tasks. The overall performance is comparable to more complex methods, yet does not require expensive preprocessing or modification of the base model, and allows for seamless "plug-and-play" integration into pipelines.
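Since the abstract leaves the mechanics implicit, here is a minimal sketch of what a context-based N-gram drafter could look like; it is an assumption-laden illustration, not the paper's implementation, and every name (`build_ngram_table`, `draft_continuation`, `max_draft`, `k`) is hypothetical.

```python
# Minimal sketch (not the authors' code): drafting speculative tokens from
# N-grams found in the context, one of the learning-free strategies the
# abstract describes. All names and parameters here are illustrative.
from collections import Counter, defaultdict

def build_ngram_table(tokens, n):
    """Map each (n-1)-token prefix seen in the context to a counter of
    the tokens that followed it."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        prefix = tuple(tokens[i : i + n - 1])
        table[prefix][tokens[i + n - 1]] += 1
    return table

def draft_continuation(tokens, table, n, max_draft=5, k=3):
    """Propose up to `max_draft` draft positions by repeatedly looking up
    the most recent (n-1)-gram. Keeping the top-k candidates per position
    reflects the abstract's observation: the base model's next token is
    rarely the top-1 of such a cheap predictor, but often in its top-k,
    so the candidates can be verified in one batched forward pass."""
    draft = []
    current = list(tokens)
    while len(draft) < max_draft:
        prefix = tuple(current[-(n - 1):])
        candidates = table.get(prefix)
        if not candidates:
            break  # no match in the context; stop drafting early
        top_k = [tok for tok, _ in candidates.most_common(k)]
        draft.append(top_k)       # top-k candidates for this position
        current.append(top_k[0])  # greedily extend with the top-1
    return draft

# Usage on a toy token sequence; the resulting draft would be handed to
# the base model for a single batched verification pass (not shown).
context = [3, 7, 7, 3, 7, 7, 3, 7]
table = build_ngram_table(context, n=3)
print(draft_continuation(context, table, n=3))
```

A second table built from the model's own weights, or several tables with different n, could be combined under the same interface, which is what makes the approach "plug-and-play": the base model itself is never modified.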
