
Leveraging Speculative Sampling and KV-Cache Optimizations Together for Generative AI using OpenVINO

2023-11-08 · Code Available

Haim Barad, Ekaterina Aidova, Yury Gorbachev


Abstract

Inference optimizations are critical for improving user experience and for reducing infrastructure costs and power consumption. In this article, we illustrate a form of dynamic execution known as speculative sampling, which reduces the overall latency of text generation, and compare it with standard autoregressive sampling. Speculative sampling can be combined with model-based optimizations such as quantization to produce a fully optimized solution. Both sampling methods make use of KV caching. A Jupyter notebook and some sample executions are provided.
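The core idea behind speculative sampling mentioned in the abstract — a cheap draft model proposes several tokens and the larger target model verifies them, so multiple tokens can be accepted per expensive forward pass — can be sketched roughly as follows. This is a simplified greedy accept-if-equal variant with toy stand-in models; `draft_next`, `target_next`, and the acceptance rule are illustrative assumptions, not OpenVINO APIs (the real method uses a probabilistic acceptance test on the two models' distributions):

```python
def draft_next(seq):
    # Toy stand-in for a small, fast draft model's next-token prediction.
    return (seq[-1] + 1) % 10

def target_next(seq):
    # Toy stand-in for the large target model; here it happens to agree
    # with the draft, so every drafted token is accepted.
    return (seq[-1] + 1) % 10

def speculative_generate(prompt, n_tokens, k=4):
    """Generate n_tokens after prompt, drafting k tokens per iteration
    and keeping only the prefix the target model agrees with."""
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # 1) Draft k candidate tokens autoregressively with the cheap model.
        draft = []
        for _ in range(k):
            draft.append(draft_next(seq + draft))
        # 2) Verify against the target model; accept while it agrees,
        #    and on the first mismatch emit the target's own token.
        accepted = []
        for tok in draft:
            if target_next(seq + accepted) == tok:
                accepted.append(tok)
            else:
                accepted.append(target_next(seq + accepted))
                break
        seq.extend(accepted)
    return seq[len(prompt):][:n_tokens]
```

When the draft model agrees often, each verification step accepts several tokens at once, which is where the latency win over token-by-token autoregressive decoding comes from; in the worst case (constant disagreement) it degrades to roughly one target-model token per step. In a real implementation both models keep KV caches so verification is a single batched forward pass rather than a loop.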
