
CacheFocus: Dynamic Cache Re-Positioning for Efficient Retrieval-Augmented Generation

2025-02-16

Kun-Hui Lee, Eunhwan Park, Donghoon Han, Seung-Hoon Na


Abstract

Large Language Models (LLMs) excel across a variety of language tasks yet are constrained by limited input lengths and high computational costs. Existing approaches such as relative positional encodings (e.g., RoPE, ALiBi) and sliding window mechanisms partially alleviate these issues but often require additional training or suffer from performance degradation on longer inputs. In this paper, we introduce CacheFocus, a method that enhances length normalization and reduces inference latency without any further training. Our approach leverages query-independent, offline caching to efficiently reuse a Context KV Cache Store. We address the amplification of abnormal token distributions by re-positioning cached keys and introducing Layer-Adaptive Cache Pruning to discard low-relevance caches during pre-filling. Additionally, our Adaptive Positional Allocation Strategy dynamically reassigns cache positions to maximize the use of the available positional encoding range. Experiments on the Natural Questions and TriviaQA datasets demonstrate that CacheFocus outperforms alternative methods even when inputs exceed the 4K limit of the LLaMA-2 model, underscoring its practical effectiveness for long-context LLMs. Moreover, even with the large maximum input length of Qwen2, CacheFocus maintains consistent performance as the number of documents increases, effectively managing long-text generation without degradation.
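
The sketch below illustrates, under stated assumptions, the core idea behind re-positioning cached keys together with pruning and adaptive positional allocation as summarized in the abstract: document KV caches are built offline without query-dependent positions, low-relevance caches are discarded, and the surviving keys are packed into contiguous rotary positions so the model's positional range is used fully. All function names, the relevance scores, and the single-head layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of cache re-positioning with RoPE; not the paper's code.
import torch


def rotary_angles(positions: torch.Tensor, head_dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles for the given integer positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # (seq, head_dim/2)
    return torch.cat([angles, angles], dim=-1)                 # (seq, head_dim)


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    half = x.shape[-1] // 2
    return torch.cat([-x[..., half:], x[..., :half]], dim=-1)


def apply_rope(keys: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Rotate position-free cached keys to their newly assigned positions."""
    angles = rotary_angles(positions, keys.shape[-1])
    return keys * angles.cos() + rotate_half(keys) * angles.sin()


def reposition_cached_keys(doc_key_caches, relevance, keep_top_k, max_positions):
    """Prune low-relevance document caches, then pack the survivors into
    contiguous positions so the available positional range is fully used."""
    order = torch.argsort(relevance, descending=True)[:keep_top_k]
    kept = [doc_key_caches[i] for i in sorted(order.tolist())]  # preserve document order
    packed = torch.cat(kept, dim=0)                             # (total_seq, head_dim)
    assert packed.shape[0] <= max_positions, "pruned caches must fit the positional range"
    new_positions = torch.arange(packed.shape[0])
    return apply_rope(packed, new_positions), new_positions


if __name__ == "__main__":
    # Three offline document caches of different lengths, single head, head_dim=64.
    caches = [torch.randn(n, 64) for n in (120, 80, 200)]
    scores = torch.tensor([0.9, 0.1, 0.7])                      # toy relevance scores
    keys, pos = reposition_cached_keys(caches, scores, keep_top_k=2, max_positions=4096)
    print(keys.shape, pos[-1].item())                           # torch.Size([320, 64]) 319
```

Because the cached keys carry no positional information until they are rotated at query time, the same offline caches can be reused across queries and re-packed into whatever contiguous position range remains after pruning.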
