
MemBoost: A Memory-Boosted Framework for Cost-Aware LLM Inference

2026-03-27

Joris Köster, Zixuan Liu, Siavash Khajavi, Zizhan Zheng


Abstract

Large Language Models (LLMs) deliver strong performance but incur high inference cost in real-world services, especially under workloads with repeated or near-duplicate queries across users and sessions. In this work, we propose MemBoost, a memory-boosted LLM serving framework that lets a lightweight model reuse previously generated answers and retrieve relevant supporting information, keeping inference cheap, while selectively escalating difficult or uncertain queries to a stronger model. Unlike standard retrieval-augmented generation, which primarily grounds a single response, MemBoost is designed for interactive settings: it supports answer reuse, continual memory growth, and cost-aware routing. Experiments across multiple models under simulated workloads show that MemBoost substantially reduces expensive large-model invocations and overall inference cost, while maintaining answer quality comparable to the strong-model baseline.
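The routing policy described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class name `MemBoostRouter`, the similarity measure, and both thresholds are assumptions chosen for clarity. It shows the three decision branches the abstract names: answer reuse for near-duplicate queries, cheap small-model inference when confidence is high, and escalation to the strong model otherwise, with the memory growing as new answers arrive.

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher


@dataclass
class MemBoostRouter:
    """Hypothetical sketch of MemBoost-style cost-aware routing.

    Assumptions (not from the paper): string-similarity matching via
    difflib, a dict as the answer memory, and fixed thresholds.
    """
    reuse_threshold: float = 0.9        # similarity needed to reuse a cached answer
    confidence_threshold: float = 0.7   # small-model confidence needed to avoid escalation
    memory: dict = field(default_factory=dict)  # past query -> stored answer

    def _most_similar(self, query):
        """Return the stored query most similar to `query` and its score."""
        best, best_sim = None, 0.0
        for past in self.memory:
            sim = SequenceMatcher(None, query, past).ratio()
            if sim > best_sim:
                best, best_sim = past, sim
        return best, best_sim

    def route(self, query, small_model, large_model):
        """Answer `query`, returning (answer, source) where source is
        'reuse', 'small', or 'large'."""
        # 1) Answer reuse: a near-duplicate query was answered before.
        past, sim = self._most_similar(query)
        if past is not None and sim >= self.reuse_threshold:
            return self.memory[past], "reuse"
        # 2) Cheap inference: trust the lightweight model when confident.
        answer, confidence = small_model(query)
        if confidence >= self.confidence_threshold:
            self.memory[query] = answer   # continual memory growth
            return answer, "small"
        # 3) Escalation: route uncertain queries to the strong model.
        answer = large_model(query)
        self.memory[query] = answer       # strong answers also feed the memory
        return answer, "large"
```

In use, a first query goes to the small model, a near-duplicate of it is served from memory, and a low-confidence query escalates, which is exactly the cost profile the abstract claims: large-model calls only where needed.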