
Optimizing LLM Inference for Database Systems: Cost-Aware Scheduling for Concurrent Requests

2024-11-12

Kyoungmin Kim, Kijae Hong, Caglar Gulcehre, Anastasia Ailamaki


Abstract

LLMs are increasingly used inside database systems and in database applications for better complexity management and decision-making, but LLM inference incurs significant GPU costs. LLM inference systems, however, are slow compared to database systems, limiting the broader adoption of LLMs inside database systems. This paper first analyzes LLM inference performance, focusing on a data management issue in LLM inference. We reveal that the root of the problem is the lack of an adequate resource cost model and optimization strategy when executing multiple concurrent inference requests. We adapt classic database multi-query optimization techniques, introducing cost models for concurrent inference requests and new scheduling strategies that optimize how concurrent requests use memory resources, thereby substantially improving performance.
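The abstract does not detail the authors' cost models or scheduling strategies. As a rough illustration of the general idea of cost-aware scheduling of concurrent inference requests under a GPU memory budget, here is a minimal sketch; the `Request` fields, the per-token KV-cache constant, and the greedy cheapest-first admission policy are all assumptions for illustration, not the paper's actual method:

```python
from dataclasses import dataclass

@dataclass
class Request:
    req_id: int
    prompt_len: int           # tokens in the prompt
    expected_output_len: int  # estimated decode length

# Hypothetical per-token KV-cache footprint in bytes; real values depend
# on model layers, hidden size, and precision (here: 2 tensors (K, V)
# x 32 layers x 4096 hidden dim x 2 bytes for fp16).
KV_BYTES_PER_TOKEN = 2 * 32 * 4096 * 2

def peak_kv_cost(r: Request) -> int:
    """Estimated peak KV-cache memory if the request runs to completion."""
    return (r.prompt_len + r.expected_output_len) * KV_BYTES_PER_TOKEN

def schedule(requests, memory_budget):
    """Greedy admission: admit cheapest requests first until the
    estimated peak memory would exceed the budget; the rest wait."""
    admitted, waiting = [], []
    used = 0
    for r in sorted(requests, key=peak_kv_cost):
        cost = peak_kv_cost(r)
        if used + cost <= memory_budget:
            admitted.append(r)
            used += cost
        else:
            waiting.append(r)
    return admitted, waiting
```

A scheduler like this trades off batch size (throughput) against the risk of exhausting GPU memory mid-decode; the paper's contribution is a principled cost model and strategy for making that trade-off across concurrent requests.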
