
Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp

2024-06-16

Longhao Chen, Yina Zhao, Qiangjun Xie, Qinghua Sheng



Abstract

This article improves the inference performance of the Qwen-1.8B model through Int8 quantization, vectorization of some operators in llama.cpp, and modification of the compilation script to raise the compiler optimization level. On the Yitian 710 experimental platform, prefill performance improves by 1.6x, decoding performance improves by 24x, memory usage drops to 1/5 of the original, and the accuracy loss is almost negligible.
