
Towards Efficient Automatic Self-Pruning of Large Language Models

2025-02-20

Weizhong Huang, Yuxin Zhang, Xiawu Zheng, Fei Chao, Rongrong Ji

Abstract

Despite exceptional capabilities, Large Language Models (LLMs) still face deployment challenges due to their enormous size. Post-training structured pruning is a promising solution that prunes LLMs without the need for retraining, reducing computational overhead while remaining hardware-deployment friendly. However, the training-free nature of post-training structured pruning leads to significant performance degradation. We argue that the key to mitigating this issue lies in accurately determining the pruning rate for each layer. Meanwhile, we find that LLMs may have prior knowledge about their own redundancy. Based on this insight, we introduce Self-Pruner, an end-to-end automatic self-pruning framework for LLMs that efficiently searches for layer-wise pruning rates. Specifically, Self-Pruner leverages an LLM to autonomously execute the entire evolutionary search over pruning rate configurations. In this process, the LLM generates populations, selects parent solutions from the current population, and performs crossover and mutation operations to produce offspring solutions. In this way, the LLM automatically generates and evaluates a large number of candidate solutions, effectively converging to strong pruning rate configurations with minimal human intervention. Extensive experiments demonstrate Self-Pruner's better performance compared to existing state-of-the-art methods. Notably, Self-Pruner prunes LLaMA-2-70B to the 49B level with only a 0.80% drop in accuracy across seven commonsense reasoning tasks, achieving a 1.39× speedup on an NVIDIA A100 80GB GPU. Further pruning to the 35B level results in only a 3.80% decrease in accuracy while obtaining a 1.70× speedup.
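The abstract describes an LLM-driven evolutionary search over layer-wise pruning rates. The sketch below illustrates how such a loop could be structured: the LLM proposes, crosses over, and mutates candidate pruning-rate vectors in natural language, while a separate evaluator (e.g., perplexity of the pruned model) supplies the fitness signal. This is a minimal sketch under assumptions, not the authors' implementation; the helpers `llm` and `evaluate` are hypothetical placeholders the caller must supply.

```python
# Sketch of LLM-guided evolutionary search for layer-wise pruning rates,
# in the spirit of the Self-Pruner description above. All names here
# (ask_llm_for_candidates, evaluate, llm) are illustrative assumptions.
import json
from typing import Callable, List

Rates = List[float]  # one pruning rate per transformer layer


def ask_llm_for_candidates(llm: Callable[[str], str], prompt: str) -> List[Rates]:
    """Parse a JSON list of pruning-rate vectors from the LLM's reply."""
    return json.loads(llm(prompt))


def evolutionary_search(
    llm: Callable[[str], str],           # wrapper around any chat/completion API
    evaluate: Callable[[Rates], float],  # e.g. perplexity of the pruned model (lower is better)
    num_layers: int,
    target_rate: float,                  # desired average pruning rate, e.g. 0.5
    pop_size: int = 20,
    generations: int = 30,
) -> Rates:
    # 1. Ask the LLM to propose an initial population of layer-wise rates.
    init_prompt = (
        f"Propose {pop_size} pruning-rate vectors for a {num_layers}-layer LLM. "
        f"Each vector has {num_layers} floats in [0, 1] averaging {target_rate}. "
        "Reply with a JSON list of lists only."
    )
    population = ask_llm_for_candidates(llm, init_prompt)

    for _ in range(generations):
        # 2. Evaluate every candidate with the external fitness signal.
        scored = sorted(population, key=evaluate)

        # 3. Keep the best half as parents and ask the LLM to perform
        #    crossover and mutation, described in natural language.
        parents = scored[: pop_size // 2]
        offspring_prompt = (
            "Here are parent pruning-rate vectors sorted best-first:\n"
            f"{json.dumps(parents)}\n"
            f"Generate {pop_size - len(parents)} offspring via crossover and small "
            f"mutations, keeping the average rate near {target_rate}. "
            "Reply with a JSON list of lists only."
        )
        population = parents + ask_llm_for_candidates(llm, offspring_prompt)

    # Return the best pruning-rate configuration found.
    return min(population, key=evaluate)
```

In this reading, the LLM replaces the hand-designed crossover and mutation operators of a classical evolutionary algorithm, which is what allows the search to run with minimal human intervention.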
