ShaRP: SHAllow-LayeR Pruning for Efficient Video Large Language Models
Yingjie Xia, Tao Liu, Jinglei Shi, Qingsong Xie, Heng Guo, Jian Yang, Xi Wang
Abstract
Video Large Language Models (VLLMs) incur substantial prefilling cost due to the large number of visual tokens. While attention-based token pruning offers a promising acceleration strategy, applying it at shallow decoder layers often causes severe performance degradation under high compression ratios, limiting its practical benefits. In this work, we uncover an overlooked failure mode in shallow-layer attention pruning: attention scores in early decoder layers can become unreliable indicators of token utility, resulting in unstable token selection under aggressive compression. We show that this effect arises from the joint influence of insufficient token interaction, content-agnostic positional bias, and redundancy among high-attention tokens, which together distort attention-based importance estimation before informative representations fully emerge. Motivated by this insight, we propose ShaRP, a unified pruning framework that restores reliable attention-based token selection by jointly improving local information aggregation, calibrating positional bias, and reducing redundancy. Extensive evaluations show that ShaRP preserves about 97.2% of the original performance while reducing TFLOPs by 86% and achieving a 5.1x speedup in the prefilling stage, providing a scalable solution for efficient training-free VLLM inference.
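The abstract names three corrections that restore reliable shallow-layer token selection: aggregating information locally before scoring, subtracting a content-agnostic positional bias, and suppressing redundancy among high-attention tokens. The sketch below is a minimal, hypothetical illustration of that pipeline, not the authors' implementation: the query-averaging step, the moving-average positional baseline, and the cosine-similarity deduplication threshold are all assumed stand-ins for whatever ShaRP actually uses.

```python
import numpy as np

def prune_tokens(attn, feats, keep=64, sim_thresh=0.9):
    """Illustrative attention-based visual-token pruning (NOT the paper's code).

    attn:  (Q, N) attention scores from Q text queries to N visual tokens
    feats: (N, D) visual-token features
    Returns the sorted indices of the retained visual tokens.
    """
    # 1) Aggregate attention across queries to stabilise noisy
    #    shallow-layer scores (assumed form of "local aggregation").
    score = attn.mean(axis=0)                                   # (N,)
    # 2) Calibrate content-agnostic positional bias by subtracting a
    #    smooth position-only baseline (here: a moving average).
    k = 9
    baseline = np.convolve(score, np.ones(k) / k, mode="same")  # (N,)
    calibrated = score - baseline
    # 3) Greedy top-score selection that skips tokens too similar
    #    (by cosine similarity) to already-kept ones, reducing redundancy.
    order = np.argsort(-calibrated)
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    kept = []
    for i in order:
        if len(kept) == keep:
            break
        if kept and (normed[kept] @ normed[i]).max() > sim_thresh:
            continue  # redundant with a token already retained
        kept.append(int(i))
    return sorted(kept)

# Toy usage: 8 text queries attending over 256 visual tokens.
rng = np.random.default_rng(0)
attn = rng.random((8, 256))
feats = rng.standard_normal((256, 32))
idx = prune_tokens(attn, feats, keep=64)
print(len(idx))
```

Pruning at this ratio (256 tokens down to 64) mirrors the aggressive-compression regime the abstract targets; the point of the sketch is only the order of operations, since naive top-k on raw shallow-layer attention would skip steps 2 and 3 entirely.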