
ResT: Reshaping Token-Level Policy Gradients for Tool-Use Large Language Models

2026-02-04

Zihan Lin, Xiaohan Wang, Jie Cao, Jiajun Chai, Guojun Yin, Wei Lin, Ran He


Abstract

Large language models (LLMs) transcend passive generation and act as goal-directed agents by invoking external tools. Reinforcement learning (RL) offers a principled framework for optimizing these emergent tool-use policies, yet the prevailing paradigm relies exclusively on sparse outcome rewards and overlooks the particular structure of tool-use tasks, inflating policy-gradient variance and resulting in inefficient training. To better understand and address these challenges, we first establish a theoretical link between policy entropy and training stability in tool-use tasks, which reveals that structured, low-entropy tokens are the primary determinants of rewards. Motivated by this insight, we propose Reshaped Token-level policy gradients (ResT) for tool-use tasks. ResT reshapes the policy gradient through entropy-informed token reweighting, progressively upweighting reasoning tokens as training proceeds. This entropy-aware scheme enables a smooth shift from structural correctness to semantic reasoning and stabilizes convergence in multi-turn tool-use tasks. Evaluation on BFCL and API-Bank shows that ResT achieves state-of-the-art results, outperforming prior methods by up to 8.76%. When fine-tuned on a 4B base LLM, ResT further surpasses GPT-4o by 4.11% on single-turn tasks and 1.50% on multi-turn base tasks. Code is available at https://github.com/1229095296/ResT_Tool_use_LLM.git.
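To make the idea of entropy-informed token reweighting more concrete, here is a minimal sketch of how such a token-level loss could look. This is an illustrative reconstruction under stated assumptions, not the authors' exact formulation: the function name, the linear reweighting schedule, and the normalization choices are all hypothetical.

```python
import torch
import torch.nn.functional as F

def entropy_reweighted_pg_loss(logits, actions, reward, step, total_steps):
    """Sketch of an entropy-informed, token-reweighted policy-gradient loss.

    logits:  (T, V) per-token logits from the policy LLM
    actions: (T,)   sampled token ids
    reward:  scalar outcome reward for the whole trajectory
    step / total_steps: training progress, used to schedule the reweighting
    """
    log_probs = F.log_softmax(logits, dim=-1)                              # (T, V)
    token_logp = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)   # (T,)

    # Per-token policy entropy, normalized to [0, 1] by log(vocab size).
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(-1)
    entropy = entropy / torch.log(torch.tensor(float(logits.size(-1))))

    # Hypothetical schedule: early in training, emphasize low-entropy
    # (structured, e.g. tool-call syntax) tokens; as training proceeds,
    # progressively upweight high-entropy (reasoning) tokens.
    alpha = step / max(total_steps, 1)                                     # 0 -> 1
    weights = (1 - alpha) * (1 - entropy) + alpha * entropy
    weights = weights / (weights.mean() + 1e-8)                            # keep gradient scale stable

    # Token-level REINFORCE-style objective driven by the sparse outcome reward.
    loss = -(weights.detach() * token_logp * reward).mean()
    return loss

# Toy usage with random tensors (shapes only; not real model outputs).
T, V = 16, 32000
logits = torch.randn(T, V, requires_grad=True)
actions = torch.randint(0, V, (T,))
loss = entropy_reweighted_pg_loss(logits, actions, reward=1.0, step=100, total_steps=1000)
loss.backward()
```

The key design choice this sketch tries to capture is the smooth shift described in the abstract: the weighting starts from tokens whose structural correctness the policy is already confident about (low entropy) and gradually moves emphasis toward the higher-entropy reasoning tokens.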
