DSPO: Stable and Efficient Policy Optimization for Agentic Search and Reasoning

2026-03-19

Chenyang Gu, Yewen Pu, Bruce Yang, Xiaofan Li, Huan Gao

Abstract

Enhancing LLMs with the ability to actively search external knowledge is crucial for complex, real-world tasks. Current approaches either rely on prompting to elicit the model's innate agent capabilities, or hit performance ceilings and training collapse when RL is applied to complex interactive tasks, leaving their true agentic potential untapped. To address this, we introduce Dynamic-filter Sequence-level Policy Optimization (DSPO), an improved RL algorithm designed for robust agent training through sequence-level optimization and dynamic sample filtering. We train our model purely through RL to interleave multi-turn search and reasoning, obviating the need for supervised demonstration data. Across multiple QA benchmarks, our 7B model improves over comparable prior work by 34.1%, and even outperforms the 14B model from prior work on complex multi-hop QA such as HotpotQA by nearly 9% relative, while maintaining exceptional training stability.
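
The abstract names two algorithmic ingredients, sequence-level optimization and dynamic sample filtering, without spelling out the objective. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a GRPO-style setup with group-normalized outcome rewards, a length-normalized sequence-level importance ratio, and filtering of rollout groups whose rewards are constant (and thus carry no advantage signal). The function name `dspo_loss_sketch` and all shapes are hypothetical.

```python
import torch

def dspo_loss_sketch(logp_new, logp_old, rewards, clip_eps=0.2):
    """Hedged sketch of a sequence-level policy-optimization step.

    Args:
        logp_new: (G, T) per-token log-probs under the current policy,
                  for G > 1 sampled rollouts of one prompt.
        logp_old: (G, T) per-token log-probs under the behavior policy.
        rewards:  (G,) scalar outcome rewards, one per rollout.
    Returns:
        Scalar loss, or None if the group is filtered out.
    """
    # Dynamic sample filtering (assumed form): drop rollout groups whose
    # rewards are all identical, since group-normalized advantages would
    # be zero and the group contributes no gradient.
    if rewards.std() == 0:
        return None

    # Group-normalized sequence advantages (GRPO-style; an assumption).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # Sequence-level importance ratio: aggregate the per-token log-ratio
    # over the whole response (length-normalized here to keep exp() tame)
    # instead of weighting each token separately.
    log_ratio = (logp_new - logp_old).mean(dim=-1)  # (G,)
    ratio = torch.exp(log_ratio)

    # Clipped surrogate applied once per sequence rather than per token.
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()
```

Length-normalizing the log-ratio keeps the sequence-level ratio numerically stable even for long multi-turn search trajectories; the actual DSPO objective may aggregate tokens or filter samples differently than this sketch.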
