Periodic Asynchrony: An On-Policy Approach for Accelerating LLM Reinforcement Learning
Jian Lu
Abstract
Since the introduction of the GRPO algorithm, reinforcement learning (RL) has attracted increasing attention for LLM post-training, yet training efficiency remains a critical challenge. In mainstream RL frameworks, inference and training are co-located on the same devices and executed in strict alternation, so the hardware can never run both phases at once. In this work, we revisit the strategy of deploying inference and training separately and propose a periodically asynchronous framework that transforms synchronous RL training into an asynchronous producer-consumer pipeline. Unlike existing asynchronous approaches, which introduce off-policy bias, our design is provably equivalent to its synchronous counterpart, preserving strict on-policy correctness without any algorithmic modification. We further introduce a unified tri-model architecture and a shared-prompt attention mechanism to support efficient asynchronous execution and reduce redundant computation. Experiments on NPU platforms demonstrate a three- to five-fold improvement in end-to-end training throughput over mainstream RL frameworks while maintaining fully comparable accuracy, indicating the framework's potential for widespread adoption.
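To make the producer-consumer framing concrete, the sketch below shows one plausible reading of periodic asynchrony: within each period, all rollouts are generated by a single frozen policy version while the trainer concurrently accumulates gradients over them, and the optimizer step is applied only at the period boundary. Because every rollout is sampled from the current policy, the result matches synchronous on-policy training. This is an illustrative assumption, not the paper's actual implementation, and all names here (generate_rollout, accumulate_gradients, apply_optimizer_step) are hypothetical stand-ins.

```python
import queue
import threading

SENTINEL = object()  # marks the end of one generation period


def generate_rollout(version, prompt):
    # Hypothetical stand-in for the inference engine; every rollout in
    # a period is sampled from the same frozen policy version.
    return (prompt, f"sampled-with-v{version}")


def accumulate_gradients(rollout):
    # Hypothetical stand-in for loss computation + backward on one
    # rollout group (no weight update happens here).
    pass


def apply_optimizer_step(version):
    # Hypothetical stand-in for the weight update that closes a period.
    return version + 1


def producer(version, prompts, rollout_q):
    # Inference stage: streams rollouts into the hand-off buffer as
    # they complete, all generated by the same policy version.
    for p in prompts:
        rollout_q.put(generate_rollout(version, p))
    rollout_q.put(SENTINEL)


def train(num_periods, prompts):
    version = 0
    for _ in range(num_periods):
        rollout_q = queue.Queue(maxsize=4)  # small hand-off buffer
        t = threading.Thread(target=producer, args=(version, prompts, rollout_q))
        t.start()
        # Training overlaps with generation inside the period, but the
        # optimizer step waits for the period boundary, so every rollout
        # was produced by the current policy -- equivalent to the
        # synchronous schedule, with the two stages running concurrently.
        while (item := rollout_q.get()) is not SENTINEL:
            accumulate_gradients(item)
        t.join()
        version = apply_optimizer_step(version)
    return version


if __name__ == "__main__":
    final_version = train(num_periods=3, prompts=["p0", "p1", "p2", "p3"])
    print(f"completed {final_version} on-policy periods")
```

In a real deployment the producer and consumer would live on separate device pools rather than threads in one process; the queue here simply illustrates why overlapping the two stages within a period leaves the on-policy property intact.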