
RL-finetuning LLMs from on- and off-policy data with a single algorithm

2025-03-25

Yunhao Tang, Taco Cohen, David W. Zhang, Michal Valko, Rémi Munos


Abstract

We introduce a novel reinforcement learning algorithm (AGRO, for Any-Generation Reward Optimization) for fine-tuning large language models. AGRO leverages the concept of generation consistency, which states that the optimal policy satisfies a consistency condition across any possible generation of the model. We derive algorithms that find optimal solutions via sample-based policy gradients and provide theoretical guarantees on their convergence. Our experiments demonstrate the effectiveness of AGRO in both on-policy and off-policy settings, showing improved performance over baseline algorithms on a mathematical reasoning dataset.
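
The abstract does not spell out AGRO's update rule, so the sketch below is only a generic illustration of the setting it targets: a sample-based policy-gradient loss that accepts generations from either the current policy (on-policy) or an older behavior policy (off-policy), correcting the mismatch with importance weights. All names here (`importance_weighted_pg_loss`, the toy tensors) are hypothetical, and this is a standard REINFORCE-style surrogate, not the paper's actual objective.

```python
# Assumed sketch of a single loss handling on- and off-policy generations.
# NOT the AGRO objective; a generic importance-weighted policy gradient.
import torch


def importance_weighted_pg_loss(logp_current: torch.Tensor,
                                logp_behavior: torch.Tensor,
                                rewards: torch.Tensor) -> torch.Tensor:
    """logp_current / logp_behavior: (batch,) summed token log-probs of each
    sampled generation under the current policy and under whatever policy
    produced the sample. rewards: (batch,) scalar reward per generation."""
    # Importance weight pi_theta(y|x) / mu(y|x), detached so gradients flow
    # only through the current policy's log-probability term.
    iw = torch.exp(logp_current - logp_behavior).detach()
    # REINFORCE-style surrogate: maximize reward-weighted log-likelihood.
    return -(iw * rewards * logp_current).mean()


# Toy usage: on-policy data is the special case mu = pi_theta, so iw == 1.
lp_cur = torch.tensor([-5.2, -3.1], requires_grad=True)
lp_beh = torch.tensor([-5.2, -3.1])  # same policy -> on-policy
loss = importance_weighted_pg_loss(lp_cur, lp_beh, torch.tensor([1.0, 0.0]))
loss.backward()
```

With off-policy data the same loss applies unchanged; only `logp_behavior` differs from `logp_current`, which is what lets one update rule consume generations from any source.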
