
Marvel: Accelerating Safe Online Reinforcement Learning with Finetuned Offline Policy

2024-12-05 · Code Available

Keru Chen, Honghao Wei, Zhigang Deng, Sen Lin


Abstract

The high costs and risks of extensive environment interactions hinder the practical application of current online safe reinforcement learning (RL) methods. While offline safe RL addresses this by learning policies from static datasets, its performance is typically limited by dataset quality and challenges with out-of-distribution (OOD) actions. Inspired by recent successes in offline-to-online (O2O) RL, it is crucial to explore whether offline safe RL can be leveraged to facilitate faster and safer online policy learning, a direction that has yet to be fully investigated. To fill this gap, we first demonstrate that naively applying existing O2O algorithms from standard RL would not work well in the safe RL setting due to two unique challenges: erroneous Q-estimations, resulting from the offline-online objective mismatch and offline cost sparsity, and Lagrangian mismatch, resulting from difficulties in aligning Lagrange multipliers between offline and online policies. To address these challenges, we introduce Marvel, a novel framework for O2O safe RL comprising two key components that work in concert: Value Pre-Alignment, which aligns the Q-functions with the underlying truth before online learning, and Adaptive PID Control, which effectively adjusts the Lagrange multipliers during online finetuning. Extensive experiments demonstrate that Marvel significantly outperforms existing baselines in both reward maximization and safety constraint satisfaction. By introducing the first policy-finetuning-based framework for O2O safe RL, compatible with many offline and online safe RL methods, our work has great potential to advance the field towards more efficient and practical safe RL solutions.
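The abstract's Adaptive PID Control component adjusts Lagrange multipliers during online finetuning. The paper's exact formulation is not given here, but a standard PID-based Lagrange multiplier update (on which such methods typically build) can be sketched as follows; all class names, gain values, and the cost limit below are illustrative assumptions, not the paper's implementation:

```python
class PIDLagrangeController:
    """Illustrative PID controller that adjusts a Lagrange multiplier
    so that observed episodic cost tracks a cost budget.
    Gains and cost_limit are placeholder values, not from the paper."""

    def __init__(self, kp=0.05, ki=0.01, kd=0.01, cost_limit=25.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.cost_limit = cost_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, episode_cost):
        # Error is positive when the policy violates the cost budget.
        error = episode_cost - self.cost_limit
        # Integrate with a floor at zero (simple anti-windup).
        self.integral = max(0.0, self.integral + error)
        derivative = error - self.prev_error
        self.prev_error = error
        # The multiplier must stay non-negative for a valid Lagrangian.
        lam = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, lam)
```

The proportional term reacts to the current constraint violation, the integral term accumulates persistent violations, and the derivative term damps oscillations in the multiplier, which is a known failure mode of purely integral (gradient-ascent) multiplier updates.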
