
Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning

2024-05-26

Shangding Gu, Bilgehan Sel, Yuhao Ding, Lu Wang, Qingwei Lin, Alois Knoll, Ming Jin


Abstract

In numerous reinforcement learning (RL) problems involving safety-critical systems, a key challenge lies in balancing multiple objectives while simultaneously meeting all stringent safety constraints. To tackle this issue, we propose a primal-based framework that orchestrates policy optimization between multi-objective learning and constraint adherence. Our method employs a novel natural policy gradient manipulation technique to optimize multiple RL objectives and overcome conflicting gradients between tasks: because the gradients of different task objectives can be misaligned, a simple weighted-average gradient direction may degrade performance on specific tasks. When a hard constraint is violated, our algorithm rectifies the policy to minimize the violation. We establish theoretical guarantees on convergence and constraint violation in the tabular setting. Empirically, the proposed method also outperforms prior state-of-the-art methods on challenging safe multi-objective reinforcement learning tasks.
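The loop the abstract describes can be illustrated with a small sketch. This is not the authors' algorithm: the abstract does not specify the form of the natural policy gradient manipulation, so the conflict-resolution step below substitutes a PCGrad-style projection as a stand-in, and all names (`resolve_conflicts`, `primal_step`, the toy objectives and constraint) are hypothetical.

```python
import numpy as np

def resolve_conflicts(grads):
    """Combine per-task gradients after PCGrad-style surgery: when two task
    gradients conflict (negative dot product), project one onto the normal
    plane of the other, so the averaged direction no longer works against
    either task. Stand-in for the paper's gradient manipulation."""
    adjusted = [g.astype(float) for g in grads]
    for i in range(len(adjusted)):
        for j, g_j in enumerate(grads):
            if j == i:
                continue
            dot = adjusted[i] @ g_j
            if dot < 0.0:  # gradients point in opposing directions
                adjusted[i] -= dot / (g_j @ g_j + 1e-12) * g_j
    return np.mean(adjusted, axis=0)

def primal_step(theta, objective_grads, violation, violation_grad, lr=0.05):
    """One primal update: rectify the policy whenever a hard constraint is
    violated; otherwise ascend the conflict-resolved multi-objective direction."""
    if violation(theta) > 0.0:
        return theta - lr * violation_grad(theta)  # minimize the violation
    return theta + lr * resolve_conflicts(objective_grads(theta))

# Toy run: two conflicting objective gradients and a norm-ball hard constraint.
objective_grads = lambda th: [np.array([1.0, 0.2]), np.array([-0.8, 1.0])]
violation = lambda th: np.linalg.norm(th) - 1.0         # > 0 means violated
violation_grad = lambda th: th / (np.linalg.norm(th) + 1e-12)

theta = np.array([0.3, 0.3])
for _ in range(200):
    theta = primal_step(theta, objective_grads, violation, violation_grad)
print(theta, violation(theta))  # the iterate settles near the constraint boundary
```

In the toy run, the two gradients have a negative dot product, so plain averaging would partly cancel them; after the projection, the combined direction improves both objectives, and the rectification branch keeps the iterate from drifting outside the feasible norm ball.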
