SOTAVerified

Lagrangian-based online safe reinforcement learning for state-constrained systems

2023-05-22

Soutrik Bandyopadhyay, Shubhendu Bhasin


Abstract

This paper proposes a safe reinforcement learning (RL) algorithm that approximately solves the state-constrained optimal control problem for continuous-time uncertain nonlinear systems. We formulate the safe RL problem as the minimization of a Lagrangian that includes the cost functional and a user-defined barrier Lyapunov function (BLF) encoding the state constraints. We show that the analytical solution obtained by applying the Karush-Kuhn-Tucker (KKT) conditions contains a state-dependent expression for the Lagrange multiplier, which is a function of uncertain terms in the system dynamics. We argue that naive estimation of the Lagrange multiplier may lead to safety constraint violations. To address this challenge, we propose an Actor-Critic-Identifier-Lagrangian (ACIL) algorithm that learns optimal control policies from online data without compromising safety. We provide safety and boundedness guarantees for the proposed algorithm and compare its performance with existing offline/online RL methods via a simulation study.
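To make the abstract's core construction concrete, the following is a minimal sketch of a BLF-augmented Lagrangian running cost. It assumes a scalar state with a box constraint |x| < b, a standard log-type barrier Lyapunov function, and a quadratic running cost; the function names, the specific BLF form, and the weights `Q`, `R`, `b` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def barrier_lyapunov(x, b):
    """Log-type barrier Lyapunov function for the constraint |x| < b.

    Equals zero at x = 0 and grows without bound as |x| approaches b,
    penalizing states near the constraint boundary. (Illustrative choice;
    the paper allows a user-defined BLF.)
    """
    assert np.all(np.abs(x) < b), "state outside the constraint set"
    return np.log(b**2 / (b**2 - x**2))

def lagrangian_cost(x, u, lam, Q=1.0, R=1.0, b=2.0):
    """Running cost Q*x^2 + R*u^2 augmented with a Lagrange-multiplier-
    weighted BLF term, mirroring the Lagrangian described in the abstract.
    All parameter names and default values here are hypothetical."""
    return Q * x**2 + R * u**2 + lam * barrier_lyapunov(x, b)
```

As the abstract notes, the multiplier `lam` is state-dependent in the analytical KKT solution and depends on uncertain dynamics, which is why the ACIL algorithm estimates it online rather than plugging in a naive guess.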
