
Bias Correction in Deterministic Policy Gradient Using Robust MPC

2021-04-06

Arash Bahari Kordabad, Hossein Nejatbakhsh Esfahani, Sebastien Gros


Abstract

In this paper, we discuss the deterministic policy gradient in the Actor-Critic setting, based on the linear compatible advantage function approximator, for problems with continuous input spaces. When the policy is subject to hard constraints, the exploration may not be Centered or Isotropic (non-CI). As a result, the policy gradient estimation can be biased. We focus on constrained policies based on Model Predictive Control (MPC) schemes and, to address the bias issue, we propose an approximate Robust MPC (RMPC) approach that accounts for the exploration. The RMPC-based policy ensures that a Centered and Isotropic (CI) exploration is approximately feasible. A posterior projection is used to ensure its exact feasibility, and we formally prove that this projection does not bias the gradient estimation.
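To illustrate the bias the abstract refers to, the following is a minimal 1-D sketch (not the paper's MPC scheme; the quadratic action-value, the constraint, and all numbers are hypothetical). The compatible advantage approximator reduces here to A_w(s, a) = (a - pi(s)) * w, and w is fit by least squares. With centered, isotropic exploration noise the fit recovers the true gradient; when actions are clipped to a hard constraint, the effective exploration is non-centered and the estimate is biased:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: around the policy action pi, the true action-value is
# Q(a) = g*(a - pi) + c*(a - pi)**2, so the true gradient dQ/da at a = pi is g.
pi, g, c = 0.8, 1.0, -2.0
a_max = 1.0          # hard constraint: a <= a_max
sigma = 0.5          # exploration standard deviation
n = 200_000

def dpg_estimate(actions):
    """Least-squares fit of the compatible advantage A_w = (a - pi) * w.
    The fitted w estimates dQ/da at a = pi (the deterministic PG term)."""
    e = actions - pi
    q = g * e + c * e**2          # sampled action-values
    return (e @ q) / (e @ e)      # closed-form 1-D least squares

noise = rng.normal(0.0, sigma, n)

# Centered and Isotropic (CI) exploration: odd moments vanish, w is unbiased.
w_ci = dpg_estimate(pi + noise)

# Naively projecting the explored action onto the feasible set clips the
# noise's upper tail, so E[e] != 0 and E[e^3] != 0: the estimate is biased.
w_clip = dpg_estimate(np.clip(pi + noise, None, a_max))

print(f"true gradient    : {g}")
print(f"CI exploration   : {w_ci:.3f}")
print(f"clipped (non-CI) : {w_clip:.3f}")
```

The bias term works out to c * E[e^3] / E[e^2]: it vanishes for symmetric centered noise but not for the clipped distribution, which is why the paper instead shapes the MPC scheme itself so that CI exploration remains (approximately) feasible before any projection is applied.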
