
Policy Optimization for H_2 Linear Control with H_∞ Robustness Guarantee: Implicit Regularization and Global Convergence

2020-06-08 · L4DC 2020

Kaiqing Zhang, Bin Hu, Tamer Başar



Abstract

Policy optimization (PO) is a key ingredient of modern reinforcement learning (RL). For control design, certain constraints are usually enforced on the policies being optimized, accounting for stability, robustness, or safety concerns about the system. Hence, PO is in most cases by nature a constrained (nonconvex) optimization problem, whose global convergence is challenging to analyze in general. More importantly, some safety-critical constraints, e.g., closed-loop stability or the H_∞-norm constraint that guarantees the robustness of the system, can be difficult to enforce on the controller being learned as the PO methods proceed. In this paper, we study the convergence theory of PO for H_2 linear control with an H_∞ robustness guarantee. This general framework includes risk-sensitive linear control as a special case. One significant new feature of this problem, in contrast to standard H_2 linear control, namely the linear quadratic regulator (LQR) problem, is the lack of coercivity of the cost function. This makes it challenging to guarantee the feasibility, namely the H_∞ robustness, of the iterates. Interestingly, we propose two PO algorithms that enjoy an implicit regularization property, i.e., the iterates preserve the H_∞ robustness as if they were explicitly regularized. Furthermore, convergence to the globally optimal policies at a globally sublinear and locally (super-)linear rate is established under certain conditions, despite the nonconvexity of the problem. To the best of our knowledge, our work offers the first results on the implicit regularization property and global convergence of PO methods for robust/risk-sensitive control.
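To make the objects in the abstract concrete, below is a minimal Python sketch of plain policy gradient on a discrete-time LQR (H_2) cost, together with a frequency-grid estimate of the closed-loop H_∞ norm used to monitor robustness along the iterates. This is not the paper's algorithm (the abstract does not specify the two proposed updates); the system matrices, the disturbance/output channels D and C, the level gamma, and the step size are all illustrative assumptions. The cost/gradient formulas are the standard LQR policy-gradient expressions (see, e.g., Fazel et al., 2018).

```python
# Minimal sketch, assuming a standard discrete-time LQR setup with an
# additive disturbance channel; NOT the paper's algorithm.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lqr_cost_and_grad(A, B, Q, R, K, W):
    """H_2 (LQR) cost J(K) = tr(P_K W) and its policy gradient for stabilizing K.

    Standard LQR policy-gradient formula:
        grad J(K) = 2 ((R + B^T P_K B) K - B^T P_K A) Sigma_K,
    where P_K and Sigma_K solve discrete Lyapunov equations.
    """
    Acl = A - B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)   # value matrix
    Sigma = solve_discrete_lyapunov(Acl, W)               # state covariance
    grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
    return np.trace(P @ W), grad

def hinf_norm_estimate(Acl, D, C, n_freq=400):
    """Frequency-grid estimate of ||C (e^{jw} I - Acl)^{-1} D||_inf."""
    n = Acl.shape[0]
    worst = 0.0
    for w in np.linspace(0.0, np.pi, n_freq):
        T = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - Acl, D)
        worst = max(worst, np.linalg.svd(T, compute_uv=False)[0])
    return worst

# Illustrative 2-state system (assumed, not from the paper).
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.2]])
D = 0.1 * np.eye(2)            # disturbance input channel (assumption)
C = np.eye(2)                  # performance output z = C x (assumption)
Q, R, W = np.eye(2), np.eye(1), np.eye(2)
gamma = 5.0                    # H_inf level to be maintained (assumption)

K = np.array([[0.5, 1.0]])     # initial stabilizing gain
assert np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0

eta = 1e-3                     # step size (illustrative)
for it in range(201):
    J, g = lqr_cost_and_grad(A, B, Q, R, K, W)
    K_next = K - eta * g       # plain gradient step
    if np.max(np.abs(np.linalg.eigvals(A - B @ K_next))) >= 1.0:
        eta *= 0.5             # backtrack rather than lose closed-loop stability
        continue
    K = K_next
    if it % 50 == 0:
        hinf = hinf_norm_estimate(A - B @ K, D, C)
        print(f"iter {it:3d}  J = {J:7.3f}  ||T_zw||_inf ~ {hinf:5.2f}  (target < {gamma})")
```

Note that the explicit stability backtracking and the H_∞ check above are only for monitoring: the point of the paper's implicit regularization result is that its proposed PO updates preserve the H_∞ constraint along the iterates without any such explicit projection or check.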
