Global Convergence of Direct Policy Search for State-Feedback H∞ Robust Control: A Revisit of Nonsmooth Synthesis with Goldstein Subdifferential
Xingang Guo, Bin Hu
Abstract
Direct policy search has been widely applied in modern reinforcement learning and continuous control. However, the theoretical properties of direct policy search for nonsmooth robust control synthesis are not yet fully understood. The optimal H∞ control framework aims at designing a policy that minimizes the closed-loop H∞ norm, and is arguably the most fundamental robust control paradigm. In this work, we show that direct policy search is guaranteed to find the global solution of the robust H∞ state-feedback control design problem. Notice that policy search for optimal H∞ control leads to a constrained nonconvex nonsmooth optimization problem, where the nonconvex feasible set consists of all the policies stabilizing the closed-loop dynamics. We show that for this nonsmooth optimization problem, all Clarke stationary points are global minima. Next, we identify the coerciveness of the closed-loop H∞ objective function, and prove that all the sublevel sets of the resultant policy search problem are compact. Based on these properties, we show that Goldstein's subgradient method and its implementable variants are guaranteed to stay in the nonconvex feasible set and eventually find the globally optimal solution of the H∞ state-feedback synthesis problem. Our work builds a new connection between nonconvex nonsmooth optimization theory and robust control, leading to an interesting global convergence result for direct policy search on optimal H∞ synthesis.
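To illustrate the kind of method the abstract refers to, here is a minimal, hypothetical sketch of one implementable variant of Goldstein's subgradient method on a toy nonsmooth objective. The function names (`goldstein_step`, `f_grad`), the toy objective, and all parameter values are illustrative assumptions, not the paper's setup; in particular, the true method takes the minimal-norm element of the Goldstein δ-subdifferential (the convex hull of gradients over a δ-ball), which we crudely approximate here by averaging randomly sampled gradients rather than solving the small quadratic program a faithful implementation would use.

```python
import numpy as np

def goldstein_step(f_grad, x, delta=0.1, n_samples=20, rng=None):
    """One approximate Goldstein subgradient step (illustrative sketch).

    The Goldstein delta-subdifferential at x is the convex hull of
    gradients over the delta-ball around x. A faithful variant computes
    its minimal-norm element via a small QP; here we approximate it by
    averaging gradients sampled at random points inside the ball.
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = []
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        y = x + delta * rng.uniform() * u / np.linalg.norm(u)  # point in the delta-ball
        grads.append(f_grad(y))
    g = np.mean(grads, axis=0)  # crude surrogate for the min-norm element
    if np.linalg.norm(g) < 1e-8:
        return x  # approximately stationary in the Goldstein sense
    # Normalized step of length delta, as in Goldstein's scheme
    return x - delta * g / np.linalg.norm(g)

# Toy nonsmooth objective f(x) = |x1| + |x2|, standing in for the
# closed-loop H-infinity norm, which is nonsmooth in the policy.
f = lambda x: np.abs(x).sum()
f_grad = lambda x: np.sign(x)  # gradient wherever f is differentiable

rng = np.random.default_rng(0)
x = np.array([3.0, -2.0])
for _ in range(200):
    x = goldstein_step(f_grad, x, delta=0.1, rng=rng)
print(f(x))  # the iterate settles within roughly delta of the minimizer
```

The normalized step of length δ is what keeps the iterates making measurable progress despite nonsmoothness; near the minimizer the averaged gradients nearly cancel, signaling approximate (δ, ε)-stationarity.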