
Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings

2021-10-30

Matthew S. Zhang, Murat A. Erdogdu, Animesh Garg


Abstract

Policy gradient methods have been frequently applied to problems in control and reinforcement learning with great success, yet existing convergence analyses still rely on non-intuitive, impractical, and often opaque conditions. In particular, existing rates are achieved only in limited settings, under strict regularity conditions. In this work, we establish explicit convergence rates of policy gradient methods, extending the convergence regime to weakly smooth policy classes with L_2-integrable gradients. We provide intuitive examples to illustrate the insight behind these new conditions. Notably, our analysis also shows that convergence rates are achievable for both the standard policy gradient and the natural policy gradient algorithms under these assumptions. Lastly, we provide performance guarantees for the converged policies.
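To make the object of study concrete: the standard policy gradient method referenced in the abstract updates policy parameters along a stochastic estimate of the gradient of expected return. Below is a minimal, hedged sketch of the score-function (REINFORCE-style) update on a toy two-armed bandit with a softmax policy. This is illustrative only and is not the paper's algorithm or analysis; the reward values, learning rate, and iteration count are arbitrary choices for the example.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
theta = np.zeros(2)              # policy parameters (logits), one per arm
rewards = np.array([1.0, 0.0])   # deterministic rewards: arm 0 is better
lr = 0.1                         # step size (arbitrary for this sketch)

for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)   # sample an action from the current policy
    r = rewards[a]
    # Score-function gradient for a softmax policy:
    # grad_theta log pi(a) = e_a - probs, where e_a is the one-hot action vector
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    # Vanilla policy gradient ascent step on expected reward
    theta += lr * r * grad_log_pi

final_probs = softmax(theta)
print(final_probs)  # probability mass concentrates on the better arm
```

The natural policy gradient variant, also analyzed in the paper, would precondition this update with the inverse Fisher information of the policy; the weakly smooth conditions of the paper relax the regularity assumed of log pi in such analyses.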
