
Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies

2022-10-04

Rui Yuan, Simon S. Du, Robert M. Gower, Alessandro Lazaric, Lin Xiao


Abstract

We consider infinite-horizon discounted Markov decision processes and study the convergence rates of the natural policy gradient (NPG) and the Q-NPG methods with the log-linear policy class. Using the compatible function approximation framework, both methods with log-linear policies can be written as inexact versions of the policy mirror descent (PMD) method. We show that both methods attain linear convergence rates and O(1/ε²) sample complexities using a simple, non-adaptive geometrically increasing step size, without resorting to entropy or other strongly convex regularization. Lastly, as a byproduct, we obtain sublinear convergence rates for both methods with an arbitrary constant step size.
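To make the method described in the abstract concrete, here is a minimal sketch of Q-NPG with a log-linear policy class on a small synthetic MDP. Everything below is illustrative, not the paper's setup: the random features, transition kernel, rewards, initial step size, and iteration counts are assumptions, exact policy evaluation replaces sampling, and the compatible-function-approximation regression is solved by unweighted least squares rather than under the state-action distribution the paper analyzes. The geometrically increasing step size eta_k ∝ γ^(−k) is the non-adaptive schedule the abstract refers to.

```python
import numpy as np

# Hypothetical small MDP: S states, A actions, d-dimensional features.
S, A, d, gamma = 5, 3, 4, 0.9
rng = np.random.default_rng(0)
phi = rng.normal(size=(S, A, d))            # feature map phi(s, a) (assumed)
P = rng.dirichlet(np.ones(S), size=(S, A))  # transition probabilities P(s'|s,a)
R = rng.uniform(size=(S, A))                # rewards r(s, a)

def policy(theta):
    """Log-linear policy: pi_theta(a|s) proportional to exp(theta^T phi(s,a))."""
    logits = phi @ theta                         # shape (S, A)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def q_values(pi, iters=500):
    """Exact policy evaluation: iterate Q <- R + gamma * P V, V(s) = sum_a pi(a|s) Q(s,a)."""
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (pi * Q).sum(axis=1)
        Q = R + gamma * (P @ V)
    return Q

theta = np.zeros(d)
eta = 0.1                                   # initial step size (illustrative choice)
for k in range(50):
    pi = policy(theta)
    Q = q_values(pi)
    # Compatible function approximation: fit w so that w^T phi(s,a) ~ Q^pi(s,a).
    # The paper weights this regression by a state-action distribution; a plain
    # least-squares fit is used here for simplicity.
    w, *_ = np.linalg.lstsq(phi.reshape(S * A, d), Q.ravel(), rcond=None)
    theta = theta + eta * w                 # Q-NPG update in parameter space
    eta /= gamma                            # geometrically increasing step size
```

Replacing the exact Q-values with regression on sampled returns turns this into the sample-based variant whose O(1/ε²) complexity the paper analyzes; keeping eta fixed instead of growing it corresponds to the constant step-size regime with sublinear rates mentioned at the end of the abstract.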
