
Finite-Time Error Bounds for Greedy-GQ

2022-09-06

Yue Wang, Yi Zhou, Shaofeng Zou


Abstract

Greedy-GQ with linear function approximation, originally proposed by Maei et al. (2010), is a value-based off-policy algorithm for optimal control in reinforcement learning; it has a non-linear two-timescale structure with a non-convex objective function. This paper develops its tightest finite-time error bounds. We show that the Greedy-GQ algorithm converges as fast as O(1/T) under the i.i.d. setting and O(log T/T) under the Markovian setting. We further design a variant of the vanilla Greedy-GQ algorithm using a nested-loop approach, and show that its sample complexity is O(log(1/ε)·ε^{-2}), which matches that of the vanilla Greedy-GQ. Our finite-time error bounds match those of stochastic gradient descent algorithms for general smooth non-convex optimization problems, despite the additional challenge of the two-timescale updates. Our finite-sample analysis provides theoretical guidance on choosing step sizes for faster convergence in practice, and suggests a trade-off between the convergence rate and the quality of the obtained policy. Our techniques provide a general approach for the finite-sample analysis of non-convex two-timescale value-based reinforcement learning algorithms.
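To make the two-timescale structure concrete, the following is a minimal sketch of Greedy-GQ with linear function approximation on a randomly generated toy MDP. It is an illustration of the update rules from Maei et al. (2010), not the paper's experimental setup: the MDP, features, and step sizes (alpha for the slow main iterate theta, beta > alpha for the fast auxiliary iterate w) are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: 4 states, 2 actions, random transitions/rewards/features.
n_states, n_actions, d = 4, 2, 3
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a]: dist over s'
R = rng.standard_normal((n_states, n_actions))                    # rewards r(s, a)
Phi = rng.standard_normal((n_states, n_actions, d))               # features phi(s, a)
gamma = 0.9

theta = np.zeros(d)        # main iterate (slow timescale)
w = np.zeros(d)            # auxiliary iterate (fast timescale)
alpha, beta = 0.01, 0.05   # two step sizes with alpha < beta (two-timescale structure)

s = 0
for t in range(5000):
    a = int(rng.integers(n_actions))            # off-policy: uniform behavior policy
    s_next = int(rng.choice(n_states, p=P[s, a]))
    a_next = int(np.argmax(Phi[s_next] @ theta))  # greedy target action at s'
    phi, phi_next = Phi[s, a], Phi[s_next, a_next]
    delta = R[s, a] + gamma * (phi_next @ theta) - phi @ theta    # TD error
    # Greedy-GQ two-timescale updates:
    theta = theta + alpha * (delta * phi - gamma * (w @ phi) * phi_next)
    w = w + beta * (delta - w @ phi) * phi
    s = s_next
```

The key point the paper analyzes is that theta follows a (biased) stochastic gradient of a non-convex objective while w tracks a linear-regression target on a faster timescale, which is why step-size choices govern the convergence rate.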
