Target Network and Truncation Overcome The Deadly Triad in Q-Learning

2022-03-05

Zaiwei Chen, John Paul Clarke, Siva Theja Maguluri

Abstract

Q-learning with function approximation is one of the most empirically successful yet theoretically mysterious reinforcement learning (RL) algorithms, and was identified in Sutton (1999) as one of the most important theoretical open problems in the RL community. Even in the basic linear function approximation setting, there are well-known divergent examples. In this work, we show that a target network and truncation together are enough to provably stabilize Q-learning with linear function approximation, and we establish finite-sample guarantees. The result implies an $\mathcal{O}(\epsilon^{-2})$ sample complexity up to a function approximation error. Moreover, our results do not require strong assumptions or modifying the problem parameters as in existing literature.
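
To make the abstract's recipe concrete, here is a minimal sketch of Q-learning with linear function approximation that combines a periodically synced target network with truncation. The toy random MDP, the random feature map, and all hyperparameters (step size, sync period, truncation radius) are illustrative assumptions, not the paper's exact construction; the truncation shown clips the target network's value estimate to a bounded range, one natural instantiation of the truncation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy MDP and linear features (assumptions, not from the paper)
n_states, n_actions, d = 20, 4, 8
gamma = 0.9                                  # discount factor
B = 1.0 / (1.0 - gamma)                      # truncation radius; rewards lie in [0, 1]
phi = rng.normal(size=(n_states, n_actions, d)) / np.sqrt(d)   # feature map
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel
R = rng.uniform(size=(n_states, n_actions))                    # reward table

theta = np.zeros(d)          # online weights
theta_target = np.zeros(d)   # target-network weights, synced periodically
alpha = 0.05                 # step size (illustrative)
sync_every = 200             # target-network update period (illustrative)

s = 0
for t in range(20_000):
    a = rng.integers(n_actions)                    # behavior policy: uniform exploration
    s_next = rng.choice(n_states, p=P[s, a])
    # Truncation: clip the target network's value estimate to [-B, B]
    q_next = phi[s_next] @ theta_target            # Q-estimates at s_next, shape (n_actions,)
    target = R[s, a] + gamma * np.clip(q_next.max(), -B, B)
    td_error = target - phi[s, a] @ theta
    theta += alpha * td_error * phi[s, a]          # semi-gradient Q-learning update
    if (t + 1) % sync_every == 0:
        theta_target = theta.copy()                # periodic target-network sync
    s = s_next

print("sample Q-values at state 0:", phi[0] @ theta)
```

Freezing the bootstrapping weights between syncs and bounding the target value are the two mechanisms the abstract credits with taming the deadly triad of function approximation, bootstrapping, and off-policy sampling.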
