Fitted Q-iteration in continuous action-space MDPs

2007-12-01 · NeurIPS 2007

András Antos, Csaba Szepesvári, Rémi Munos


Abstract

We consider continuous-state, continuous-action batch reinforcement learning, where the goal is to learn a good policy from a sufficiently rich trajectory generated by another policy. We study a variant of fitted Q-iteration in which the greedy action selection is replaced by a search for a policy in a restricted set of candidate policies that maximizes the average of the action values. We provide a rigorous theoretical analysis of this algorithm, proving what we believe are the first finite-time bounds for value-function-based algorithms for continuous state- and action-space problems.
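The abstract's core algorithmic idea can be illustrated with a minimal sketch: instead of the pointwise greedy maximization over a continuous action space, each fitted Q-iteration step searches a restricted policy class for the policy that maximizes the average action value over the batch states. The toy MDP, the quadratic feature map, and the grid-searched linear policy class below are all illustrative assumptions, not the paper's actual setup or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch MDP: 1-D state and action, reward -(s + a)^2, next state s + a.
N = 500
S = rng.uniform(-1, 1, N)
A = rng.uniform(-1, 1, N)
R = -(S + A) ** 2
S2 = np.clip(S + A, -1, 1)
gamma = 0.9

def features(s, a):
    # Quadratic features for a linear-in-features Q-function (illustrative choice).
    return np.stack([np.ones_like(s), s, a, s * a, s ** 2, a ** 2], axis=-1)

# Restricted policy class: clipped linear policies a = clip(w0 + w1 * s),
# searched over a coarse parameter grid (a stand-in for any policy optimizer).
policy_grid = [(w0, w1) for w0 in np.linspace(-1, 1, 11)
                        for w1 in np.linspace(-2, 2, 21)]

def policy_action(w, s):
    return np.clip(w[0] + w[1] * s, -1, 1)

theta = np.zeros(6)  # Q-function weights
for _ in range(30):
    # Policy search step: pick the candidate policy maximizing the AVERAGE
    # action value over the batch states (this replaces the greedy max).
    best = max(policy_grid,
               key=lambda w: (features(S, policy_action(w, S)) @ theta).mean())
    # Fitted Q-iteration step: regress on the bootstrapped targets.
    a2 = policy_action(best, S2)
    y = R + gamma * (features(S2, a2) @ theta)
    theta, *_ = np.linalg.lstsq(features(S, A), y, rcond=None)

# Reward peaks at a = -s, so the learned policy should drive a toward -s.
print(round(float(policy_action(best, 0.5)), 2))  # -0.5
```

In this toy problem the optimal policy (w0 = 0, w1 = -1) lies inside the restricted class and the optimal Q-function is exactly representable by the features, so the iteration reaches its fixed point; the paper's contribution is bounding the error when neither of these idealizations holds.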