Regret Bounds for Reinforcement Learning via Markov Chain Concentration

2018-08-06

Ronald Ortner

Abstract

We give a simple optimistic algorithm for which it is easy to derive regret bounds of Õ(√(t_mix · S · A · T)) after T steps in uniformly ergodic Markov decision processes with S states, A actions, and mixing time parameter t_mix. These are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters. They could only be improved by using an alternative mixing time parameter.
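A quick way to read the bound is that it is sublinear in T, so the per-step regret vanishes as the horizon grows. The sketch below evaluates the order of the bound √(t_mix · S · A · T) (ignoring constants and logarithmic factors) for hypothetical problem sizes; the parameter values are illustrative only and not taken from the paper.

```python
import math

def regret_bound(t_mix: float, S: int, A: int, T: int) -> float:
    # Order of the stated bound O~(sqrt(t_mix * S * A * T)),
    # with constants and log factors dropped.
    return math.sqrt(t_mix * S * A * T)

# Hypothetical MDP sizes, chosen only for illustration.
t_mix, S, A = 10, 20, 5

# Per-step regret bound shrinks as T grows: the bound is sublinear in T.
for T in (10**3, 10**6, 10**9):
    print(f"T={T:>10}  regret/T = {regret_bound(t_mix, S, A, T) / T:.6f}")
```

Because the bound scales as √T, dividing by T gives a per-step regret of order 1/√T, which tends to zero.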
