
A Tractable Algorithm For Finite-Horizon Continuous Reinforcement Learning

2019-06-26

Phanideep Gampa, Sairam Satwik Kondamudi, Lakshmanan Kailasam


Abstract

We consider the finite-horizon continuous reinforcement learning problem. Our contribution is three-fold. First, we give a tractable algorithm based on optimistic value iteration for the problem. Next, we give a lower bound of order Ω(T^{2/3}) on the regret of any algorithm that discretizes the state space, improving the previous bound of Ω(T^{1/2}) of Ortner and Ryabko for the same problem. Next, under the assumption that the rewards and transitions are Hölder continuous with exponent α, we show that the discretization error is upper bounded by const·Ln^{-α}T. Finally, we give some simple experiments to validate our propositions.
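The abstract describes an algorithm that discretizes the continuous state space and runs optimistic value iteration over a finite horizon. The sketch below illustrates that general recipe only: it is not the paper's algorithm, and the function names, the bonus constant `c`, and the toy tabular model (`counts`, `rew_sum`, `trans_counts`) are all illustrative assumptions.

```python
import math

def discretize(x, n):
    """Map a continuous state x in [0, 1] to one of n intervals."""
    return min(int(x * n), n - 1)

def optimistic_value_iteration(counts, rew_sum, trans_counts, n, A, H, c=1.0):
    """Finite-horizon backward induction with an optimism bonus.

    counts[s][a]       : number of visits to (state, action)
    rew_sum[s][a]      : summed observed rewards for (state, action)
    trans_counts[s][a] : dict mapping next_state -> visit count
    Returns optimistic values V[h][s] and a greedy policy pi[h][s].
    """
    V = [[0.0] * n for _ in range(H + 1)]   # V[H] = 0 (terminal)
    pi = [[0] * n for _ in range(H)]
    for h in range(H - 1, -1, -1):          # backward induction over the horizon
        for s in range(n):
            best_q, best_a = -math.inf, 0
            for a in range(A):
                N = max(1, counts[s][a])
                r_hat = rew_sum[s][a] / N
                # UCB-style exploration bonus; c is an illustrative constant
                bonus = c * math.sqrt(math.log(2 * n * A * H) / N)
                ev = sum(cnt / N * V[h + 1][s2]
                         for s2, cnt in trans_counts[s][a].items())
                q = min(1.0, r_hat + bonus) + ev  # rewards assumed in [0, 1]
                if q > best_q:
                    best_q, best_a = q, a
            V[h][s] = best_q
            pi[h][s] = best_a
    return V, pi
```

With no observations, every action receives the maximal optimistic value, so the policy explores; as counts grow, the bonus shrinks and the values concentrate on the empirical model. The T^{2/3} lower bound in the abstract concerns exactly this class of discretization-based algorithms.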
