Direct Uncertainty Estimation in Reinforcement Learning

2013-06-06

Sergey Rodionov, Alexey Potapov, Yurii Vinogradov

Abstract

The optimal probabilistic approach to reinforcement learning is computationally infeasible. Its common simplification, which neglects the difference between the true environment and a model estimated from a limited number of observations, gives rise to the exploration-vs-exploitation problem. Uncertainty can be expressed as a probability distribution over the space of environment models, and this uncertainty can be propagated to the action-value function via Bellman iterations, although this is not computationally efficient enough. We consider the possibility of directly estimating the uncertainty of the action-value function, and analyze whether this simplified approach is sufficient.
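The abstract's baseline, propagating model uncertainty to the action-value function via Bellman iterations, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy MDP, the transition counts, and the Dirichlet prior are all assumptions made for the example. Transition models are sampled from a posterior over environments and each sample is solved by value iteration, yielding a distribution over Q-values whose spread measures uncertainty.

```python
import numpy as np

# Hypothetical toy MDP: 2 states, 2 actions, known rewards, unknown transitions.
# counts[s, a, s'] are assumed observation counts from limited experience.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9
counts = np.array([[[3.0, 1.0], [1.0, 3.0]],
                   [[2.0, 2.0], [4.0, 1.0]]])
rewards = np.array([[1.0, 0.0],   # rewards[s, a]
                    [0.0, 1.0]])

def value_iteration(P, R, gamma, iters=200):
    """Solve the Bellman optimality equations for one sampled model P."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)            # greedy state values
        Q = R + gamma * (P @ V)      # Bellman backup under model P
    return Q

# Propagate model uncertainty: sample transition models from a
# Dirichlet posterior (uniform prior alpha = 1) and solve each one.
samples = []
for _ in range(500):
    P = np.array([[rng.dirichlet(counts[s, a] + 1.0)
                   for a in range(n_actions)] for s in range(n_states)])
    samples.append(value_iteration(P, rewards, gamma))
samples = np.stack(samples)

Q_mean, Q_std = samples.mean(axis=0), samples.std(axis=0)
print("Q mean:\n", Q_mean)
print("Q std (propagated uncertainty):\n", Q_std)
```

The per-(s, a) standard deviation `Q_std` is the quantity the paper proposes to estimate directly, without repeating the costly sample-and-solve loop for every posterior update.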
