
Non-Deterministic Policy Improvement Stabilizes Approximated Reinforcement Learning

2016-12-22

Wendelin Böhmer, Rong Guo, Klaus Obermayer


Abstract

This paper investigates a type of instability that is linked to greedy policy improvement in approximated reinforcement learning. We show empirically that non-deterministic policy improvement can stabilize methods like LSPI by controlling the stochasticity of the improvement step. Additionally, we show that a suitable representation of the value function also stabilizes the solution to some degree. The presented approach is simple and should be easily transferable to more sophisticated algorithms such as deep reinforcement learning.
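The core idea described in the abstract, replacing the greedy argmax in policy improvement with a stochastic rule whose randomness is explicitly controlled, can be sketched as a softmax (Boltzmann) policy over Q-values. This is a minimal illustrative sketch, not the authors' exact formulation; the function name and the inverse-temperature parameter `beta` are assumptions introduced here for clarity.

```python
import numpy as np

def softmax_policy(q_values, beta=1.0):
    """Non-deterministic policy improvement via a softmax over Q-values.

    `beta` (inverse temperature) controls the stochasticity:
    beta -> infinity recovers the greedy (argmax) policy,
    beta = 0 yields a uniform random policy.
    """
    q = np.asarray(q_values, dtype=float)
    z = beta * (q - q.max())        # shift by the max for numerical stability
    p = np.exp(z)
    return p / p.sum()              # action-selection probabilities

# Example: moderate beta keeps some probability mass on non-greedy actions,
# which is the mechanism hypothesized to stabilize LSPI-style iteration.
probs = softmax_policy([1.0, 2.0, 3.0], beta=1.0)
```

In an LSPI-like loop, each policy-evaluation step would then use (or sample from) these probabilities instead of committing deterministically to the current greedy action.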
