Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update

2018-05-31 · ICLR 2018

Su Young Lee, Sungik Choi, Sae-Young Chung

Abstract

We propose Episodic Backward Update (EBU), a novel deep reinforcement learning algorithm with direct value propagation. In contrast to the conventional use of experience replay with uniform random sampling, our agent samples a whole episode and successively propagates the value of each state to its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate directly through all transitions of the sampled episode. We theoretically prove the convergence of the EBU method and experimentally demonstrate its performance in both deterministic and stochastic environments. In particular, on 49 games of the Atari 2600 domain, EBU achieves the same mean and median human-normalized performance as DQN while using only 5% and 10% of the samples, respectively.
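The backward recursion the abstract describes can be made concrete in a few lines. Below is a minimal sketch of the target computation: sample one full episode, then sweep from the terminal transition back to the first, diffusing each freshly computed target into the stored Q-values before bootstrapping from them (the paper controls this mixing with a diffusion coefficient β). The function name, array layout, and default hyperparameters here are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def ebu_targets(rewards, next_q, taken_actions, gamma=0.99, beta=0.5):
    """Sketch of Episodic Backward Update targets for one sampled episode.

    rewards:       (T,)   rewards r_1..r_T along the episode
    next_q:        (T, A) target-network Q-values Q(s_{k+1}, ·) at each step
    taken_actions: (T,)   integer actions a_1..a_T actually taken
    gamma:         discount factor
    beta:          diffusion coefficient mixing backed-up targets with the
                   target network's own estimates (beta=1 gives a pure
                   backward recursion; beta=0 falls back to one-step targets)
    """
    T = len(rewards)
    q_tilde = next_q.copy()          # temporary target table, updated in place
    y = np.zeros(T)
    y[T - 1] = rewards[T - 1]        # terminal transition: no bootstrapping
    for k in range(T - 2, -1, -1):   # propagate values backward through the episode
        # diffuse the freshly computed target into the action taken at step k+1
        a_next = taken_actions[k + 1]
        q_tilde[k, a_next] = beta * y[k + 1] + (1.0 - beta) * q_tilde[k, a_next]
        y[k] = rewards[k] + gamma * q_tilde[k].max()
    return y
```

In a full agent, the resulting targets y_k would replace the usual one-step DQN targets: the online network is regressed toward y_k at each (s_k, a_k) with the standard squared loss. Because every transition of the episode receives a target in a single backward sweep, a sparse terminal reward reaches the start of the episode after one update rather than after many uniformly sampled ones.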
