
Reverse Experience Replay

2019-10-19

Egor Rotinov


Abstract

This paper describes an improvement to Deep Q-learning called Reverse Experience Replay (RER) that mitigates the problem of sparse rewards in reward-maximizing tasks by sampling transitions successively in reverse order. On tasks with sufficient training experience and Experience Replay memory capacity, a Deep Q-Network with Reverse Experience Replay shows competitive results against both Double DQN with standard Experience Replay and vanilla DQN. Moreover, RER achieves significantly better results on tasks where experience and replay memory capacity are limited.
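The core idea above can be sketched as a replay buffer that, instead of sampling transitions uniformly, picks a random endpoint in the stored history and returns a contiguous batch walking backward from it, so reward information propagates from later states to earlier ones. This is a minimal illustrative sketch, not the author's implementation; the class and method names are hypothetical.

```python
import random
from collections import deque

class ReverseExperienceReplay:
    """Hypothetical sketch of a replay buffer that returns transitions
    in reverse temporal order, as described in the abstract."""

    def __init__(self, capacity):
        # Bounded memory: oldest transitions are evicted first.
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        # transition = (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample_reversed(self, batch_size):
        # Pick a random endpoint, then collect a contiguous batch
        # walking backward from it (latest transition first).
        end = random.randint(batch_size, len(self.buffer))
        return [self.buffer[i] for i in range(end - 1, end - batch_size - 1, -1)]

buffer = ReverseExperienceReplay(capacity=100)
for t in range(10):
    # Toy transitions whose "state" is just the timestep index.
    buffer.push((t, 0, float(t), t + 1, False))

batch = buffer.sample_reversed(4)
# Timesteps in the batch decrease: transitions arrive latest-first.
```

Feeding such reversed batches to the Q-learning update means each transition is trained on after its successor, so a sparse terminal reward can flow backward through the episode in far fewer passes than with uniform sampling.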
