Fast deep reinforcement learning using online adjustments from the past
2018-10-18 · NeurIPS 2018
Steven Hansen, Pablo Sprechmann, Alexander Pritzel, André Barreto, Charles Blundell
Code
- github.com/deepmind/open_spiel (★ 5,095)
- github.com/AnnaNikitaRL/EVA (PyTorch, ★ 4)
Abstract
We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by planning over experience tuples from the replay buffer near the current state. EVA combines several recent ideas on integrating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and Atari games.
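The abstract describes blending a parametric value estimate with a non-parametric one retrieved from nearby replay experience. A minimal sketch of that combination is below; the function name, the Euclidean nearest-neighbour retrieval, the simple averaging of stored values, and the mixing weight `lam` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def eva_q_values(q_theta, replay_keys, replay_q_np, state_embedding,
                 k=5, lam=0.5):
    """Blend parametric Q-values with a non-parametric estimate
    retrieved from the k nearest replay-buffer states.

    q_theta         : (num_actions,) Q-values from the neural network
    replay_keys     : (buffer_size, d) state embeddings stored in replay
    replay_q_np     : (buffer_size, num_actions) values attached to each
                      stored tuple (e.g. from planning over trajectories)
    state_embedding : (d,) embedding of the current state
    """
    # Content-based retrieval: Euclidean distance in embedding space.
    dists = np.linalg.norm(replay_keys - state_embedding, axis=1)
    nearest = np.argsort(dists)[:k]
    # Non-parametric estimate: average the values stored for the
    # retrieved experience tuples (a stand-in for memory-based planning).
    q_np = replay_q_np[nearest].mean(axis=0)
    # Ephemeral adjustment: convex combination of the two estimates.
    return lam * q_theta + (1.0 - lam) * q_np
```

Setting `lam=1.0` recovers the purely parametric agent, so the adjustment can be viewed as an interpolation knob between the network and the replay-derived estimate.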