SOTAVerified

Rainbow: Combining Improvements in Deep Reinforcement Learning

2017-10-06 · Code Available

Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver


Abstract

The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
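The abstract describes combining six DQN extensions and ablating each one in turn. As a minimal illustrative sketch (the component flags and function names below are hypothetical, not from the paper's codebase), the full agent and its ablations can be expressed as feature-flag configurations:

```python
# Hypothetical sketch: the six DQN extensions Rainbow combines,
# as feature flags, plus a generator of one-component-removed ablations.
# Names are illustrative only.

RAINBOW_COMPONENTS = [
    "double_q",            # Double Q-learning
    "prioritized_replay",  # Prioritized experience replay
    "dueling",             # Dueling network architecture
    "multi_step",          # Multi-step (n-step) returns
    "distributional",      # Distributional RL (C51-style value distribution)
    "noisy_nets",          # Noisy linear layers for exploration
]

def full_config():
    """The full Rainbow agent enables every extension."""
    return {name: True for name in RAINBOW_COMPONENTS}

def ablations():
    """Yield (removed_component, config) pairs: full agent minus one piece."""
    for removed in RAINBOW_COMPONENTS:
        cfg = full_config()
        cfg[removed] = False
        yield removed, cfg
```

This mirrors the paper's ablation design: each ablated agent differs from the full combination by exactly one disabled component, so any performance drop is attributable to that component.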

Benchmark Results

Dataset                      Model        Metric                           Claimed   Verified  Status
Atari 2600 Ms. Pacman        Rainbow      Score                            2,570.2   -         Unverified
Atari 2600 Space Invaders    Rainbow      Score                            12,629    -         Unverified
Atari-57                     Rainbow DQN  Mean Human Normalized Score      873.97    -         Unverified
Atari game                   Rainbow      Human World Record Breakthrough  4         -         Unverified
Atari games                  Rainbow DQN  Mean Human Normalized Score      873.97    -         Unverified
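The "Mean Human Normalized Score" metric in the table is conventionally computed per game by rescaling the agent's score so that a random agent maps to 0% and the human baseline to 100%, then averaging across games. A minimal sketch (function and parameter names are illustrative):

```python
def human_normalized_score(agent_score, random_score, human_score):
    """Per-game human-normalized score, as a percentage:
    0% = random-agent baseline, 100% = human baseline.
    Scores above 100% mean the agent outperforms the human reference."""
    return 100.0 * (agent_score - random_score) / (human_score - random_score)

def mean_hns(per_game_scores):
    """Mean human-normalized score over a list of
    (agent, random, human) score triples, one per game."""
    values = [human_normalized_score(a, r, h) for a, r, h in per_game_scores]
    return sum(values) / len(values)
```

Under this convention, a mean of 873.97 indicates performance averaging nearly nine times the human baseline across the benchmark's games.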

Reproductions