
Large-Scale Study of Curiosity-Driven Learning

2018-08-13 · ICLR 2019 · Code Available

Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros


Abstract

Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/
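The abstract's core idea — curiosity as the prediction error of a learned forward dynamics model, optionally computed in a fixed random feature space — can be illustrated with a minimal sketch. This is not the paper's code (the real agents use convolutional features and PPO); all sizes and names here (`W_feat`, `intrinsic_reward`, the toy linear dynamics) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's architecture).
OBS_DIM, ACT_DIM, FEAT_DIM = 8, 2, 16

# Fixed random embedding -- the "random features" variant from the study.
W_feat = rng.normal(size=(OBS_DIM, FEAT_DIM))

def features(obs):
    """Embed an observation with a fixed random linear map + nonlinearity."""
    return np.tanh(obs @ W_feat)

# Linear forward model f(phi(s), a) -> phi(s'), trained on its own error.
W_fwd = np.zeros((FEAT_DIM + ACT_DIM, FEAT_DIM))

def intrinsic_reward(obs, act, next_obs):
    """Curiosity reward: squared error of the forward model's prediction."""
    x = np.concatenate([features(obs), act])
    err = features(next_obs) - x @ W_fwd
    return float(np.mean(err ** 2)), x, err

def update_forward_model(x, err, lr=0.01):
    """One gradient step reducing the forward model's prediction error."""
    W_fwd[...] += lr * np.outer(x, err)  # in-place update of the global model

# Demo: on a deterministic transition the reward decays as the model learns,
# so the agent loses interest in states it can already predict.
obs = rng.normal(size=OBS_DIM)
act = np.array([1.0, 0.0])
next_obs = 0.9 * obs  # toy deterministic dynamics (an assumption)
r0, _, _ = intrinsic_reward(obs, act, next_obs)
for _ in range(200):
    _, x, err = intrinsic_reward(obs, act, next_obs)
    update_forward_model(x, err)
r_final, _, _ = intrinsic_reward(obs, act, next_obs)
```

The decaying reward on predictable transitions is also the source of the limitation the paper notes for stochastic setups: if `next_obs` were irreducibly random, the prediction error (and hence the reward) would never vanish, and the agent could fixate on that noise.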

Benchmark Results

Dataset                          Model                   Metric  Claimed  Verified  Status
Atari 2600 Freeway               Intrinsic Reward Agent  Score   32.8     —         Unverified
Atari 2600 Gravitar              Intrinsic Reward Agent  Score   1,165.1  —         Unverified
Atari 2600 Montezuma's Revenge   Intrinsic Reward Agent  Score   2,504.6  —         Unverified
Atari 2600 Private Eye           Intrinsic Reward Agent  Score   3,036.5  —         Unverified
Atari 2600 Venture               Intrinsic Reward Agent  Score   416      —         Unverified

Reproductions