SOTAVerified

Continuous control with deep reinforcement learning

2015-09-09

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra

Code Available — Be the first to reproduce this paper.


Abstract

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
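The abstract's core recipe (a deterministic actor, a Q-value critic, and slowly-updated target copies of both) can be sketched with toy linear function approximators. Everything below is illustrative, not the paper's deep-network architecture: the linear models, the learning rate, and the value of tau are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, tau, gamma = 3, 1, 0.005, 0.99

# Toy linear actor pi(s) = s @ W_pi and critic Q(s, a) = [s, a] @ w_q,
# standing in for the deep networks used in the paper.
W_pi = rng.normal(size=(obs_dim, act_dim)) * 0.1
w_q = rng.normal(size=(obs_dim + act_dim,)) * 0.1
W_pi_targ, w_q_targ = W_pi.copy(), w_q.copy()

def soft_update(target, online, tau):
    # Polyak averaging: theta' <- tau * theta + (1 - tau) * theta'
    return tau * online + (1.0 - tau) * target

# One TD-style critic update on a single fabricated transition (s, a, r, s').
s = rng.normal(size=obs_dim)
a = rng.normal(size=act_dim)
r, s2 = 1.0, rng.normal(size=obs_dim)

a2 = s2 @ W_pi_targ                                    # target actor picks next action
y = r + gamma * np.concatenate([s2, a2]) @ w_q_targ    # bootstrapped target value
q = np.concatenate([s, a]) @ w_q
td_err = y - q
w_q = w_q + 1e-3 * td_err * np.concatenate([s, a])     # gradient step on (y - q)^2

# Target networks slowly track the online weights.
w_q_targ = soft_update(w_q_targ, w_q, tau)
W_pi_targ = soft_update(W_pi_targ, W_pi, tau)
```

The slow target updates are what make the bootstrapped target `y` stable enough to learn from; the full algorithm also samples minibatches from a replay buffer and adds exploration noise to the actor's output.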

Tasks

Benchmark Results

| Dataset        | Model | Metric         | Claimed   | Verified | Status     |
|----------------|-------|----------------|-----------|----------|------------|
| Ant-v4         | DDPG  | Average Return | 1,712.12  |          | Unverified |
| HalfCheetah-v4 | DDPG  | Average Return | 14,934.86 |          | Unverified |
| Hopper-v4      | DDPG  | Average Return | 1,290.24  |          | Unverified |
| Humanoid-v4    | DDPG  | Average Return | 139.14    |          | Unverified |
| Walker2d-v4    | DDPG  | Average Return | 2,994.54  |          | Unverified |
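The "Average Return" column is conventionally the undiscounted sum of rewards per evaluation episode, averaged over episodes. The exact evaluation protocol behind these numbers (episode count, seeds, deterministic vs. noisy actions) is not stated on this page, so the helper below is a sketch of the usual convention, not this site's definition.

```python
def average_return(episodes):
    """Mean undiscounted return over evaluation rollouts.

    episodes: list of per-step reward lists, one list per rollout.
    """
    return sum(sum(ep) for ep in episodes) / len(episodes)

# Two rollouts with episode returns 3.0 and 3.0 -> average return 3.0
average_return([[1.0, 2.0], [3.0]])
```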

Reproductions