SOTAVerified

Evaluating task-agnostic exploration for fixed-batch learning of arbitrary future tasks

2019-11-20 · Code Available

Vibhavari Dasagi, Robert Lee, Jake Bruce, Jürgen Leitner

Abstract

Deep reinforcement learning has been shown to solve challenging tasks where large amounts of training experience are available, usually obtained online while learning the task. Robotics is a significant potential application domain for many of these algorithms, but generating robot experience in the real world is expensive, especially when each task requires a lengthy online training procedure. Off-policy algorithms can in principle learn arbitrary tasks from a sufficiently diverse fixed dataset. In this work, we evaluate popular exploration methods by generating robotics datasets for the purpose of learning to solve tasks completely offline, without any further interaction in the real world. We present results on three popular continuous control tasks in simulation, as well as continuous control of a high-dimensional real robot arm. Code documenting all algorithms, experiments, and hyper-parameters is available at https://github.com/qutrobotlearning/batchlearning.
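To make the fixed-batch setting concrete, here is a minimal illustrative sketch (not the paper's code, which lives at the repository above): tabular Q-learning applied offline to a small, previously collected batch of transitions, with no further environment interaction. The toy chain MDP, its transitions, and all hyper-parameters are assumptions chosen for illustration only.

```python
# Hypothetical toy example of fixed-batch (offline) off-policy learning:
# tabular Q-learning over a static dataset, never querying the environment.

# Fixed dataset of (state, action, reward, next_state, done) transitions,
# e.g. gathered earlier by some exploration policy on a 3-state chain MDP
# where action 1 moves right and reaching state 2 yields reward 1.
dataset = [
    (0, 1, 0.0, 1, False),  # move right from state 0
    (1, 1, 1.0, 2, True),   # move right from state 1, reach the goal
    (1, 0, 0.0, 0, False),  # move left back to state 0
    (0, 0, 0.0, 0, False),  # bump into the left wall
]

n_states, n_actions = 3, 2
gamma, alpha = 0.9, 0.5  # discount factor and learning rate (assumed values)
Q = [[0.0] * n_actions for _ in range(n_states)]

# Repeatedly sweep the fixed batch; no new transitions are ever generated,
# so the quality of the learned policy depends entirely on dataset coverage.
for _ in range(200):
    for s, a, r, s2, done in dataset:
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])

# Greedy policy recovered purely from the batch: move right in states 0 and 1.
policy = [max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)]
print(policy[:2])  # → [1, 1]
```

Note that state 2 is absent as a source state in the dataset, so its Q-values stay at zero: this is exactly why the diversity of the exploration data matters in the fixed-batch setting the abstract describes.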
