
WALL-E: An Efficient Reinforcement Learning Research Framework

2019-01-18

Tianbing Xu, Andrew Zhang, Liang Zhao


Abstract

An RL system's training time splits into two parts: experience collection and policy learning. When rollouts require a large number of samples, experience collection becomes the major bottleneck, so it pays to speed up rollout generation with multi-process architecture support. Our framework, dubbed WALL-E, runs multiple rollout samplers in parallel to rapidly generate experience. With these parallel samplers we observe not only faster convergence times but also higher average rewards. For example, on the MuJoCo HalfCheetah-v2 task with N = 10 parallel sampler processes, we achieve a much higher average return than a single-process architecture.
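The parallel-sampler design described above can be sketched with Python's standard `multiprocessing` module. This is a minimal illustration, not WALL-E's actual implementation: the worker function, the toy trajectory it produces, and the function names (`rollout_worker`, `collect_experience`) are all hypothetical stand-ins for a real MuJoCo rollout.

```python
import multiprocessing as mp

def rollout_worker(worker_id, num_steps, queue):
    # Hypothetical stand-in for a MuJoCo environment rollout: each
    # worker collects a trajectory of (step, worker_id, reward) tuples.
    trajectory = [(step, worker_id, 1.0) for step in range(num_steps)]
    queue.put((worker_id, trajectory))

def collect_experience(num_workers=4, steps_per_worker=100):
    """Launch num_workers sampler processes and gather their rollouts."""
    queue = mp.Queue()
    workers = [
        mp.Process(target=rollout_worker, args=(i, steps_per_worker, queue))
        for i in range(num_workers)
    ]
    for w in workers:
        w.start()
    # Drain the queue before joining to avoid a deadlock when the
    # workers' payloads exceed the pipe buffer.
    results = [queue.get() for _ in workers]
    for w in workers:
        w.join()
    return dict(results)

if __name__ == "__main__":
    batch = collect_experience(num_workers=4, steps_per_worker=10)
    print(len(batch))                             # trajectories collected
    print(sum(len(t) for t in batch.values()))    # total transitions
```

Since experience collection dominates when rollouts are sample-hungry, launching N such processes lets the learner consume roughly N times as much experience per unit of wall-clock time, which is the effect the abstract reports for N = 10.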
