moolib: A Platform for Distributed RL
Vegard Mella, Eric Hambro, Danielle Rothermel, Heinrich Küttler
Abstract
We present moolib, a library that enables the implementation of distributed reinforcement learning and other machine learning codebases. Our implementation aims to be both simple and scalable, targeting researchers with a wide range of available computing resources, e.g., from a single GPU to hundreds of GPUs. moolib is built around efficient remote procedure calls (RPCs) for both tensor and non-tensor data. Alongside the library, we present example user code that shows how moolib's components can be used to implement common reinforcement learning agents as a simple but scalable distributed network of homogeneous peers. Together with this whitepaper, moolib and its examples are provided as open source at github.com/facebookresearch/moolib.
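To make the RPC idea concrete, the following is a minimal, self-contained sketch of the pattern the abstract describes: a peer serves named procedures that accept both numeric ("tensor-like") and plain-Python payloads, and a caller dispatches calls by name and blocks on the reply. This uses only the Python standard library and is an illustration of the general pattern, not moolib's actual API; all names (`serve_one`, `call`, `handlers`) are hypothetical.

```python
import threading
from multiprocessing.connection import Listener, Client

def serve_one(listener, handlers):
    """Accept one connection, dispatch one named call, reply with the result.

    This is a stand-in for an RPC server loop; names here are illustrative,
    not moolib's API.
    """
    with listener.accept() as conn:
        name, args = conn.recv()          # (procedure name, argument tuple)
        conn.send(handlers[name](*args))  # run the handler, ship result back

def call(address, name, *args):
    """Caller side: connect to a peer, send one call, block on the reply."""
    with Client(address) as conn:
        conn.send((name, args))
        return conn.recv()

# Procedures served by this peer: one numeric ("tensor-like") payload,
# one non-tensor payload.
handlers = {
    "sum": lambda xs: sum(xs),
    "echo": lambda msg: {"reply": msg},
}

listener = Listener(("localhost", 0))     # port 0: let the OS pick a free port
addr = listener.address

# Serve one call per thread, call twice from the "client" side.
t1 = threading.Thread(target=serve_one, args=(listener, handlers))
t1.start()
r1 = call(addr, "sum", [1.0, 2.0, 3.0])
t1.join()

t2 = threading.Thread(target=serve_one, args=(listener, handlers))
t2.start()
r2 = call(addr, "echo", "hello")
t2.join()
listener.close()

print(r1, r2)
```

In a real distributed setup, the numeric payloads would be tensors and the peers would run on separate machines; the point of the sketch is only the call-by-name dispatch over a connection, which is the shape of service an RPC layer provides.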