DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames
Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, Dhruv Batra
Code
- github.com/facebookresearch/habitat-api (official, in paper; PyTorch, ★ 2,899)
- github.com/opendilab/DI-engine (PyTorch, ★ 3,606)
- github.com/jacobkrantz/VLN-CE (PyTorch, ★ 748)
- github.com/allenai/robothor-challenge (★ 97)
- github.com/GT-RIPL/robo-vln (PyTorch, ★ 89)
- github.com/yangysc/resinet (PyTorch, ★ 10)
- github.com/ray-project/ray/tree/master/rllib
- github.com/facebookresearch/habitat-api/tree/master/habitat_baselines/rl/ddppo (PyTorch)
Abstract
We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever stale), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear scaling -- achieving a speedup of 107x on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 billion steps of experience (the equivalent of 80 years of human experience) -- over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs. This massive-scale training not only sets the state of the art on the Habitat Autonomous Navigation Challenge 2019, but essentially solves the task -- near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene understanding and navigation policies learned can be transferred to other navigation tasks -- the analog of ImageNet pre-training + task-specific fine-tuning for embodied AI. Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models and code are publicly available).
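The decentralized, synchronous update the abstract describes can be sketched as follows. This is a toy illustration, not the paper's implementation: the function names `allreduce_mean` and `ddppo_update` are ours, the "all-reduce" is simulated with a plain average over in-process worker gradients, and a bare SGD step stands in for the full PPO optimizer. The point it demonstrates is the key property: every worker applies the same averaged gradient, so all policy replicas stay exactly in sync without a central parameter server.

```python
def allreduce_mean(grads):
    # Synchronous all-reduce (simulated): average the per-worker gradients
    # and hand every worker the same result. No centralized server holds
    # the parameters, and no gradient is ever stale.
    n = len(grads)
    avg = [sum(g[i] for g in grads) / n for i in range(len(grads[0]))]
    return [list(avg) for _ in grads]


def ddppo_update(params, worker_grads, lr=0.1):
    # One decentralized, synchronous step: each worker applies the
    # identical averaged gradient to its own parameter copy, so all
    # replicas remain bit-for-bit identical after the update.
    reduced = allreduce_mean(worker_grads)
    return [[p - lr * g for p, g in zip(ps, gs)]
            for ps, gs in zip(params, reduced)]


# Toy demo: 4 workers start from identical parameters, compute different
# local gradients, and end the step with identical parameters again.
params = [[0.0, 0.0, 0.0] for _ in range(4)]
grads = [[(i + 1) * 1.0, (i + 1) * 2.0, (i + 1) * 3.0] for i in range(4)]
new_params = ddppo_update(params, grads)
```

In the actual system the averaging is done with a distributed gradient all-reduce across GPUs (in the style of PyTorch's `torch.distributed`), which is what gives the near-linear scaling reported above.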
Tasks
- PointGoal Navigation
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Habitat 2020 Object Nav test-std | RGBD+DD-PPO | SPL | 0.02 | — | Unverified |
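The SPL metric in the table is Success weighted by Path Length (Anderson et al., 2018). It can be computed as below; this is a sketch of the standard formula, not code from the paper: for each episode, the success indicator is scaled by the ratio of the shortest-path length to the length of the path the agent actually took, then averaged over episodes.

```python
def spl(successes, shortest_lengths, taken_lengths):
    # SPL = (1/N) * sum_i( S_i * l_i / max(p_i, l_i) ), where S_i is the
    # 0/1 success indicator, l_i the geodesic shortest-path length, and
    # p_i the length of the agent's actual path. The max() guards against
    # an agent "beating" the shortest path due to measurement noise.
    total = 0.0
    for s, l, p in zip(successes, shortest_lengths, taken_lengths):
        total += s * l / max(p, l)
    return total / len(successes)


# Two episodes: a success along a slightly suboptimal path, and a failure
# (which contributes 0 regardless of path length).
score = spl([1, 0], [10.0, 8.0], [12.0, 20.0])
```

An SPL of 1.0 requires succeeding on every episode along the shortest possible path, which is why the abstract's "near-perfect" claim for PointGoal navigation is a strong result, and why the 0.02 SPL on the harder ObjectNav task above leaves substantial headroom.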