Learning to Drive in a Day
Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah
Code
- github.com/ZexinLi0w0/R3 (PyTorch, ★ 5)
- github.com/nautilusPrime/autodrive_ddpg (★ 0)
- github.com/araffin/learning-to-drive-in-5-minutes (PyTorch, ★ 0)
- github.com/bryonkucharski/Learning-to-Drive-with-Reinforcement-Learning-and-Variational-Autoencoders (PyTorch, ★ 0)
- github.com/ankur-rc/autodrive_ddpg (★ 0)
- github.com/bitsauce/Carla-ppo (TensorFlow, ★ 0)
- github.com/B-C-WANG/ReinforcementLearningInAutoPilot (★ 0)
- github.com/bryonkucharski/learning-to-drive-in-a-day-reproduction (PyTorch, ★ 0)
Abstract
We demonstrate the first application of deep reinforcement learning to autonomous driving. From randomly initialised parameters, our model is able to learn a policy for lane following in a handful of training episodes using a single monocular image as input. We provide a general and easy-to-obtain reward: the distance travelled by the vehicle without the safety driver taking control. We use a continuous, model-free deep reinforcement learning algorithm, with all exploration and optimisation performed on-vehicle. This demonstrates a new framework for autonomous driving which moves away from reliance on defined logical rules, mapping, and direct supervision. We discuss the challenges and opportunities to scale this approach to a broader range of autonomous driving tasks.
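The reward described in the abstract can be sketched as a simple episodic rollout: the agent accumulates distance travelled at each step, and the episode terminates as soon as the safety driver intervenes. This is a minimal illustrative sketch, not the paper's implementation; the names `run_episode`, `ToyEnv`, `speed_mps`, and `intervened` are assumptions for illustration.

```python
# Hypothetical sketch of the episodic reward from the abstract: reward is the
# distance travelled by the vehicle, and the episode ends when the safety
# driver takes control. All names here are illustrative, not from the paper.

def run_episode(policy, env, dt=0.1, max_steps=1000):
    """Roll out one episode; return total distance travelled (metres)."""
    obs = env.reset()
    distance = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, speed_mps, intervened = env.step(action)
        distance += speed_mps * dt  # per-step reward: forward distance covered
        if intervened:              # safety driver took control -> terminate
            break
    return distance

# Toy stand-in environment: constant 5 m/s, intervention after 20 steps.
class ToyEnv:
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return None
    def step(self, action):
        self.t += 1
        return None, 5.0, self.t >= 20

print(run_episode(lambda obs: 0.0, ToyEnv()))  # -> 10.0 (20 steps * 5 m/s * 0.1 s)
```

In the paper this scalar return drives a continuous, model-free deep RL algorithm trained on-vehicle; the sketch above only captures the reward structure, not the policy optimisation.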