
Learning to Fly via Deep Model-Based Reinforcement Learning

2020-03-19 · Code Available

Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt


Abstract

Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications. Yet, reinforcement learning has only achieved limited impact on real-time robot control due to its high demand for real-world interactions. In this work, by leveraging a learnt probabilistic model of drone dynamics, we learn a thrust-attitude controller for a quadrotor through model-based reinforcement learning. No prior knowledge of the flight dynamics is assumed; instead, a sequential latent variable model, used generatively and as an online filter, is learnt from raw sensory input. The controller and value function are optimised entirely by propagating stochastic analytic gradients through generated latent trajectories. We show that "learning to fly" can be achieved with less than 30 minutes of experience on a single drone, and that the controller can be deployed on a self-built drone using only onboard computational resources and sensors.
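The core idea in the abstract, optimising a policy by propagating stochastic analytic gradients through trajectories generated by a learnt latent model, can be illustrated with a minimal sketch. The example below is a hypothetical toy, not the paper's model: it assumes a one-dimensional linear-Gaussian latent dynamics `z' = a*z + b*u + sigma*eps`, a linear policy `u = k*z`, and a quadratic cost. Because the noise `eps` is sampled once and held fixed (the reparameterisation trick), the return is a deterministic, differentiable function of the policy gain `k`, and its gradient can be computed analytically by the chain rule through the rollout.

```python
import numpy as np

def rollout_return(k, a, b, sigma, z0, eps, gamma=0.99):
    # Reparameterised rollout: with the noise sequence eps fixed, the
    # discounted return is a deterministic function of the policy gain k.
    z, ret = z0, 0.0
    for t, e in enumerate(eps):
        ret += (gamma ** t) * -(z ** 2)  # quadratic cost as negative reward
        u = k * z                        # linear policy (illustrative)
        z = a * z + b * u + sigma * e    # toy linear-Gaussian latent dynamics
    return ret

def pathwise_grad(k, a, b, sigma, z0, eps, gamma=0.99):
    # Analytic (pathwise) gradient dR/dk through the rollout.
    # Since z_{t+1} = (a + b*k) * z_t + sigma * e_t, we have
    #   dz_{t+1}/dk = (a + b*k) * dz_t/dk + b * z_t,
    # which we accumulate alongside the state.
    z, dz, grad = z0, 0.0, 0.0
    for t, e in enumerate(eps):
        grad += (gamma ** t) * -2.0 * z * dz
        dz = (a + b * k) * dz + b * z
        z = (a + b * k) * z + sigma * e
    return grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eps = rng.standard_normal(20)
    # Gradient ascent on the return, entirely inside the "learnt" model.
    k = 0.1
    for _ in range(200):
        k += 1e-3 * pathwise_grad(k, 0.9, 0.5, 0.1, 1.0, eps)
    print("policy gain after training:", k)
```

In the paper this same principle is applied with a deep sequential latent variable model and automatic differentiation in place of the hand-derived chain rule; the toy above only makes the mechanism explicit. A quick sanity check is that the analytic gradient matches a central finite difference of the return.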
