
DropoutDAgger: A Bayesian Approach to Safe Imitation Learning

2017-09-18

Kunal Menda, Katherine Driggs-Campbell, Mykel J. Kochenderfer


Abstract

While imitation learning is becoming common practice in robotics, this approach often suffers from data mismatch and compounding errors. DAgger is an iterative algorithm that addresses these issues by continually aggregating training data from both the expert and novice policies, but it does not consider the impact of safety. We present a probabilistic extension to DAgger, which uses the distribution over actions provided by the novice policy for a given observation. Our method, which we call DropoutDAgger, uses dropout to train the novice as a Bayesian neural network that provides insight into its confidence. Using the distribution over the novice's actions, we estimate a probabilistic measure of safety with respect to the expert action, tuned to balance exploration and exploitation. The utility of this approach is evaluated on the MuJoCo HalfCheetah and in a simple driving experiment, demonstrating improved performance and safety compared to other DAgger variants and classic imitation learning.
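The abstract's core idea can be sketched in code: keep dropout active at test time to draw Monte Carlo samples from the novice policy, then compare the expert's action against the resulting action distribution before letting the novice act. This is a minimal illustrative sketch, not the authors' implementation; the toy MLP, the diagonal-covariance distance check, and the threshold `tau` are all assumptions introduced here for illustration.

```python
import numpy as np

def mc_dropout_actions(obs, weights, n_samples=50, p_drop=0.1, seed=None):
    """Sample actions from a toy one-hidden-layer policy network,
    keeping dropout stochastic at inference (MC dropout).
    Hypothetical stand-in for the paper's Bayesian novice policy."""
    rng = np.random.default_rng(seed)
    W1, W2 = weights
    actions = []
    for _ in range(n_samples):
        h = np.maximum(obs @ W1, 0.0)            # hidden layer with ReLU
        mask = rng.random(h.shape) >= p_drop     # dropout stays on at test time
        h = h * mask / (1.0 - p_drop)            # inverted-dropout rescaling
        actions.append(h @ W2)
    return np.stack(actions)                     # shape: (n_samples, action_dim)

def novice_is_safe(action_samples, expert_action, tau=3.0):
    """Probabilistic safety check: accept the novice if the expert action
    lies within tau standardized units of the novice's action distribution.
    tau is a hypothetical threshold trading exploration for exploitation."""
    mu = action_samples.mean(axis=0)
    var = action_samples.var(axis=0) + 1e-8      # diagonal covariance estimate
    dist = np.sqrt(np.sum((expert_action - mu) ** 2 / var))
    return bool(dist <= tau)
```

In a DAgger-style loop, the executed action would be the novice's mean action when `novice_is_safe` returns true and the expert's action otherwise, while the aggregated dataset is always labeled with the expert's action.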
