
Versatile Inverse Reinforcement Learning via Cumulative Rewards

2021-11-15

Niklas Freymuth, Philipp Becker, Gerhard Neumann


Abstract

Inverse Reinforcement Learning infers a reward function from expert demonstrations, aiming to encode the behavior and intentions of the expert. Current approaches usually do this with generative, uni-modal models, meaning that they encode a single behavior. In the common setting where there are multiple solutions to a problem and the experts show versatile behavior, this severely limits the generalization capabilities of these methods. We propose a novel method for Inverse Reinforcement Learning that overcomes these problems by formulating the recovered reward as a sum of iteratively trained discriminators. We show on simulated tasks that our approach recovers general, high-quality reward functions and produces policies of the same quality as behavioral cloning approaches designed for versatile behavior.
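The abstract's central idea, a recovered reward formed as a sum of iteratively trained discriminators, can be illustrated with a minimal sketch. This is not the authors' code: the `Discriminator` type, the per-state logits, and the toy discriminators below are illustrative assumptions, showing only how per-iteration discriminator outputs would accumulate into a single reward.

```python
from typing import Callable, List

# A state is abstracted as a single float; each "discriminator" is a toy
# callable mapping a state to a logit (hypothetical stand-ins for the
# discriminators trained in successive iterations).
State = float
Discriminator = Callable[[State], float]

def cumulative_reward(discriminators: List[Discriminator], state: State) -> float:
    """Form the recovered reward by summing the discriminator outputs."""
    return sum(d(state) for d in discriminators)

# Two hypothetical discriminators from two training iterations.
d1: Discriminator = lambda s: 1.0 if s > 0 else -1.0
d2: Discriminator = lambda s: 0.5 * s

print(cumulative_reward([d1, d2], 2.0))  # 1.0 + 1.0 = 2.0
```

Because the reward is an additive combination rather than a single generative model, each new discriminator can, in principle, capture a different mode of the experts' versatile behavior.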
