
Generalized Maximum Causal Entropy for Inverse Reinforcement Learning

2019-11-16

Tien Mai, Kennard Chan, Patrick Jaillet


Abstract

We consider the problem of learning from demonstrated trajectories with inverse reinforcement learning (IRL). Motivated by a limitation of the classical maximum entropy model in capturing the structure of the network of states, we propose an IRL model based on a generalized version of the causal entropy maximization problem, which allows us to generate a class of maximum entropy IRL models. Our generalized model has the advantage of recovering, in addition to a reward function, another expert function that (partially) captures the impact of the connecting structure of the states on experts' decisions. Empirical evaluation on a real-world dataset and a grid-world dataset shows that our generalized model outperforms the classical ones in terms of recovering reward functions and demonstrated trajectories.
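For context, the classical maximum entropy IRL baseline that the paper generalizes alternates a soft (entropy-regularized) backward pass with a forward pass over expected state visitations, and updates reward weights toward the experts' empirical feature counts. The sketch below is not the paper's generalized model; it is a minimal illustration of the classical scheme on a hypothetical deterministic chain MDP with one-hot state features and made-up demonstrations.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): a 1-D chain of states with
# one-hot state features, deterministic left/right actions, and a few
# demonstrated trajectories that all move toward the rightmost state.
n_states, n_actions, horizon = 5, 2, 4
features = np.eye(n_states)

def step(s, a):
    # action 0 = move left, action 1 = move right (chain dynamics)
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

demos = [[0, 1, 2, 3, 4], [1, 2, 3, 4, 4]]  # expert state trajectories

# Empirical feature expectation from the demonstrations
f_expert = np.mean([features[traj].sum(axis=0) for traj in demos], axis=0)

theta = np.zeros(n_states)  # linear reward weights
for _ in range(200):
    reward = features @ theta
    # Backward pass: soft value iteration yields the stochastic policy
    # pi(a|s) proportional to exp(Q(s,a)) under the max-entropy model.
    V = np.zeros(n_states)
    policy = np.full((n_states, n_actions), 1.0 / n_actions)
    for _ in range(horizon):
        Q = np.array([[reward[s] + V[step(s, a)] for a in range(n_actions)]
                      for s in range(n_states)])
        V = np.log(np.exp(Q).sum(axis=1))  # soft max over actions
        policy = np.exp(Q - V[:, None])
    # Forward pass: expected state visitation counts under the policy,
    # starting from the empirical distribution of demo start states.
    d = np.zeros(n_states)
    for traj in demos:
        d[traj[0]] += 1.0 / len(demos)
    visits = d.copy()
    for _ in range(horizon):
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[step(s, a)] += d[s] * policy[s, a]
        d = d_next
        visits += d
    # Gradient of the demo log-likelihood: expert features minus
    # expected features under the current max-entropy policy.
    theta += 0.1 * (f_expert - visits @ features)

# The learned reward should peak at the state the demos drive toward.
print(int(np.argmax(features @ theta)))
```

The generalized causal entropy model in the paper replaces the entropy term in this objective, which is how it additionally recovers a function reflecting the connectivity structure of the state network.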
