
Online Observer-Based Inverse Reinforcement Learning

2020-11-03

Ryan Self, Kevin Coleman, He Bai, Rushikesh Kamalapurkar

Abstract

In this paper, a novel approach to output-feedback inverse reinforcement learning (IRL) is developed by casting IRL, for linear systems with quadratic cost functions, as a state estimation problem. Two observer-based IRL techniques are developed, including a novel observer method that re-uses previous state estimates via history stacks. Theoretical guarantees of convergence and robustness are established under appropriate excitation conditions. Simulations demonstrate the performance of the developed observers and filters under both noisy and noise-free measurements.
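The abstract's "history stack" idea can be illustrated in isolation. In concurrent-learning-style estimators, past regressor/measurement pairs are stored and replayed in the update law, so the parameter estimate keeps converging even after instantaneous excitation vanishes. The sketch below is a minimal, hedged illustration of that mechanism on a generic linear-in-parameters problem (standing in for the unknown cost-function weights); it is not the paper's observer, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([2.0, -1.0, 0.5])  # unknown "true" parameters (illustrative)

# Collect a finite history stack of (regressor, measurement) pairs
# during an excited phase; the stack is then reused indefinitely.
stack = []
for k in range(20):
    phi = rng.normal(size=3)      # regressor at time k
    y = phi @ theta               # noiseless measurement
    stack.append((phi, y))

theta_hat = np.zeros(3)           # initial parameter estimate
gamma = 0.02                      # learning gain

for step in range(2000):
    # Concurrent-learning update: sum prediction-error gradients
    # over the stored stack rather than only the current sample.
    grad = sum(phi * (phi @ theta_hat - y) for phi, y in stack)
    theta_hat -= gamma * grad

print(np.round(theta_hat, 3))
```

Because the stacked regressors span the parameter space (a finite-excitation condition), the estimate converges to the true parameters even though no new data arrives during the update loop.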
