
Toward Computationally Efficient Inverse Reinforcement Learning via Reward Shaping

2023-12-15

Lauren H. Cooke, Harvey Klyne, Edwin Zhang, Cassidy Laidlaw, Milind Tambe, Finale Doshi-Velez

Abstract

Inverse reinforcement learning (IRL) is computationally challenging, with common approaches requiring the solution of multiple reinforcement learning (RL) sub-problems. This work motivates the use of potential-based reward shaping to reduce the computational burden of each RL sub-problem. It serves as a proof of concept that we hope will inspire future developments toward computationally efficient IRL.
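The core tool the abstract names, potential-based reward shaping (Ng, Harada, and Russell, 1999), replaces the reward r(s, a, s') with r(s, a, s') + γΦ(s') − Φ(s) for a potential function Φ, which leaves the optimal policy unchanged while (for a well-chosen Φ) making each RL sub-problem faster to solve. The sketch below illustrates this invariance on a toy chain MDP; the MDP, the potential function, and all names are illustrative assumptions, not the paper's actual experiments.

```python
# Potential-based reward shaping (Ng et al., 1999):
#   r'(s, a, s') = r(s, a, s') + gamma * phi(s') - phi(s)
# The chain MDP and potential below are illustrative, not the paper's setup.

GAMMA = 0.9
N = 5                  # states 0..4; state 4 is the terminal goal
ACTIONS = (-1, +1)     # move left / move right


def step(s, a):
    """Deterministic transition: clamp to [0, N-1]; reward 1 on reaching the goal."""
    s2 = max(0, min(N - 1, s + a))
    r = 1.0 if s2 == N - 1 and s != N - 1 else 0.0
    return s2, r


def value_iteration(reward_fn, iters=200):
    """Plain value iteration; returns the greedy action for each non-terminal state."""
    V = [0.0] * N
    for _ in range(iters):
        for s in range(N - 1):
            V[s] = max(reward_fn(s, a) + GAMMA * V[step(s, a)[0]] for a in ACTIONS)
    return [max(ACTIONS, key=lambda a: reward_fn(s, a) + GAMMA * V[step(s, a)[0]])
            for s in range(N - 1)]


def base_reward(s, a):
    return step(s, a)[1]


def phi(s):
    # Assumed potential: negative distance to goal, zero at the terminal
    # state so episodic shaping preserves the optimal policy exactly.
    return float(s - (N - 1))


def shaped_reward(s, a):
    s2, r = step(s, a)
    return r + GAMMA * phi(s2) - phi(s)


print(value_iteration(base_reward))    # → [1, 1, 1, 1]
print(value_iteration(shaped_reward))  # → [1, 1, 1, 1]  (same optimal policy)
```

Both runs recover the same policy (always move right), but under the shaped reward every step toward the goal yields immediate positive feedback, so value information propagates in fewer iterations; this is the kind of per-sub-problem speedup the abstract appeals to.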
