
Plan-Based Asymptotically Equivalent Reward Shaping

ICLR 2021 · 2021-01-01

Ingmar Schubert, Ozgur S Oguz, Marc Toussaint


Abstract

In high-dimensional state spaces, the usefulness of Reinforcement Learning (RL) is limited by the problem of exploration. This issue has been addressed using potential-based reward shaping (PB-RS) previously. In the present work, we introduce Asymptotically Equivalent Reward Shaping (ASEQ-RS). ASEQ-RS relaxes the strict optimality guarantees of PB-RS to a guarantee of asymptotic equivalence. Being less restrictive, ASEQ-RS allows for reward shaping functions that are even better suited for improving the sample efficiency of RL algorithms. In particular, we consider settings in which the agent has access to an approximate plan. Here, we use examples of simulated robotic manipulation tasks to demonstrate that plan-based ASEQ-RS can indeed significantly improve the sample efficiency of RL over plan-based PB-RS.
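To make the baseline concrete, the snippet below is a minimal sketch of potential-based reward shaping (PB-RS), the scheme whose strict optimality guarantees ASEQ-RS relaxes. The function names, the goal value, and the distance-based potential are illustrative assumptions, not the paper's implementation; the shaping term F(s, s') = γΦ(s') − Φ(s) is the standard PB-RS form.

```python
# Minimal sketch of potential-based reward shaping (PB-RS), the baseline
# that ASEQ-RS relaxes. All names here are illustrative, not from the paper.

def shaped_reward(r, s, s_next, potential, gamma=0.99, done=False):
    """Augment the environment reward r with the PB-RS term
    F(s, s') = gamma * Phi(s') - Phi(s), which preserves the
    optimal policy of the original MDP."""
    phi_next = 0.0 if done else potential(s_next)  # terminal potential is 0
    return r + gamma * phi_next - potential(s)

# Hypothetical plan-based potential: reward progress toward a goal state,
# here simply the negative distance to a 1-D goal position.
goal = 10.0
potential = lambda s: -abs(goal - s)

# Moving from s=4.0 toward the goal at s_next=5.0 yields a positive bonus.
r_shaped = shaped_reward(r=0.0, s=4.0, s_next=5.0, potential=potential)
```

In a plan-based setting, the potential Φ would instead be derived from the approximate plan (e.g. negative remaining plan length), so that transitions that follow the plan receive a positive shaping bonus.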
