Inferring Probabilistic Reward Machines from Non-Markovian Reward Processes for Reinforcement Learning

2021-07-09

Taylor Dohmen, Noah Topper, George Atia, Andre Beckus, Ashutosh Trivedi, Alvaro Velasquez

Abstract

The success of reinforcement learning in typical settings is predicated on Markovian assumptions on the reward signal by which an agent learns optimal policies. In recent years, the use of reward machines has relaxed this assumption by enabling a structured representation of non-Markovian rewards. In particular, such representations can be used to augment the state space of the underlying decision process, thereby facilitating non-Markovian reinforcement learning. However, these reward machines cannot capture the semantics of stochastic reward signals. In this paper, we make progress on this front by introducing probabilistic reward machines (PRMs) as a representation of non-Markovian stochastic rewards. We present an algorithm to learn PRMs from the underlying decision process and prove results concerning its correctness and convergence.
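To make the abstract's core objects concrete, the following Python is a minimal sketch of what a probabilistic reward machine might look like, assuming a finite set of states, a deterministic transition function over observed labels, and a reward distribution attached to each transition rather than a fixed value. The class name, field names, and toy task are illustrative assumptions, not the paper's formal definition; the point is that tracking the machine state alongside the environment state is what the abstract means by augmenting the state space of the decision process.

```python
import random
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class ProbabilisticRewardMachine:
    """Illustrative sketch of a PRM (not the paper's formal definition):
    finite states, label-driven transitions, and a reward *distribution*
    per transition instead of a single deterministic reward."""
    states: List[int]
    initial_state: int
    transitions: Dict[Tuple[int, str], int]                          # (state, label) -> next state
    reward_dists: Dict[Tuple[int, str], List[Tuple[float, float]]]   # (state, label) -> [(reward, prob)]

    def step(self, state: int, label: str) -> Tuple[int, float]:
        """Advance on an observed label and sample a reward from that
        transition's distribution."""
        next_state = self.transitions[(state, label)]
        rewards, probs = zip(*self.reward_dists[(state, label)])
        reward = random.choices(rewards, weights=probs, k=1)[0]
        return next_state, reward


# Toy non-Markovian task (assumed for illustration): reaching "b" only pays
# off after "a" has been observed, and even then the payoff is stochastic
# (1 with probability 0.9, else 0).
prm = ProbabilisticRewardMachine(
    states=[0, 1],
    initial_state=0,
    transitions={(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 1},
    reward_dists={
        (0, "a"): [(0.0, 1.0)],
        (0, "b"): [(0.0, 1.0)],
        (1, "a"): [(0.0, 1.0)],
        (1, "b"): [(1.0, 0.9), (0.0, 0.1)],
    },
)

# Running the machine alongside the environment: the pair
# (environment state, PRM state) is what restores the Markov property.
u, total = prm.initial_state, 0.0
for label in ["b", "a", "b"]:        # observed label sequence from the environment
    u, r = prm.step(u, label)
    total += r
print("final PRM state:", u, "accumulated reward:", total)
```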
