
How do Offline Measures for Exploration in Reinforcement Learning behave?

2020-10-29

Jakob J. Hollenstein, Sayantan Auddy, Matteo Saveriano, Erwan Renaudo, Justus Piater


Abstract

Sufficient exploration is paramount for the success of a reinforcement learning agent. Yet, exploration is rarely assessed in an algorithm-independent way. We compare the behavior of three data-based, offline exploration metrics from the literature on simple, intuitive distributions and highlight problems to be aware of when using them. We propose a fourth metric, uniform relative entropy, and implement it using either a k-nearest-neighbor or a nearest-neighbor-ratio estimator, highlighting that the implementation choices have a profound impact on these measures.
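The abstract mentions a k-nearest-neighbor estimator as one way to implement the proposed uniform relative entropy metric. As an illustration only (not the authors' implementation, whose details are in the paper), the following is a sketch of the standard Kozachenko-Leonenko k-NN differential-entropy estimator that such data-based measures typically build on, assuming SciPy is available:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (in nats).

    samples: (n, d) array of n points in d dimensions.
    Assumes points are distinct; duplicate points give zero neighbor
    distances and break the log term.
    """
    samples = np.asarray(samples, dtype=float)
    n, d = samples.shape
    tree = cKDTree(samples)
    # query k+1 neighbors because the nearest "neighbor" of each point
    # is the point itself (distance 0); take the k-th true neighbor
    dists, _ = tree.query(samples, k=k + 1)
    eps = dists[:, -1]
    # log-volume of the d-dimensional unit ball: pi^(d/2) / Gamma(d/2 + 1)
    log_c_d = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))
```

For example, on a few thousand samples from a 1-D standard normal, the estimate approaches the analytic entropy 0.5*log(2*pi*e) ≈ 1.419 nats. The paper's point, that the choice of estimator (k-NN vs. nearest-neighbor-ratio) profoundly affects the resulting exploration measure, applies directly to choices like `k` here.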
