Density-Based Bonuses on Learned Representations for Reward-Free Exploration in Deep Reinforcement Learning

2021-06-13 · ICML Workshop URL 2021

Omar Darwiche Domingues, Corentin Tallec, Rémi Munos, Michal Valko

Abstract

In this paper, we study the problem of representation learning and exploration in reinforcement learning. We propose a framework for computing exploration bonuses based on density estimation that can be combined with any representation learning method and that allows the agent to explore without extrinsic rewards. In the special case of tabular Markov decision processes (MDPs), this approach mimics the behavior of theoretically sound count-based algorithms. In continuous and partially observable MDPs, the same approach applies by learning a latent representation on which a probability density is estimated.
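The abstract's core idea — a density estimate over latent states whose inverse drives an exploration bonus — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian kernel density estimator, the bandwidth, and the `1/sqrt(n * p)` bonus shape are all assumptions chosen so that, with a one-hot (tabular) representation, the bonus degenerates to the familiar `1/sqrt(N(s))` count-based bonus.

```python
import numpy as np

def kde_density(z, memory, bandwidth=0.5):
    """Gaussian kernel density estimate of latent point z (shape (d,))
    given the array of previously visited latents `memory` (shape (n, d))."""
    if len(memory) == 0:
        return 0.0
    d = z.shape[0]
    sq_dists = np.sum((memory - z) ** 2, axis=1)
    norm = (2.0 * np.pi * bandwidth ** 2) ** (d / 2.0)
    return float(np.mean(np.exp(-sq_dists / (2.0 * bandwidth ** 2))) / norm)

def exploration_bonus(z, memory, bandwidth=0.5):
    """Bonus that decays as the estimated density around z grows.

    Roughly 1/sqrt(n * p(z)): with a tabular (one-hot) representation,
    n * p(z) is proportional to the visit count N(s), recovering a
    1/sqrt(N(s))-style count bonus. (Bonus shape is an assumption.)"""
    n = len(memory)
    p = kde_density(z, memory, bandwidth)
    return 1.0 / np.sqrt(n * p + 1.0)

# Usage: latents far from everything visited so far receive a large bonus,
# while frequently visited regions receive a small one.
memory = np.zeros((50, 2))                      # visited the origin 50 times
b_seen = exploration_bonus(np.zeros(2), memory)       # small bonus
b_new = exploration_bonus(np.array([5.0, 5.0]), memory)  # near-maximal bonus
```

In the paper's framework, `z` would be produced by a learned encoder applied to (possibly partial) observations; here the latents are given directly to keep the sketch self-contained.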
