
Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics

2021-07-04

Nicolò Botteghi, Mannes Poel, Beril Sirmacek, Christoph Brune


Abstract

Deep Reinforcement Learning has shown its ability to solve complicated problems directly from high-dimensional observations. However, in end-to-end settings, Reinforcement Learning algorithms are not sample-efficient and require long training times and large quantities of data. In this work, we propose a framework for sample-efficient Reinforcement Learning that takes advantage of state and action representations to transform a high-dimensional problem into a low-dimensional one. Moreover, we seek to find the optimal policy mapping latent states to latent actions. Because the policy is learned on abstract representations, we enforce, using auxiliary loss functions, the lifting of such a policy to the original problem domain. Results show that the novel framework can efficiently learn low-dimensional and interpretable state and action representations and the optimal latent policy.
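The abstract describes three learned components: a state encoder that compresses high-dimensional observations into low-dimensional latent states, a policy defined directly on those latent states producing latent actions, and a mechanism lifting latent actions back to the original action space. A minimal sketch of this pipeline, using random linear maps as stand-ins for the learned networks (all dimensions and function names below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: high-dimensional observation/action spaces
# mapped to low-dimensional latent spaces (assumed, not from the paper).
OBS_DIM, LATENT_STATE_DIM = 100, 4
ACT_DIM, LATENT_ACT_DIM = 10, 2

# Random linear maps stand in for the learned encoder, latent policy,
# and action decoder that the auxiliary losses would train.
W_state = rng.normal(size=(LATENT_STATE_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)
W_policy = rng.normal(size=(LATENT_ACT_DIM, LATENT_STATE_DIM))
W_decode = rng.normal(size=(ACT_DIM, LATENT_ACT_DIM))

def encode_state(obs):
    """Map a high-dimensional observation to a low-dimensional latent state."""
    return W_state @ obs

def latent_policy(z):
    """Policy defined on the latent state, producing a latent action."""
    return np.tanh(W_policy @ z)

def lift_action(u):
    """Lift the latent action back to the original action space so the
    latent policy can act on the original problem."""
    return W_decode @ u

obs = rng.normal(size=OBS_DIM)   # high-dimensional observation
z = encode_state(obs)            # latent state, shape (LATENT_STATE_DIM,)
u = latent_policy(z)             # latent action, shape (LATENT_ACT_DIM,)
a = lift_action(u)               # executable action, shape (ACT_DIM,)
```

In the actual framework these maps are neural networks trained jointly, with auxiliary losses encouraging the latent dynamics to respect an MDP homomorphism of the original problem; the sketch only illustrates the shape of the data flow.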
