SOTAVerified

Hierarchical model-based policy optimization: from actions to action sequences and back

2019-11-28

Daniel McNamee


Abstract

We develop a normative framework for hierarchical model-based policy optimization based on applying second-order methods in the space of all possible state-action paths. The resulting natural path gradient performs policy updates in a manner that is sensitive to the long-range correlational structure of the induced stationary state-action densities. We demonstrate that the natural path gradient can be computed exactly given an environment dynamics model and depends on expressions akin to higher-order successor representations. In simulation, we show that the prioritization of local policy updates in the resulting policy flow indeed reflects the intuitive state-space hierarchy in several toy problems.
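The abstract appeals to successor-representation-like quantities computed from an environment dynamics model. As a minimal illustrative sketch (not the paper's actual algorithm), the code below builds the first-order successor representation M = (I − γP)⁻¹ for a hypothetical 4-state ring MDP under a uniform-random policy; rows of M encode exactly the kind of long-range occupancy structure that a natural-gradient-style update would use to precondition local policy changes. The MDP, policy, and discount factor are all assumptions for illustration.

```python
import numpy as np

# Hypothetical toy setting: a 4-state ring MDP where a uniform-random
# policy steps clockwise or counter-clockwise with equal probability.
n_states = 4
gamma = 0.95

# Induced state-transition matrix P under the random policy.
P = np.zeros((n_states, n_states))
for s in range(n_states):
    P[s, (s + 1) % n_states] = 0.5
    P[s, (s - 1) % n_states] = 0.5

# First-order successor representation: M = (I - gamma * P)^{-1}.
# M[s, s'] is the expected discounted number of visits to s' starting
# from s; each row sums to 1 / (1 - gamma).
M = np.linalg.inv(np.eye(n_states) - gamma * P)

print(M.round(3))
```

Higher-order analogues (products and powers of such resolvent terms) capture correlations over longer path segments, which is the sense in which the abstract's "higher-order successor representations" generalize this quantity.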
