SOTAVerified

Hierarchical Universal Value Function Approximators

2024-10-11

Rushiv Arora


Abstract

There have been key advancements in building universal approximators for multi-goal collections of reinforcement learning value functions -- key elements in estimating long-term returns of states in a parameterized manner. We extend this work to hierarchical reinforcement learning, using the options framework, by introducing hierarchical universal value function approximators (H-UVFAs). This allows us to leverage the added benefits of scaling, planning, and generalization expected in temporal abstraction settings. We develop supervised and reinforcement learning methods for learning embeddings of the states, goals, options, and actions in the two hierarchical value functions: Q(s, g, o; θ) and Q(s, g, o, a; θ). Finally, we demonstrate generalization of the H-UVFAs and show that they outperform corresponding UVFAs.
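The abstract describes learning separate embeddings for states, goals, options, and actions that together approximate the hierarchical value functions. A minimal sketch of the idea, assuming a simple two-stream dot-product factorization in the style of the original UVFA work (all sizes, names, and the random embedding tables below are hypothetical placeholders, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes -- not taken from the paper.
N_STATES, N_GOALS, N_OPTIONS, D = 6, 4, 3, 8

# Two-stream factorization: Q(s, g, o; θ) ≈ φ(s) · ψ(g, o),
# where φ and ψ are learned embedding tables. Here they are
# random placeholders standing in for trained parameters.
phi = rng.normal(size=(N_STATES, D))            # state embeddings φ(s)
psi = rng.normal(size=(N_GOALS, N_OPTIONS, D))  # joint goal-option embeddings ψ(g, o)

def q_value(s: int, g: int, o: int) -> float:
    """Approximate the option-value Q(s, g, o) as an embedding dot product."""
    return float(phi[s] @ psi[g, o])

# In practice the embeddings would be fit by supervised regression to
# value targets or by temporal-difference learning, as the abstract notes.
q = q_value(2, 1, 0)
```

The action-level function Q(s, g, o, a; θ) would extend the same scheme with an additional action embedding; the factorized form is what lets the approximator generalize across goal-option combinations never seen during training.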
