
DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents

2025-02-04

Shashank Sharma, Janina Hoffmann, Vinay Namboodiri


Abstract

Hierarchical Reinforcement Learning (HRL) agents often struggle with long-horizon visual planning due to their reliance on error-prone distance metrics. We propose Discrete Hierarchical Planning (DHP), a method that replaces continuous distance estimates with discrete reachability checks to evaluate subgoal feasibility. DHP recursively constructs tree-structured plans by decomposing long-term goals into sequences of simpler subtasks, using a novel advantage estimation strategy that inherently rewards shorter plans and generalizes beyond training depths. In addition, to address the data efficiency challenge, we introduce an exploration strategy that generates targeted training examples for the planning modules without needing expert data. Experiments in 25-room navigation environments demonstrate a 100% success rate (vs 82% baseline) and a 73-step average episode length (vs 158-step baseline). The method also generalizes to momentum-based control tasks and requires only N steps for replanning. Theoretical analysis and ablations validate our design choices.
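The recursive decomposition described in the abstract can be sketched in a minimal form: a goal is split into a subgoal pair and each half is decomposed further until a discrete reachability check accepts the segment. This is an illustrative sketch only — `propose_subgoal`, `is_reachable`, and the tuple-based plan tree are hypothetical stand-ins, not the paper's actual interfaces.

```python
def construct_plan(state, goal, propose_subgoal, is_reachable,
                   depth=0, max_depth=5):
    """Recursively build a tree-structured plan (illustrative sketch).

    propose_subgoal(state, goal) -> an intermediate subgoal
                                    (e.g. from a learned subgoal policy)
    is_reachable(state, goal)    -> bool, discrete reachability check
                                    replacing a continuous distance metric
    Returns a leaf goal or a nested (left_subtree, subgoal, right_subtree)
    tuple; an in-order traversal yields the subgoal sequence to execute.
    """
    # Leaf case: the goal is directly reachable, or the depth budget ran out.
    if is_reachable(state, goal) or depth >= max_depth:
        return goal
    mid = propose_subgoal(state, goal)
    # Plan from the current state to the subgoal, then subgoal to goal.
    left = construct_plan(state, mid, propose_subgoal, is_reachable,
                          depth + 1, max_depth)
    right = construct_plan(mid, goal, propose_subgoal, is_reachable,
                           depth + 1, max_depth)
    return (left, mid, right)


# Toy 1-D example: states are integers, a step of size <= 1 is "reachable",
# and the proposed subgoal is the midpoint.
plan = construct_plan(0, 4,
                      propose_subgoal=lambda s, g: (s + g) // 2,
                      is_reachable=lambda s, g: abs(g - s) <= 1)
```

In the toy run above the plan tree bottoms out once every segment is within one step, mirroring how a reachability check (rather than a distance estimate) terminates the recursion.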
