
Scalable Online Exploration via Coverability

2024-03-11

Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy


Abstract

Exploration is a major challenge in reinforcement learning, especially for high-dimensional domains that require function approximation. We propose exploration objectives (policy optimization objectives that enable downstream maximization of any reward function) as a conceptual framework to systematize the study of exploration. Within this framework, we introduce a new objective, L_1-Coverage, which generalizes previous exploration schemes and supports three fundamental desiderata:

1. Intrinsic complexity control. L_1-Coverage is associated with a structural parameter, L_1-Coverability, which reflects the intrinsic statistical difficulty of the underlying MDP, subsuming Block and Low-Rank MDPs.

2. Efficient planning. For a known MDP, optimizing L_1-Coverage efficiently reduces to standard policy optimization, allowing flexible integration with off-the-shelf methods such as policy gradient and Q-learning approaches.

3. Efficient exploration. L_1-Coverage enables the first computationally efficient model-based and model-free algorithms for online (reward-free or reward-driven) reinforcement learning in MDPs with low coverability.

Empirically, we find that L_1-Coverage effectively drives off-the-shelf policy optimization algorithms to explore the state space.
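The planning reduction in desideratum 2 lends itself to a simple iterative scheme: repeatedly estimate the state-action occupancy of the current policy mixture, convert low occupancy into an exploration bonus, and hand that bonus to an off-the-shelf planner. The sketch below illustrates this loop in a small tabular MDP. It is a hypothetical illustration, not the paper's algorithm: the bonus form 1/(d_mix + eps) and all function names are assumptions standing in for the actual L_1-Coverage objective.

```python
import numpy as np

# Hypothetical sketch of coverage-driven exploration in a random tabular MDP.
# The inverse-occupancy bonus is an illustrative stand-in for the paper's
# L_1-Coverage objective, not its exact form.

rng = np.random.default_rng(0)
S, A, H = 10, 2, 5                               # states, actions, horizon
P = rng.dirichlet(np.ones(S), size=(S, A))       # P[s, a] = distribution over next states
init = np.zeros(S)
init[0] = 1.0                                    # always start in state 0

def occupancy(policy):
    """Average state-action occupancy d^pi of a stationary policy over H steps."""
    d = np.zeros((S, A))
    s_dist = init.copy()
    for _ in range(H):
        sa = s_dist[:, None] * policy            # joint (state, action) distribution
        d += sa / H
        s_dist = np.einsum("sa,sat->t", sa, P)   # push forward through the dynamics
    return d

def plan(reward):
    """Finite-horizon value iteration; returns a greedy deterministic policy."""
    V = np.zeros(S)
    for _ in range(H):
        Q = reward + P @ V                       # Q[s, a] = r(s, a) + E[V(s')]
        V = Q.max(axis=1)
    policy = np.zeros((S, A))
    policy[np.arange(S), Q.argmax(axis=1)] = 1.0
    return policy

eps = 1e-2
mixture = []                                     # policies discovered so far
d_mix = np.zeros((S, A))                         # occupancy of the uniform mixture
for _ in range(20):
    bonus = 1.0 / (d_mix + eps)                  # reward what the mixture visits rarely
    mixture.append(plan(bonus))
    d_mix = np.mean([occupancy(pi) for pi in mixture], axis=0)

print(f"states reached by the mixture: {(d_mix.sum(axis=1) > 1e-3).sum()} / {S}")
```

Each round, the greedy planner is pulled toward regions the current mixture visits rarely, so the mixture's occupancy spreads across the state space over successive rounds; this is the qualitative behavior the abstract reports for policy optimizers driven by L_1-Coverage.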
