
Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation

2022-10-03

Dan Qiao, Yu-Xiang Wang

Abstract

We study the problem of deployment-efficient reinforcement learning (RL) with linear function approximation under the reward-free exploration setting. This problem is well motivated because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension d and planning horizon H, we propose a new algorithm that collects at most Õ(d^2 H^5 / ε^2) trajectories within H deployments to identify an ε-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal d dependence in sample complexity at the same time, even when the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, both of which could be of independent interest. Lastly, we analyze the related problem of regret minimization in low-adaptive RL and provide information-theoretic lower bounds for switching cost and batch complexity.
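The paper's generalized G-optimal experiment design builds on the classical G-optimal design over a finite set of feature vectors. As background (this is the standard finite-arm construction, not the paper's generalized version; the function name and parameters are illustrative), here is a minimal sketch of computing such a design with Frank-Wolfe iterations:

```python
import numpy as np

def g_optimal_design(X, n_iters=1000, tol=1e-6):
    """Frank-Wolfe (Fedorov-Wynn) iterations for classical G-optimal design.

    X: (K, d) array whose rows are the candidate feature vectors.
    Returns a distribution pi over the rows of X that approximately
    minimizes the worst-case leverage max_x x^T M(pi)^{-1} x.
    """
    K, d = X.shape
    pi = np.full(K, 1.0 / K)  # start from the uniform design
    for _ in range(n_iters):
        # Design matrix M(pi) = sum_i pi_i x_i x_i^T
        M = X.T @ (pi[:, None] * X)
        Minv = np.linalg.pinv(M)
        # Leverage scores g_i = x_i^T M(pi)^{-1} x_i
        g = np.einsum('ij,jk,ik->i', X, Minv, X)
        k = np.argmax(g)
        # Kiefer-Wolfowitz optimality: at the optimum, max leverage equals d
        if g[k] <= d + tol:
            break
        # Closed-form line-search step size for the D/G-optimal objective
        gamma = (g[k] / d - 1.0) / (g[k] - 1.0)
        pi = (1.0 - gamma) * pi
        pi[k] += gamma
    return pi
```

By the Kiefer-Wolfowitz equivalence theorem, the returned design keeps the worst-case uncertainty x^T M(pi)^{-1} x near d, which is why G-optimal exploration distributions are a natural tool for obtaining sharp d dependence in linear settings.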
