Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
Ming Yin, Yu-Xiang Wang
Abstract
This work studies the statistical limits of uniform convergence for offline policy evaluation (OPE) problems with model-based methods (for episodic MDP) and provides a unified framework towards optimal learning for several well-motivated offline tasks. Uniform OPE $\sup_{\pi\in\Pi}\|Q^{\pi}-\widehat{Q}^{\pi}\|<\epsilon$ is a stronger measure than the point-wise OPE and ensures offline learning when $\Pi$ contains all policies (the global class). In this paper, we establish an $\Omega(H^2 S/d_m\epsilon^2)$ lower bound (over the model-based family) for the global uniform OPE, and our main result establishes an upper bound of $\widetilde{O}(H^2/d_m\epsilon^2)$ for the local uniform convergence that applies to all near-empirically optimal policies for MDPs with stationary transitions. Here $d_m$ denotes the minimal marginal state-action probability. Critically, the key to achieving the optimal rate $\widetilde{O}(H^2/d_m\epsilon^2)$ is our design of the singleton absorbing MDP, a new sharp analysis tool that works with the model-based approach. We generalize this model-based framework to two new settings, offline task-agnostic and offline reward-free learning, with optimal sample complexity $\widetilde{O}(H^2\log(K)/d_m\epsilon^2)$ ($K$ is the number of tasks) and $\widetilde{O}(H^2 S/d_m\epsilon^2)$, respectively. These results provide a unified solution for simultaneously solving different offline RL problems.
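For quick reference, the display below is a minimal LaTeX restatement of the uniform OPE criterion and the rates quoted in the abstract, assuming the paper's notation ($H$ horizon, $S$ number of states, $d_m$ minimal marginal state-action probability, $\epsilon$ target accuracy, $K$ number of tasks); it paraphrases the bounds above rather than stating any additional results.

\[
\text{Uniform OPE over } \Pi:\quad \sup_{\pi \in \Pi}\,\bigl\| Q^{\pi} - \widehat{Q}^{\pi} \bigr\|_{\infty} < \epsilon .
\]
\[
\text{Global uniform OPE (lower bound): } \Omega\!\Bigl(\tfrac{H^{2} S}{d_m \epsilon^{2}}\Bigr), \qquad
\text{Local uniform OPE (upper bound): } \widetilde{O}\!\Bigl(\tfrac{H^{2}}{d_m \epsilon^{2}}\Bigr),
\]
\[
\text{Task-agnostic: } \widetilde{O}\!\Bigl(\tfrac{H^{2}\log K}{d_m \epsilon^{2}}\Bigr), \qquad
\text{Reward-free: } \widetilde{O}\!\Bigl(\tfrac{H^{2} S}{d_m \epsilon^{2}}\Bigr).
\]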