Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm
Lin Chen, Bruno Scherrer, Peter L. Bartlett
Abstract
In this paper, we investigate the sample complexity of policy evaluation in infinite-horizon offline reinforcement learning (also known as the off-policy evaluation problem) with linear function approximation. We identify a hard regime $d\gamma^2 > 1$, where $d$ is the dimension of the feature vector and $\gamma$ is the discount rate. In this regime, for any $q \in [\gamma^2, 1]$, we can construct a hard instance such that the smallest eigenvalue of its feature covariance matrix is $q/d$, and it requires $\Omega\left(\frac{d}{\gamma^2(q-\gamma^2)\varepsilon^2}\exp\left(\Theta(d\gamma^2)\right)\right)$ samples to approximate the value function up to an additive error $\varepsilon$. Note that this lower bound on the sample complexity is exponential in $d$. If $q = \gamma^2$, even infinite data cannot suffice. Under the low distribution shift assumption, we show that there is an algorithm that needs at most $O\left(\max\left\{\frac{\|\theta^\pi\|_2^4}{\varepsilon^4}d,\ \frac{1}{\varepsilon^2}\left(d + \log\frac{1}{\delta}\right)\right\}\right)$ samples ($\theta^\pi$ is the parameter of the policy in linear function approximation) and guarantees an approximation of the value function up to an additive error of $\varepsilon$ with probability at least $1 - \delta$.
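To make the scaling in the lower bound concrete, below is a minimal numeric sketch (not part of the paper) that evaluates the bound $\Omega\left(\frac{d}{\gamma^2(q-\gamma^2)\varepsilon^2}\exp(\Theta(d\gamma^2))\right)$ for a few values of $d$. The constant hidden inside $\Theta(\cdot)$ is not specified by the abstract, so it is set to 1 purely for illustration, and the helper name `lower_bound_samples` is hypothetical.

```python
import math

def lower_bound_samples(d, gamma, q, eps, c=1.0):
    """Evaluate d / (gamma^2 (q - gamma^2) eps^2) * exp(c * d * gamma^2),
    the reconstructed lower bound with the unknown Theta(.) constant set to c."""
    assert d * gamma**2 > 1, "the hard regime requires d * gamma^2 > 1"
    # At q = gamma^2 the bound diverges: even infinite data cannot suffice.
    assert gamma**2 < q <= 1, "q must lie in (gamma^2, 1]"
    return d / (gamma**2 * (q - gamma**2) * eps**2) * math.exp(c * d * gamma**2)

# The required sample size blows up exponentially in d once d * gamma^2 > 1:
for d in (10, 50, 100):
    print(d, f"{lower_bound_samples(d, gamma=0.9, q=0.9, eps=0.1):.3e}")
```

Running this sketch shows the exponential dependence on $d$ claimed in the abstract: increasing $d$ from 10 to 100 inflates the bound by dozens of orders of magnitude, whereas the $1/\varepsilon^2$ and $1/(q-\gamma^2)$ factors contribute only polynomially.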