Order-Optimal Instance-Dependent Bounds for Offline Reinforcement Learning with Preference Feedback
Zhirui Chen, Vincent Y. F. Tan
Abstract
We consider offline reinforcement learning (RL) with preference feedback in which the implicit reward is a linear function of an unknown parameter. Given an offline dataset, our objective is to ascertain the optimal action for each state, with the ultimate goal of minimizing the simple regret. We propose an algorithm, RL with Locally Optimal Weights or RL-LOW, which yields a simple regret of $\exp(-\Omega(n/H))$, where $n$ is the number of data samples and $H$ denotes an instance-dependent hardness quantity that depends explicitly on the suboptimality gap of each action. Furthermore, we derive a first-of-its-kind instance-dependent lower bound for offline RL with preference feedback. Interestingly, we observe that the lower and upper bounds on the simple regret match order-wise in the exponent, demonstrating the order-wise optimality of RL-LOW. In view of privacy considerations in practical applications, we also extend RL-LOW to the setting of $(\varepsilon,\delta)$-differential privacy and show, somewhat surprisingly, that the hardness parameter $H$ is unchanged in the asymptotic regime as $n$ tends to infinity; this underscores the inherent efficiency of RL-LOW in terms of preserving the privacy of the observed rewards. Given our focus on establishing instance-dependent bounds, our work stands in stark contrast to previous works that focus on establishing worst-case regrets for offline RL with preference feedback.
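To make the problem setting concrete, the sketch below simulates offline preference feedback under a linear implicit reward and recommends a per-state action, whose quality is measured by the simple regret. It assumes a Bradley-Terry-style comparison model and a known feature map `phi` (standard in this literature, though the paper's exact model may differ), and it fits the reward parameter with a generic logistic-regression MLE; this is a hypothetical baseline for illustration, not the paper's RL-LOW estimator, which instead reweights samples with locally optimal weights to achieve the $\exp(-\Omega(n/H))$ guarantee.

```python
import numpy as np

# Hypothetical setup: d-dimensional features phi(s, a); implicit reward
# r(s, a) = phi(s, a) @ theta_star for an unknown parameter theta_star.
rng = np.random.default_rng(0)
d, n_states, n_actions, n = 5, 4, 3, 2000
theta_star = rng.normal(size=d)
phi = rng.normal(size=(n_states, n_actions, d))  # feature map, assumed known

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Offline dataset: random (state, action-pair) queries with preference labels
# drawn from an assumed Bradley-Terry model,
# P(a preferred over b in state s) = sigmoid(r(s, a) - r(s, b)).
s = rng.integers(n_states, size=n)
a = rng.integers(n_actions, size=n)
b = rng.integers(n_actions, size=n)
diff = phi[s, a] - phi[s, b]  # feature differences of the compared actions
y = (rng.random(n) < sigmoid(diff @ theta_star)).astype(float)

# Generic MLE baseline: logistic regression on preference labels via
# gradient ascent on the log-likelihood (NOT the RL-LOW estimator).
theta = np.zeros(d)
for _ in range(500):
    grad = diff.T @ (y - sigmoid(diff @ theta)) / n
    theta += 1.0 * grad

# Recommend the empirically best action per state; the simple regret in
# state s is the true reward gap r(s, a*(s)) - r(s, a_hat(s)).
rewards_hat = phi @ theta
rewards_true = phi @ theta_star
a_hat = rewards_hat.argmax(axis=1)
regret = rewards_true.max(axis=1) - rewards_true[np.arange(n_states), a_hat]
print("per-state simple regret:", np.round(regret, 4))
```

Under the paper's instance-dependent analysis, the probability of recommending a suboptimal action decays as $\exp(-\Omega(n/H))$, with the hardness $H$ governed by the per-action suboptimality gaps; in this sketch, increasing `n` should drive the printed regrets to zero accordingly.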