Optimism in Reinforcement Learning with Generalized Linear Function Approximation
2019-12-09 · ICLR 2021
Yining Wang, Ruosong Wang, Simon S. Du, Akshay Krishnamurthy
Abstract
We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation. We analyze the algorithm under a new expressivity assumption that we call "optimistic closure," which is strictly weaker than assumptions from prior analyses of the linear setting. Under optimistic closure, we prove that our algorithm enjoys a regret bound of Õ(√(d³T)), where d is the dimensionality of the state-action features and T is the number of episodes. This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions.
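To make the flavor of such an algorithm concrete, here is a minimal sketch (not the paper's exact procedure) of an optimistic action-value estimate in the generalized-linear setting: the greedy estimate f(wᵀφ(s,a)) is inflated by an elliptical confidence bonus β‖φ(s,a)‖_{Λ⁻¹}, where Λ is the regularized feature covariance. The logistic link, the bonus scale β, and all variable names are illustrative assumptions.

```python
import numpy as np

def optimistic_q(phi, w, Lam, beta,
                 link=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """Illustrative optimistic Q-value for one feature vector phi.

    Returns f(w^T phi) + beta * ||phi||_{Lam^{-1}}, where f is a
    generalized-linear link (logistic here, purely an assumption),
    Lam is the regularized feature covariance matrix, and beta
    scales the exploration bonus.
    """
    # Elliptical-norm bonus: sqrt(phi^T Lam^{-1} phi).
    bonus = beta * np.sqrt(phi @ np.linalg.solve(Lam, phi))
    return link(w @ phi) + bonus

# Toy usage with d = 3 features and an identity-regularized covariance.
d = 3
rng = np.random.default_rng(0)
phi = rng.normal(size=d)
w = rng.normal(size=d)
Lam = np.eye(d)  # lambda * I before any data has been observed
q_opt = optimistic_q(phi, w, Lam, beta=1.0)
```

The agent would act greedily with respect to this optimistic estimate; because the bonus shrinks as Λ accumulates observed features, exploration is automatically directed toward poorly covered directions of feature space.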