Model-based Offline Reinforcement Learning with Count-based Conservatism
Byeongchan Kim, Min-hwan Oh
Abstract
In this paper, we propose a model-based offline reinforcement learning method that integrates count-based conservatism, named Count-MORL. Our method utilizes the count estimates of state-action pairs to quantify model estimation error, making it, to the best of our knowledge, the first algorithm to demonstrate the efficacy of count-based conservatism in model-based offline deep RL. For our proposed method, we first show that the estimation error is inversely proportional to the frequency of state-action pairs. Second, we demonstrate that the learned policy under the count-based conservative model offers near-optimal performance guarantees. Through extensive numerical experiments, we validate that Count-MORL with hash code implementation significantly outperforms existing offline RL algorithms on the D4RL benchmark datasets. The code is accessible at https://github.com/oh-lab/Count-MORL.
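To make the abstract's idea concrete, below is a minimal sketch of count-based conservatism with hashed state-action counts. It assumes a SimHash-style random-projection discretization and a `beta / sqrt(n + 1)` penalty; both are common illustrative choices, not necessarily the exact hash or penalty form used in Count-MORL (see the official repository for the authors' implementation).

```python
import numpy as np

class CountBasedPenalty:
    """Illustrative count-based conservatism via hashing.

    Continuous state-action pairs are mapped to discrete codes with a
    random sign projection (SimHash-style), and a per-code visit count
    is maintained. The penalty shrinks as the count grows, mirroring
    the abstract's claim that model estimation error is inversely
    related to state-action frequency. The 1/sqrt(n) form is an
    assumption for illustration.
    """

    def __init__(self, dim: int, n_bits: int = 32,
                 beta: float = 1.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((n_bits, dim))  # random projection
        self.beta = beta                             # penalty scale
        self.counts: dict[bytes, int] = {}

    def _hash(self, s: np.ndarray, a: np.ndarray) -> bytes:
        x = np.concatenate([s, a])
        return np.packbits(self.A @ x > 0).tobytes()  # sign-bit code

    def update(self, s: np.ndarray, a: np.ndarray) -> None:
        key = self._hash(s, a)
        self.counts[key] = self.counts.get(key, 0) + 1

    def penalty(self, s: np.ndarray, a: np.ndarray) -> float:
        n = self.counts.get(self._hash(s, a), 0)
        return self.beta / np.sqrt(n + 1)  # rarer pair -> larger penalty
```

In a model-based offline RL loop of this style, the penalty would be subtracted from the learned model's predicted reward (e.g., `r_conservative = r_hat - pen.penalty(s, a)`) so that the policy is discouraged from exploiting state-action regions the dataset covers only sparsely.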