Critic Regularized Regression
Ziyu Wang, Alexander Novikov, Konrad Zolna, Jost Tobias Springenberg, Scott Reed, Bobak Shahriari, Noah Siegel, Josh Merel, Caglar Gulcehre, Nicolas Heess, Nando de Freitas
Code
- github.com/facebookresearch/ReAgent (PyTorch, ★ 3,690)
- github.com/facebookresearch/Horizon (PyTorch, ★ 3,686)
- github.com/deepmind/rgb_stacking (★ 129)
- github.com/sail-sg/offbench (JAX, ★ 15)
- github.com/ray-project/ray/tree/master/rllib (★ 0)
Abstract
Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
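At its core, critic-regularized regression trains the policy by behavioral cloning on the dataset actions, reweighted by a critic: actions the critic judges better than the policy's average are imitated, the rest are down-weighted or filtered out. The sketch below illustrates this idea in NumPy for a single discrete-action state; the function names, the `"binary"`/exponential weight variants, and the clipping constant are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def crr_weight(q_row, policy_probs, action, beta=1.0, mode="binary"):
    """Regression weight for one (state, action) pair from the dataset.

    q_row: critic's Q-values for every action in this state.
    policy_probs: the current policy's action probabilities in this state.
    Advantage: A(s, a) = Q(s, a) - E_{a'~pi}[Q(s, a')].
    """
    value = np.dot(policy_probs, q_row)   # V(s) under the current policy
    advantage = q_row[action] - value
    if mode == "binary":                  # indicator variant: 1[A > 0]
        return float(advantage > 0.0)
    # exponential variant: exp(A / beta), clipped for stability (illustrative cap)
    return min(np.exp(advantage / beta), 20.0)

def crr_policy_loss(logits, q_row, action, beta=1.0, mode="binary"):
    """Critic-weighted negative log-likelihood: -w(s, a) * log pi(a|s)."""
    probs = softmax(logits)
    w = crr_weight(q_row, probs, action, beta, mode)
    return -w * np.log(probs[action])

# Example: under a uniform policy, action 2 has above-average Q and is
# imitated, while action 0 has negative advantage and is filtered out.
logits = np.zeros(3)
q_row = np.array([0.0, 1.0, 2.0])
print(crr_policy_loss(logits, q_row, action=2))  # -log(1/3), imitate
print(crr_policy_loss(logits, q_row, action=0))  # 0.0, filtered
```

Minimizing this loss over the dataset pushes probability mass toward dataset actions the critic prefers, which is what keeps the learned policy close to well-supported behavior rather than extrapolating to out-of-distribution actions.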