Benchmarking Batch Deep Reinforcement Learning Algorithms
Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, Joelle Pineau
Abstract
Widely used deep reinforcement learning algorithms have been shown to fail in the batch setting, i.e., learning from a fixed dataset without further interaction with the environment. Following this result, several papers have reported reasonable performance under a variety of environments and batch settings. In this paper, we benchmark the performance of recent off-policy and batch reinforcement learning algorithms under unified settings on the Atari domain, with data generated by a single partially trained behavioral policy. We find that under these conditions, many of these algorithms underperform both DQN trained online with the same amount of data and the partially trained behavioral policy itself. To introduce a strong baseline, we adapt the Batch-Constrained Q-learning (BCQ) algorithm to the discrete-action setting, and show that it outperforms all existing algorithms on this task.
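The discrete-action adaptation of BCQ mentioned above constrains the greedy policy to actions that are sufficiently likely under a learned model of the behavioral policy. A minimal sketch of that action-selection rule is below; the function name, the NumPy formulation, and the example values of `q_values`, `behavior_probs`, and the threshold `tau` are illustrative assumptions, not the paper's exact implementation (which operates on neural-network outputs):

```python
import numpy as np

def bcq_select_action(q_values, behavior_probs, tau=0.3):
    """Sketch of discrete BCQ action selection.

    q_values:       estimated Q-values for each action (1-D array)
    behavior_probs: estimated probability of each action under the
                    behavioral policy (1-D array, e.g. from a cloned
                    behavior network)
    tau:            threshold hyperparameter; actions whose probability
                    is below tau times that of the most likely action
                    are masked out before the greedy argmax
    """
    # Probability of each action relative to the most likely action.
    ratio = behavior_probs / behavior_probs.max()
    # Mask unlikely actions by sending their Q-values to -inf,
    # then act greedily over the remaining actions.
    masked_q = np.where(ratio >= tau, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: the unconstrained greedy choice would be action 1 (Q = 5.0),
# but it is rare under the behavioral policy, so BCQ masks it out and
# picks the best remaining action instead.
action = bcq_select_action(
    np.array([1.0, 5.0, 2.0]),    # Q-values
    np.array([0.7, 0.05, 0.25]),  # behavioral probabilities
    tau=0.3,
)
```

Setting `tau = 0` recovers standard Q-learning (no constraint), while `tau = 1` reduces the policy to imitating the most likely behavioral action, so the threshold interpolates between the two.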