SOTAVerified

Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning

2022-01-14

Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler


Abstract

Offline reinforcement learning (RL) algorithms are often designed with environments such as MuJoCo in mind, in which the planning horizon is extremely long and no noise exists. We compare model-free, model-based, and hybrid offline RL approaches on various industrial benchmark (IB) datasets to test the algorithms in settings closer to real-world problems, including complex noise and partially observable states. We find that on the IB, hybrid approaches face severe difficulties, and that simpler algorithms, such as rollout-based algorithms or model-free algorithms with simpler regularizers, perform best on the datasets.
