
Diminishing Return of Value Expansion Methods

2024-12-29

Daniel Palenicek, Michael Lutter, João Carvalho, Daniel Dennert, Faran Ahmad, Jan Peters


Abstract

Model-based reinforcement learning aims to increase sample efficiency, but the accuracy of dynamics models and the resulting compounding errors are often regarded as key limitations. This paper empirically investigates the potential sample efficiency gains from improved dynamics models in model-based value expansion methods. Our study reveals two key findings when using oracle dynamics models to eliminate compounding errors. First, longer rollout horizons enhance sample efficiency, but the improvements quickly diminish with each additional expansion step. Second, increased model accuracy only marginally improves sample efficiency compared to learned models with identical horizons. These diminishing returns are particularly noteworthy when compared to model-free value expansion methods, which achieve comparable performance without the computational overhead. Our results suggest that the limitation of model-based value expansion methods cannot be attributed to model accuracy: although higher accuracy is beneficial, even perfect models do not yield unrivaled sample efficiency, so the bottleneck must lie elsewhere. These findings challenge the common assumption that model accuracy is the primary constraint in model-based reinforcement learning.
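For readers unfamiliar with the mechanism the abstract refers to, the sketch below illustrates an H-step model-based value expansion (MVE-style) TD target: a dynamics model, learned or oracle, is rolled forward under the current policy, discounted model rewards are accumulated, and the critic bootstraps at the final imagined state. This is a minimal illustration, not the paper's code; the names `dynamics_model`, `policy`, `reward_fn`, and `value_fn` are placeholder callables assumed for this example.

```python
def value_expansion_target(state, dynamics_model, policy, reward_fn,
                           value_fn, horizon, gamma=0.99):
    """H-step model-based value expansion (MVE-style) TD target.

    Rolls the dynamics model (learned or oracle) forward `horizon` steps
    under the current policy, accumulating discounted rewards, then
    bootstraps with the critic at the final imagined state. With
    horizon=0 this reduces to the standard model-free TD target.
    """
    target, s = 0.0, state
    for t in range(horizon):
        a = policy(s)                        # action from the current policy
        target += (gamma ** t) * reward_fn(s, a)
        s = dynamics_model(s, a)             # one-step model prediction
    return target + (gamma ** horizon) * value_fn(s)  # critic bootstrap


# Toy usage with scalar placeholders (purely illustrative):
if __name__ == "__main__":
    oracle = lambda s, a: s + a              # stand-in "perfect" dynamics
    print(value_expansion_target(
        state=0.0, dynamics_model=oracle, policy=lambda s: 1.0,
        reward_fn=lambda s, a: -abs(s), value_fn=lambda s: 0.0, horizon=5))
```

In these terms, the paper's first finding concerns increasing `horizon`, and its second concerns swapping a learned `dynamics_model` for an oracle one; the abstract reports diminishing returns from both.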
