
Is Q-learning an Ill-posed Problem?

2025-02-20

Philipp Wissmann, Daniel Hein, Steffen Udluft, Thomas Runkler


Abstract

This paper investigates the instability of Q-learning in continuous environments, a challenge frequently encountered by practitioners. Traditionally, this instability is attributed to bootstrapping and regression model errors. Using a representative reinforcement learning benchmark, we systematically examine the effects of bootstrapping and model inaccuracies by incrementally eliminating these potential error sources. Our findings reveal that even in relatively simple benchmarks, the fundamental task of Q-learning, iteratively learning a Q-function from policy-specific target values, can be inherently ill-posed and prone to failure. These insights cast doubt on the reliability of Q-learning as a universal solution for reinforcement learning problems.
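The bootstrapping step the abstract refers to can be illustrated with a minimal sketch (this is not the authors' code; the function and variable names are hypothetical). It shows why the regression target itself depends on the current Q-estimate: each target value `r + γ · max_a' Q(s', a')` is recomputed from the model being fitted, which is the feedback loop the paper scrutinizes.

```python
import numpy as np

def q_learning_targets(q_func, transitions, gamma=0.99):
    """Compute bootstrapped Q-learning targets for a batch of
    (state, action, reward, next_state) transitions.

    q_func: callable mapping a state to a vector of Q-values,
            one entry per action (an assumed interface).
    """
    targets = []
    for state, action, reward, next_state in transitions:
        # Bootstrapping: the target is built from the current
        # Q-estimate at the next state, not from ground truth.
        targets.append(reward + gamma * np.max(q_func(next_state)))
    return np.array(targets)

# Toy example: a tabular Q-function over 2 states and 2 actions,
# initialized to zero, so the targets reduce to the immediate rewards.
q_table = np.zeros((2, 2))
transitions = [(0, 1, 1.0, 1), (1, 0, 0.5, 0)]
targets = q_learning_targets(lambda s: q_table[s], transitions, gamma=0.9)
```

In a fitted Q-iteration loop, a regression model would be trained on these targets and then used as `q_func` for the next iteration; errors in the fit feed back into the targets, which is one of the error sources the paper isolates.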
