SOTAVerified

Convex Is Back: Solving Belief MDPs With Convexity-Informed Deep Reinforcement Learning

2025-02-13

Daniel Koutas, Daniel Hettegger, Kostas G. Papakonstantinou, Daniel Straub


Abstract

We present a novel method for Deep Reinforcement Learning (DRL) that incorporates the convexity of the value function over the belief space in Partially Observable Markov Decision Processes (POMDPs). We introduce hard- and soft-enforced convexity as two different approaches, and compare their performance against standard DRL on two well-known POMDP environments, namely the Tiger and FieldVisionRockSample problems. Our findings show that including the convexity feature can substantially increase the performance of the agents and improve robustness across the hyperparameter space, especially when testing on out-of-distribution domains. The source code for this work can be found at https://github.com/Dakout/Convex_DRL.
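The soft-enforced variant described in the abstract can be pictured as a penalty on sampled violations of Jensen's inequality over the belief simplex: a convex value function must satisfy V(λb₁ + (1−λ)b₂) ≤ λV(b₁) + (1−λ)V(b₂) for any beliefs b₁, b₂ and λ ∈ [0, 1]. The sketch below is illustrative only; the function name, sampling scheme, and penalty form are assumptions for exposition, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def convexity_penalty(V, beliefs, n_pairs=256, seed=0):
    """Soft convexity penalty: mean Jensen-inequality violation of V
    over randomly sampled belief pairs and mixing weights.

    Illustrative sketch only -- the name and signature are assumptions,
    not the paper's API.  `V` maps a batch of beliefs (shape (N, S))
    to values (shape (N,)); `beliefs` are points on the simplex.
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(beliefs), n_pairs)
    j = rng.integers(0, len(beliefs), n_pairs)
    lam = rng.random((n_pairs, 1))
    b1, b2 = beliefs[i], beliefs[j]
    b_mix = lam * b1 + (1 - lam) * b2   # convex mixtures stay in the simplex
    # Positive gap = convexity violated at this sampled mixture.
    gap = V(b_mix) - (lam[:, 0] * V(b1) + (1 - lam[:, 0]) * V(b2))
    return np.mean(np.maximum(gap, 0.0))
```

In training, such a penalty would be added to the usual DRL loss with some weight, nudging the learned value network toward convexity; the hard-enforced alternative instead builds convexity into the architecture itself (e.g., input-convex networks with non-negative weights on later layers), so no penalty term is needed.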
