Anytime Incremental ρPOMDP Planning in Continuous Spaces
Ron Benchetrit, Idan Lev-Yehudi, Andrey Zhitnikov, Vadim Indelman
Abstract
Partially Observable Markov Decision Processes (POMDPs) provide a robust framework for decision-making under uncertainty in applications such as autonomous driving and robotic exploration. Their extension, ρPOMDPs, introduces belief-dependent rewards, enabling explicit reasoning about uncertainty. Existing online ρPOMDP solvers for continuous spaces rely on fixed belief representations, limiting adaptability and refinement, which is critical for tasks such as information gathering. We present ρPOMCPOW, an anytime solver that dynamically refines belief representations, with formal guarantees of improvement over time. To mitigate the high computational cost of updating belief-dependent rewards, we propose a novel incremental computation approach. We demonstrate its effectiveness for common entropy estimators, reducing computational cost by orders of magnitude. Experimental results show that ρPOMCPOW outperforms state-of-the-art solvers in both efficiency and solution quality.
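To illustrate the kind of saving an incremental computation can deliver, the sketch below shows a generic particle-based kernel-density entropy estimate. This is not the paper's method; the kernel, bandwidth `h`, and class `IncrementalEntropy` are all illustrative assumptions. The point is the complexity argument: recomputing the estimate from scratch after adding a particle costs O(n²) kernel evaluations, whereas caching the per-particle kernel sums lets each new particle be absorbed in O(n).

```python
import numpy as np

def kernel(x, y, h=0.5):
    """Isotropic Gaussian kernel (normalization constant omitted; it cancels in comparisons)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * h * h))

def kde_entropy(particles, h=0.5):
    """Naive O(n^2) kernel-density entropy estimate, recomputed from scratch."""
    n = len(particles)
    sums = np.array([sum(kernel(p, q, h) for q in particles) for p in particles])
    return -np.mean(np.log(sums / n))

class IncrementalEntropy:
    """Hypothetical incremental estimator: caches per-particle kernel sums so that
    adding one particle costs O(n) kernel evaluations instead of O(n^2)."""
    def __init__(self, h=0.5):
        self.h = h
        self.particles = []
        self.sums = []  # sums[i] = sum_j K(x_i, x_j), including the j = i term

    def add(self, x):
        x = np.asarray(x, dtype=float)
        # Fold the new particle's contribution into every cached sum: O(n).
        for i, p in enumerate(self.particles):
            self.sums[i] += kernel(p, x, self.h)
        # Kernel sum for the new particle itself; K(x, x) = 1 for the Gaussian kernel.
        s_new = 1.0 + sum(kernel(p, x, self.h) for p in self.particles)
        self.particles.append(x)
        self.sums.append(s_new)

    def entropy(self):
        n = len(self.particles)
        return -np.mean(np.log(np.array(self.sums) / n))
```

Adding the k-th particle touches only the k cached sums, so building a belief of n particles costs O(n²) in total rather than the O(n³) incurred by recomputing the estimate after every insertion; the two estimators agree to numerical precision.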