
Prospect-theoretic Q-learning

2021-04-12

Vivek S. Borkar, Siddharth Chandak


Abstract

We consider a prospect theoretic version of the classical Q-learning algorithm for discounted reward Markov decision processes, wherein the controller perceives a distorted and noisy future reward, modeled by a nonlinearity that accentuates gains and underrepresents losses relative to a reference point. We analyze the asymptotic behavior of the scheme via its limiting differential equation, using the theory of monotone dynamical systems. Specifically, we show convergence to equilibria, and establish some qualitative facts about the equilibria themselves.
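The abstract describes a Q-learning iteration in which the one-step future reward is passed through a prospect-theoretic nonlinearity before the update. A minimal sketch of one plausible such scheme is below; the distortion function, its parameters (`gain_exp`, `loss_scale`, `ref`), and the placement of the nonlinearity on the one-step return are all illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def prospect_utility(x, ref=0.0, gain_exp=1.2, loss_scale=0.5):
    """Illustrative distortion: accentuates gains (exponent > 1) and
    underrepresents losses (scale < 1) relative to a reference point.
    Parameter names and values are assumptions, not from the paper."""
    g = np.asarray(x, dtype=float) - ref
    # abs() keeps the unused branch of np.where free of NaNs
    return np.where(g >= 0.0, np.abs(g) ** gain_exp, loss_scale * g)

def prospect_q_learning(P, R, gamma=0.9, steps=20000, seed=0):
    """Tabular Q-learning on a finite MDP where the one-step return is
    distorted by the prospect-theoretic nonlinearity before the update.
    P: transition probabilities, shape (S, A, S); R: rewards, shape (S, A)."""
    rng = np.random.default_rng(seed)
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    s = 0
    for n in range(1, steps + 1):
        a = int(rng.integers(A))                 # uniform exploration
        s_next = int(rng.choice(S, p=P[s, a]))
        # distorted one-step return, as suggested by the abstract
        target = prospect_utility(R[s, a] + gamma * Q[s_next].max())
        Q[s, a] += (1.0 / n ** 0.6) * (target - Q[s, a])  # diminishing steps
        s = s_next
    return Q
```

Running this on a small randomly generated MDP yields a finite Q-table; the paper's result is that the associated limiting ODE converges to equilibria, so one would expect the iterates to settle rather than oscillate.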
