
Markov flow policy -- deep MC

2024-05-01

Nitsan Soffair, Gilad Katz


Abstract

Discounted algorithms often encounter evaluation errors due to their reliance on short-term estimations, which can impede their efficacy in addressing simple, short-term tasks and impose undesired temporal discounts (\(\gamma\)). Interestingly, these algorithms are often tested without applying a discount, a phenomenon we refer to as the train-test bias. In response to these challenges, we propose the Markov Flow Policy (MFP), which utilizes a non-negative neural network flow to enable comprehensive forward-view predictions. Through integration into the TD7 codebase and evaluation on the MuJoCo benchmark, we observe significant performance improvements, positioning MFP as a straightforward, practical, and easily implementable solution within the domain of average-reward algorithms.
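As a rough illustration of the non-negativity constraint mentioned in the abstract, the sketch below shows one plausible way to build a network head whose outputs cannot go negative, using a softplus activation. The class name, layer sizes, and input dimensions are illustrative assumptions only, not the paper's actual MFP architecture.

```python
# Hypothetical sketch of a non-negative network "flow" head.
# All names and shapes here are assumptions for illustration;
# the paper's actual MFP design is not reproduced on this page.
import torch
import torch.nn as nn


class NonNegativeFlow(nn.Module):
    """Small MLP whose output is constrained to be non-negative via softplus,
    so accumulated forward-view predictions cannot drop below zero."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # softplus maps any real-valued output into (0, inf),
        # enforcing the non-negativity constraint
        return nn.functional.softplus(self.net(torch.cat([state, action], dim=-1)))


if __name__ == "__main__":
    # Dimensions chosen to resemble a MuJoCo task (e.g., HalfCheetah: 17/6)
    flow = NonNegativeFlow(state_dim=17, action_dim=6)
    s, a = torch.randn(32, 17), torch.randn(32, 6)
    assert (flow(s, a) >= 0).all()  # outputs are guaranteed non-negative
```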
