
Omega-Regular Objectives in Model-Free Reinforcement Learning

2018-09-26

Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, Dominik Wojtczak


Abstract

We provide the first solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. A key feature of our technique is the compilation of ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.
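To make the high-level idea concrete, here is a toy sketch of the general pattern the abstract describes: take the product of an MDP with a Büchi automaton for the objective, reward accepting product transitions, and run an off-the-shelf model-free learner (tabular Q-learning here). Everything in the sketch is hypothetical and simplified: the two-state MDP, the `MDP_NEXT` table, and the `automaton_step` function are invented for illustration, the automaton encodes "visit state 1 infinitely often", and the paper's actual reduction uses a more careful reachability gadget rather than plain per-transition rewards.

```python
import random

# Hypothetical 2-state MDP with actions 'a' and 'b' (deterministic for simplicity).
# Action 'b' always moves to state 1; action 'a' always moves to state 0.
MDP_NEXT = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
ACTIONS = ['a', 'b']

def automaton_step(q, s):
    # Toy Buechi automaton for "visit state 1 infinitely often":
    # its state simply records whether the MDP just entered state 1.
    return 1 if s == 1 else 0

# Tabular Q-learning on the product state (mdp_state, automaton_state),
# rewarding accepting transitions -- a crude stand-in for the paper's
# reduction to an almost-sure reachability problem.
Q = {}
alpha, gamma, eps = 0.1, 0.95, 0.1
random.seed(0)
for _ in range(5000):
    s, q = 0, 0
    for _ in range(30):
        key = (s, q)
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q.get((key, x), 0.0))
        s2 = MDP_NEXT[(s, a)]
        q2 = automaton_step(q, s2)
        r = 1.0 if q2 == 1 else 0.0  # reward when the automaton accepts
        best = max(Q.get(((s2, q2), x), 0.0) for x in ACTIONS)
        old = Q.get((key, a), 0.0)
        Q[(key, a)] = old + alpha * (r + gamma * best - old)
        s, q = s2, q2

# The greedy policy should learn to keep playing 'b', which visits
# state 1 on every step and hence satisfies the Buechi condition.
```

Because the learner only sees product states and scalar rewards, any off-the-shelf model-free algorithm can be substituted for the Q-learning loop, which is the point of the reduction.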
