SOTAVerified

Reinforcement Learning under Threats

2018-09-05 · Code Available

Victor Gallego, Roi Naveiro, David Rios Insua

Abstract

In several reinforcement learning (RL) scenarios, mainly in security settings, there may be adversaries trying to interfere with the reward-generating process. In this paper, we introduce Threatened Markov Decision Processes (TMDPs), a framework that supports a decision maker against a potential adversary in RL. Furthermore, we propose a level-k thinking scheme, resulting in a new learning framework for dealing with TMDPs. After introducing our framework and deriving theoretical results, we provide relevant empirical evidence via extensive experiments, showing the benefits of accounting for adversaries while the agent learns.
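To make the level-k idea concrete, here is a minimal, purely illustrative sketch (not the paper's implementation): a single-state TMDP where the reward depends on both the agent's action `a` and an adversary's action `b`. A level-1 agent keeps Q-values over joint actions and, when acting, averages them over a level-0 opponent model (here assumed uniform). All names, the toy reward matrix, and the hyperparameters are assumptions for illustration only.

```python
import random

# Hypothetical single-state TMDP: reward depends on the joint action (a, b).
AGENT_ACTIONS = [0, 1]
ADV_ACTIONS = [0, 1]
REWARD = {(0, 0): 1.0, (0, 1): -1.0,   # toy matching-pennies-like payoffs
          (1, 0): -1.0, (1, 1): 1.0}

ALPHA, EPS = 0.1, 0.1                  # learning rate, exploration rate
Q = {(a, b): 0.0 for a in AGENT_ACTIONS for b in ADV_ACTIONS}

def level0_opponent():
    # Level-0 adversary: non-strategic, uniformly random.
    return random.choice(ADV_ACTIONS)

def expected_q(a, p_adv):
    # Average Q over the opponent model p_adv (dict: action -> probability).
    return sum(p_adv[b] * Q[(a, b)] for b in ADV_ACTIONS)

def level1_policy():
    # Level-1 agent: epsilon-greedy against the level-0 opponent model.
    p_adv = {b: 1.0 / len(ADV_ACTIONS) for b in ADV_ACTIONS}
    if random.random() < EPS:
        return random.choice(AGENT_ACTIONS)
    return max(AGENT_ACTIONS, key=lambda a: expected_q(a, p_adv))

random.seed(0)
for _ in range(5000):
    a, b = level1_policy(), level0_opponent()
    r = REWARD[(a, b)]
    # Stateless Q-update; a full TMDP would also bootstrap from the next state.
    Q[(a, b)] += ALPHA * (r - Q[(a, b)])
```

The key design point the sketch illustrates is that the agent's Q-values are indexed by joint actions, so its policy can integrate out the adversary's behavior under whatever opponent model the level-k hierarchy prescribes.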
