What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes

2020-11-10 · NeurIPS 2020 · Code Available

Herman Yau, Chris Russell, Simon Hadfield

Abstract

We present a novel form of explanation for Reinforcement Learning, based around the notion of intended outcome. These explanations describe the outcome an agent is trying to achieve by its actions. We provide a simple proof that general methods for post-hoc explanations of this nature are impossible in traditional reinforcement learning. Rather, the information needed for the explanations must be collected in conjunction with training the agent. We derive approaches designed to extract local explanations based on intention for several variants of Q-function approximation and prove consistency between the explanations and the Q-values learned. We demonstrate our method on multiple reinforcement learning problems, and provide code to help researchers introspect their RL environments and algorithms.
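The core idea in the abstract — collecting explanation information in conjunction with training, and proving consistency with the learned Q-values — can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a toy tabular chain MDP (all names and parameters below are hypothetical) and trains, alongside Q(s, a), a belief map H(s, a) over states using the same TD update. Because both tables start at zero and receive structurally identical updates, the contraction H(s, a) · r tracks Q(s, a) exactly, mirroring the kind of explanation–Q-value consistency the paper proves:

```python
import numpy as np

# Hypothetical toy setup: a 5-state deterministic chain, tabular Q-learning.
# Reward is a function of the state reached, so rew = r[s2] at every step.
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.5
r = np.zeros(n_states)
r[-1] = 1.0  # reward only on reaching the terminal state

def step(s, a):
    """Action 1 moves right, action 0 moves left; rightmost state is terminal."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, r[s2], s2 == n_states - 1

Q = np.zeros((n_states, n_actions))            # ordinary action values
H = np.zeros((n_states, n_actions, n_states))  # belief map: expected discounted
                                               # future state visitations

rng = np.random.default_rng(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy behaviour policy
        a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(Q[s].argmax())
        s2, rew, done = step(s, a)
        a2 = int(Q[s2].argmax())
        e = np.eye(n_states)[s2]               # one-hot indicator of the next state
        # Identical TD scheme for both tables: the "target" for H is the state
        # indicator where the target for Q is the scalar reward.
        Q[s, a] += alpha * (rew + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a])
        H[s, a] += alpha * (e + (0.0 if done else gamma * H[s2, a2]) - H[s, a])
        s = s2

# Consistency check: contracting the belief map with the reward vector
# recovers the Q-table, so H serves as an "intended outcome" explanation.
print(np.allclose(Q, H @ r))
```

The explanation for a given (s, a) is then simply the row H[s, a]: which states the agent expects to visit, and how strongly, by taking that action — information that could not be reconstructed post hoc from Q alone.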
