What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes
Herman Yau, Chris Russell, Simon Hadfield
Code
- github.com/hmhyau/rl-intention — official implementation (referenced in paper), PyTorch, ★ 8
- github.com/lambertinialessandro/RL-FinalProject — community implementation, ★ 0
Abstract
We present a novel form of explanation for Reinforcement Learning, based on the notion of intended outcome. These explanations describe the outcome an agent is trying to achieve through its actions. We provide a simple proof that general methods for post-hoc explanations of this nature are impossible in traditional reinforcement learning; rather, the information needed for the explanations must be collected in conjunction with training the agent. We derive approaches designed to extract local explanations based on intention for several variants of Q-function approximation, and prove consistency between the explanations and the Q-values learned. We demonstrate our method on multiple reinforcement learning problems, and provide code to help researchers introspect their RL environments and algorithms.
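To make the core idea concrete, below is a minimal sketch (not the authors' exact algorithm) of what "collecting intention information in conjunction with training" could look like in a tabular setting: alongside each Q-update, a belief map `H[s, a, s']` is updated with the same learning rate and discount, estimating the discounted future state occupancy of taking action `a` in state `s`. The names `Q`, `H`, `update`, and `explain`, and the grid sizes, are illustrative assumptions; the paper additionally covers variants with Q-function approximation.

```python
import numpy as np

# Hypothetical sketch: tabular Q-learning plus a "belief map" H over future
# states, maintained in conjunction with the Q-table. H can later be read
# out as an intended-outcome explanation, which the proof in the paper shows
# is impossible post hoc from the Q-values alone.

n_states, n_actions = 16, 4   # assumed toy gridworld dimensions
alpha, gamma = 0.1, 0.99

Q = np.zeros((n_states, n_actions))
H = np.zeros((n_states, n_actions, n_states))  # belief over future states

def update(s, a, r, s_next, done):
    """One Q-learning step plus the matching belief-map update."""
    a_next = Q[s_next].argmax()
    target = r + (0.0 if done else gamma * Q[s_next, a_next])
    Q[s, a] += alpha * (target - Q[s, a])

    # The belief-map update mirrors the Q-update: the intended outcome of
    # (s, a) is the immediate successor plus the discounted intention of the
    # greedy next action.
    onehot = np.zeros(n_states)
    onehot[s_next] = 1.0
    h_target = onehot + (0.0 if done else gamma * H[s_next, a_next])
    H[s, a] += alpha * (h_target - H[s, a])

def explain(s, a, top_k=3):
    """Return the states the agent most 'intends' to visit from (s, a)."""
    ranked = np.argsort(H[s, a])[::-1][:top_k]
    return [(int(sp), float(H[s, a, sp])) for sp in ranked]
```

Because `H` and `Q` share the same update rule and bootstrap target structure, the extracted explanation stays consistent with the learned Q-values, in the spirit of the consistency result stated in the abstract.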