SOTAVerified

Considerations When Learning Additive Explanations for Black-Box Models

2018-01-26 · ICLR 2019 · Code Available

Sarah Tan, Giles Hooker, Paul Koch, Albert Gordo, Rich Caruana


Abstract

Many methods for explaining black-box models, whether local or global, are additive. In this paper, we study global additive explanations for non-additive models, focusing on four explanation methods: partial dependence, Shapley explanations adapted to a global setting, distilled additive explanations, and gradient-based explanations. We show that different explanation methods characterize non-additive components of a black-box model's prediction function in different ways. We use the concepts of main and total effects to anchor additive explanations, and quantitatively evaluate additive and non-additive explanations. Although distilled explanations are generally the most accurate additive explanations, non-additive explanations such as tree explanations, which explicitly model non-additive components, tend to be even more accurate. Despite this, our user study showed that machine learning practitioners were better able to leverage additive explanations for various tasks. These trade-offs should be weighed when deciding which explanation to trust and use to explain black-box models.
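As a rough illustration of one of the four methods the abstract names, the sketch below computes a one-dimensional partial dependence curve for a toy non-additive function. The function `partial_dependence` and the toy `blackbox` model are hypothetical names chosen for this example, not code from the paper; this is a minimal sketch of the standard partial dependence recipe (fix one feature to a grid value, average predictions over the data), not the authors' implementation.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """One-dimensional partial dependence of `predict` on one feature.

    For each grid value v, set the chosen feature column to v in every
    row of X and average the model's predictions. The resulting curve is
    a per-feature (additive) summary of the black-box function.
    """
    X = np.asarray(X, dtype=float)
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v          # intervene on one feature
        curve.append(predict(Xv).mean())  # marginalize over the rest
    return np.array(curve)

# Toy non-additive black box: f(x0, x1) = x0 * x1 + x0
blackbox = lambda X: X[:, 0] * X[:, 1] + X[:, 0]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
grid = np.linspace(-2, 2, 5)
pd_curve = partial_dependence(blackbox, X, feature=0, grid=grid)
```

Because the toy function contains an interaction term `x0 * x1`, any one-dimensional curve like this one necessarily averages that interaction away; the paper's point is that the four additive methods make different choices about how such non-additive components are absorbed into the per-feature curves.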
