Understanding Global Feature Contributions With Additive Importance Measures

2020-04-01 · NeurIPS 2020 · Code Available

Ian Covert, Scott Lundberg, Su-In Lee


Abstract

Understanding the inner workings of complex machine learning models is a long-standing problem, and most recent research has focused on local interpretability. To assess the role of individual input features in a global sense, we explore the perspective of defining feature importance through the predictive power associated with each feature. We introduce two notions of predictive power (model-based and universal) and formalize this approach with a framework of additive importance measures, which unifies numerous methods in the literature. We then propose SAGE, a model-agnostic method that quantifies predictive power while accounting for feature interactions. Our experiments show that SAGE can be calculated efficiently and that it assigns more accurate importance values than other methods.
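
To make the idea of "predictive power per feature" concrete, the sketch below shows one way a SAGE-style permutation-sampling estimator could look. It is a minimal illustration, not the authors' implementation: it assumes `model` is a callable mapping a batch of inputs to predictions, `loss_fn(pred, target)` returns a scalar loss, and held-out features are imputed with a single draw from the data's marginal distribution; the function name `sage_values` and its parameters are hypothetical.

```python
import numpy as np

def sage_values(model, X, y, loss_fn, n_permutations=256, rng=None):
    """Monte Carlo permutation-sampling estimate of per-feature
    predictive power (Shapley-style attribution of loss reduction).

    Missing features are imputed with a single sample from the data's
    marginal distribution -- a crude approximation of the expectation.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    phi = np.zeros(d)

    for _ in range(n_permutations):
        i = rng.integers(n)              # instance whose loss we track
        x, target = X[i], y[i]
        background = X[rng.integers(n)]  # marginal sample for held-out features
        perm = rng.permutation(d)        # random order of feature reveals

        x_masked = background.copy()     # start with all features "missing"
        prev_loss = loss_fn(model(x_masked[None])[0], target)
        for j in perm:
            x_masked[j] = x[j]           # reveal feature j
            loss = loss_fn(model(x_masked[None])[0], target)
            phi[j] += prev_loss - loss   # credit the loss reduction to j
            prev_loss = loss

    return phi / n_permutations
```

For example, with a scikit-learn regressor one might call `sage_values(model.predict, X, y, lambda p, t: (p - t) ** 2)`. In practice, variance is reduced by averaging the imputation over many background samples rather than one; as the number of sampled permutations grows, the estimates converge to each feature's Shapley value of predictive power.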
