
From unbiased MDI Feature Importance to Explainable AI for Trees

2020-03-26

Markus Loecher


Abstract

We attempt to give a unifying view of the various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure in random forests, Gini importance. In particular, we demonstrate a common thread among the out-of-bag-based bias correction methods and their connection to local explanations for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed explainable AI for trees algorithms.
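The bias referred to in the abstract can be illustrated with a small sketch (this is not the paper's code, just a hypothetical example using scikit-learn): default Gini (MDI) importance, computed on in-bag data, assigns substantial importance even to a pure-noise feature with many unique values, whereas permutation importance on held-out data does not.

```python
# Hypothetical illustration of the MDI bias discussed in the abstract:
# a high-cardinality noise feature receives inflated Gini importance,
# while held-out permutation importance ranks the features correctly.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n = 500
x0 = rng.randint(0, 2, n)            # weakly informative binary feature
x1 = rng.normal(size=n)              # pure continuous noise (high cardinality)
y = (x0 ^ (rng.rand(n) < 0.3)).astype(int)  # label depends only on x0

X = np.column_stack([x0, x1])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

mdi = rf.feature_importances_        # default (in-bag) Gini importance
perm = permutation_importance(rf, X_te, y_te, n_repeats=10,
                              random_state=0).importances_mean

# The noise feature x1 still captures a nontrivial share of MDI,
# but permutation importance on held-out data favors x0.
print("MDI:", mdi)
print("Permutation:", perm)
```

Out-of-bag-based corrections of the kind the paper surveys address exactly this gap by evaluating splits on data the tree did not see during training.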
