Novel Topological Shapes of Model Interpretability

2020-10-10 · NeurIPS Workshop TDA and Beyond 2020

Hendrik Jacob van Veen

Abstract

The most accurate models can be the most challenging to interpret. This paper advances interpretability analysis by combining insights from Mapper with recent interpretable machine-learning research. Enforcing new visualization constraints on Mapper, we produce a globally-to-locally interpretable visualization of the Explainable Boosting Machine. We demonstrate the usefulness of our approach on three data sets: cervical cancer risk, propaganda Tweets, and a loan default data set that was artificially hardened with severe concept drift.
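For readers unfamiliar with Mapper, the construction the abstract builds on can be sketched in a few lines: cover the lens (filter) values with overlapping intervals, cluster the points in each preimage, and connect clusters that share points. The following is a toy, stdlib-only illustration of the general Mapper algorithm for 1-D data with a 1-D lens, not the authors' implementation; the function name and parameters are ours.

```python
from itertools import combinations

def mapper_graph(points, lens, n_intervals=4, overlap=0.5, eps=1.0):
    """Toy Mapper: cover the lens range with overlapping intervals,
    single-linkage-cluster each preimage at distance threshold eps,
    and connect clusters that share points."""
    lo, hi = min(lens), max(lens)
    # Interval length/step chosen so n_intervals intervals with the
    # given fractional overlap exactly tile [lo, hi].
    length = (hi - lo) / (n_intervals * (1 - overlap) + overlap)
    step = length * (1 - overlap)
    nodes = []  # each node is a frozenset of point indices
    for i in range(n_intervals):
        a = lo + i * step
        b = a + length
        # Points whose lens value falls in this interval (with a small
        # tolerance for floating-point edge effects).
        idx = [j for j, v in enumerate(lens) if a - 1e-9 <= v <= b + 1e-9]
        # Single-linkage clustering at threshold eps.
        clusters = []
        for j in idx:
            merged = [c for c in clusters
                      if any(abs(points[j] - points[k]) <= eps for k in c)]
            rest = [c for c in clusters if c not in merged]
            clusters = rest + [set().union(*merged, {j}) if merged else {j}]
        nodes.extend(frozenset(c) for c in clusters)
    # An edge joins any two clusters that share at least one point.
    edges = {(i, j) for (i, a), (j, b) in combinations(enumerate(nodes), 2)
             if a & b}
    return nodes, edges

# Two well-separated clumps bridged by one point in the overlap region:
pts = [0.0, 0.5, 1.0, 5.0, 10.0, 10.5, 11.0]
nodes, edges = mapper_graph(pts, pts, n_intervals=2, overlap=0.5)
```

Real applications use a library such as kepler-mapper and a multivariate clusterer; the point here is only the cover/cluster/nerve structure that the paper's visualization constraints operate on.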
