SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.
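One simple, model-agnostic instance of such an explanation method is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below is illustrative only; the `predict` function and the data are hypothetical stand-ins for a black-box model.

```python
import random

# Hypothetical black-box model: we can only call predict(), not inspect it.
# Internally it leans heavily on feature 0, slightly on feature 1,
# and ignores feature 2.
def predict(x):
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def permutation_importance(predict, X, y, seed=0):
    """Score each feature by the increase in mean squared error
    after randomly shuffling that feature's column."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    baseline = mse([predict(x) for x in X])
    importances = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)  # break the feature-target association
        X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        importances.append(mse([predict(x) for x in X_perm]) - baseline)
    return importances

random.seed(1)
X = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [predict(x) for x in X]
imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates; feature 2 contributes nothing
```

Because the procedure only queries `predict`, it applies unchanged to any model, which is the defining property of the model-agnostic explanation methods listed below.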

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 526–537 of 537 papers

Title | Status | Hype
"What is Relevant in a Text Document?": An Interpretable Machine Learning Approach | Code | 0
Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems | — | 0
GENESIM: genetic extraction of a single, interpretable model | Code | 0
Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance | — | 0
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1
Meaningful Models: Utilizing Conceptual Structure to Improve Machine Learning Interpretability | — | 0
Interpretable Machine Learning Models for the Digital Clock Drawing Test | — | 0
Interpretable Two-level Boolean Rule Learning for Classification | — | 0
"Why Should I Trust You?": Explaining the Predictions of Any Classifier | Code | 1
Understanding Neural Networks Through Deep Visualization | Code | 0
Supersparse Linear Integer Models for Optimized Medical Scoring Systems | Code | 0
Predictive learning via rule ensembles | — | 0
Page 22 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | — | Unverified