SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
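Many of the explanation methods catalogued here are post-hoc and model-agnostic: they probe a trained model's predictions rather than its internals. As a minimal, hedged sketch of that idea (not a method from any specific paper above), the snippet below implements permutation feature importance: shuffle one feature's column, measure how much a score drops, and treat the drop as that feature's importance. All names (`permutation_importance`, the toy model, the R² helper) are illustrative choices, not an established API.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: average drop in `metric` when one
    feature's column is shuffled, breaking its link to the target."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute only feature j, in place
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy setup: the target depends only on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0]
predict = lambda X: 2.0 * X[:, 0]  # a model that perfectly recovers y
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

imp = permutation_importance(predict, X, y, r2)
# Shuffling feature 0 should cause a large score drop; features 1 and 2
# are ignored by the model, so their importances are zero.
```

Because the toy model ignores features 1 and 2, their permutation leaves predictions unchanged and their importance is exactly zero, while feature 0 dominates.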

Papers

Showing 71–80 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| GAM Changer: Editing Generalized Additive Models with Interactive Visualization | Code | 1 |
| Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks | Code | 1 |
| Born-Again Tree Ensembles | Code | 1 |
| Generalized and Scalable Optimal Sparse Decision Trees | Code | 1 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1 |
| BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1 |
| Grouped Feature Importance and Combined Features Effect Plot | Code | 1 |
| Hierarchical interpretations for neural network predictions | Code | 1 |
| ControlBurn: Nonlinear Feature Selection with Sparse Tree Ensembles | Code | 1 |
| Towards Better Understanding Attribution Methods | Code | 1 |
Page 8 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |