SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.
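One widely used family of such explanation methods is post-hoc feature attribution. As a minimal sketch (the model and data below are illustrative, not taken from any paper on this page), permutation feature importance measures how much a black-box model's error grows when one feature's values are shuffled:

```python
import random

def model(x):
    # Toy "black-box" model: feature 0 matters a lot, feature 1 barely at all.
    return 3.0 * x[0] + 0.1 * x[1]

def mse(xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, feature, trials=20, seed=0):
    """Average increase in error when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = mse(xs, ys)
    rises = []
    for _ in range(trials):
        col = [x[feature] for x in xs]
        rng.shuffle(col)
        shuffled = [list(x) for x in xs]
        for row, v in zip(shuffled, col):
            row[feature] = v
        rises.append(mse(shuffled, ys) - base)
    return sum(rises) / trials

xs = [[float(i), float(i % 5)] for i in range(30)]
ys = [model(x) for x in xs]
imp0 = permutation_importance(xs, ys, 0)
imp1 = permutation_importance(xs, ys, 1)
print(imp0 > imp1)  # feature 0 should look far more important
```

Because the explanation only needs model predictions, not model internals, this kind of method applies to any black-box model, which is why much of the literature listed below is post-hoc.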

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 71–80 of 537 papers

Title | Status | Hype
Neural Additive Models: Interpretable Machine Learning with Neural Nets | Code | 1
Understanding the decisions of CNNs: An in-model approach | Code | 1
BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1
Born-Again Tree Ensembles | Code | 1
Understanding Deep Networks via Extremal Perturbations and Smooth Masks | Code | 1
Improving performance of deep learning models with axiomatic attribution priors and expected gradients | Code | 1
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees | Code | 1
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability | Code | 1
Interpretable machine learning: definitions, methods, and applications | Code | 1
RISE: Randomized Input Sampling for Explanation of Black-box Models | Code | 1
Page 8 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified