SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
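As a concrete illustration of explaining a model's predictions, the sketch below computes permutation feature importance: shuffle one feature at a time and measure how much the model's error grows. This is a generic post-hoc explanation technique, not the method of any particular paper listed here; the toy data and linear model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit ordinary least squares and treat it as the "black box" model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Permutation importance: a larger error increase after shuffling a
# column means the model relied more on that feature.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - baseline)

print(importances)
```

On this toy data, feature 0 should receive by far the largest importance, feature 1 a small one, and feature 2 roughly zero.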

Papers

Showing 1–10 of 537 papers

Title | Status | Hype
GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints | Code | 5
SmoothGrad: removing noise by adding noise | Code | 4
Learning Important Features Through Propagating Activation Differences | Code | 4
PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics | Code | 3
Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting | Code | 3
Prompt-CAM: A Simpler Interpretable Transformer for Fine-Grained Analysis | Code | 2
Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl | Code | 2
OmniXAI: A Library for Explainable AI | Code | 2
Designing Inherently Interpretable Machine Learning Models | Code | 2
Neurosymbolic Association Rule Mining from Tabular Data | Code | 1
Page 1 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified