SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
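One widely used family of explanation methods measures how much a model's error grows when a single feature is scrambled. Below is a minimal, self-contained sketch of permutation feature importance; the toy `model`, the synthetic dataset, and all function names are illustrative assumptions, not part of any paper listed here.

```python
import random

# Toy "model" (assumption for illustration): depends strongly on
# feature 0 and only weakly on feature 1.
def model(x):
    return 3.0 * x[0] + 0.1 * x[1]

# Hypothetical dataset generated by the model itself.
random.seed(0)
xs = [[random.random(), random.random()] for _ in range(200)]
ys = [model(x) for x in xs]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def permutation_importance(model, xs, ys, feature_idx):
    """Increase in MSE when one feature column is shuffled.

    A large increase means the model relies heavily on that feature.
    """
    base = mse([model(x) for x in xs], ys)
    col = [x[feature_idx] for x in xs]
    random.shuffle(col)
    permuted = []
    for x, v in zip(xs, col):
        x2 = list(x)
        x2[feature_idx] = v
        permuted.append(x2)
    return mse([model(x) for x in permuted], ys) - base

imp0 = permutation_importance(model, xs, ys, 0)  # dominant feature
imp1 = permutation_importance(model, xs, ys, 1)  # weak feature
```

Because feature 0 carries a much larger coefficient, shuffling it should degrade the fit far more than shuffling feature 1; that gap is the "explanation" this method produces.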

Papers

Showing 526–537 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Socio-economic disparities and COVID-19 in the USA | Code | 0 |
| A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0 |
| Comparative Document Summarisation via Classification | Code | 0 |
| Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry | Code | 0 |
| Manipulating and Measuring Model Interpretability | Code | 0 |
| An exact counterfactual-example-based approach to tree-ensemble models interpretability | Code | 0 |
| Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning | Code | 0 |
| Margin Optimal Classification Trees | Code | 0 |
| Understanding Interventional TreeSHAP: How and Why it Works | Code | 0 |
| An Additive Instance-Wise Approach to Multi-class Model Interpretation | Code | 0 |
| midr: Learning from Black-Box Models by Maximum Interpretation Decomposition | Code | 0 |
| Probing hidden spin order with interpretable machine learning | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |