SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
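Several of the papers listed below concern model-agnostic explanations. As a hedged illustration of what such a method looks like, the sketch below implements permutation importance, one of the simplest model-agnostic explanation techniques: shuffle one input feature at a time and measure how much the model's error grows. The toy model, data, and function names are illustrative assumptions, not taken from any paper on this page.

```python
import random

def model(x):
    # Toy "trained" model: relies strongly on feature 0, weakly on
    # feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(model, X, y):
    # Mean squared error of the model over a dataset.
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, seed=0):
    """Error increase when each feature column is shuffled in turn."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        column = [x[j] for x in X]
        rng.shuffle(column)
        # Replace column j with its shuffled version, leave the rest intact.
        X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, column)]
        importances.append(mse(model, X_perm, y) - baseline)
    return importances

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]  # labels generated by the toy model itself

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; feature 2, which the model ignores, scores 0
```

Because the technique only needs black-box access to `model`, the same code works for any predictor, which is what "model-agnostic" means in the titles above.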

Papers

Showing 51–60 of 537 papers

Title | Status | Hype
A comprehensive interpretable machine learning framework for Mild Cognitive Impairment and Alzheimer's disease diagnosis | — | 0
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability | — | 0
Data-driven model reconstruction for nonlinear wave dynamics | — | 0
MCCE: Missingness-aware Causal Concept Explainer | — | 0
Expert Study on Interpretable Machine Learning Models with Missing Data | — | 0
Learning Model Agnostic Explanations via Constraint Programming | — | 0
Cross- and Intra-image Prototypical Learning for Multi-label Disease Diagnosis and Interpretation | Code | 1
Learning local discrete features in explainable-by-design convolutional neural networks | Code | 0
Graph Learning for Numeric Planning | Code | 1
Info-CELS: Informative Saliency Map Guided Counterfactual Explanation | — | 0
Page 6 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified