SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 311–320 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Taming Waves: A Physically-Interpretable Machine Learning Framework for Realizable Control of Wave Dynamics | | 0 |
| Techniques for Interpretable Machine Learning | | 0 |
| Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification | | 0 |
| Tensor Polynomial Additive Model | | 0 |
| The Contextual Lasso: Sparse Linear Models via Deep Neural Networks | | 0 |
| The Doctor Just Won't Accept That! | | 0 |
| The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR | | 0 |
| The Most Important Features in Generalized Additive Models Might Be Groups of Features | | 0 |
| The Partial Response Network: a neural network nomogram | | 0 |
| The Promise and Peril of Human Evaluation for Model Interpretability | | 0 |
Page 32 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |