SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 421-430 of 537 papers

| Title | Status | Hype |
| Predicting Postoperative Stroke in Elderly SICU Patients: An Interpretable Machine Learning Model Using MIMIC Data | | 0 |
| Predicting Treatment Response in Body Dysmorphic Disorder with Interpretable Machine Learning | | 0 |
| Predictive learning via rule ensembles | | 0 |
| Interpretable Machine Learning: Moving From Mythos to Diagnostics | | 0 |
| Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems | | 0 |
| Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning | | 0 |
| An Attention-based Spatio-Temporal Neural Operator for Evolving Physics | | 0 |
| Towards Explaining Hyperparameter Optimization via Partial Dependence Plots | | 0 |
| Analyzing Country-Level Vaccination Rates and Determinants of Practical Capacity to Administer COVID-19 Vaccines | | 0 |
| Towards making NLG a voice for interpretable Machine Learning | | 0 |
Page 43 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |