SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 281–290 of 537 papers

Title | Status | Hype
Predicting Treatment Response in Body Dysmorphic Disorder with Interpretable Machine Learning | | 0
Predictive learning via rule ensembles | | 0
Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems | | 0
Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning | | 0
Quantifying and Learning Disentangled Representations with Limited Supervision | | 0
Ranking Facts for Explaining Answers to Elementary Science Questions | | 0
Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App | | 0
Recent advances in interpretable machine learning using structure-based protein representations | | 0
Reconstruction and analysis of negatively buoyant jets with interpretable machine learning | | 0
Reducing Optimism Bias in Incomplete Cooperative Games | | 0
Page 29 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified