SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field involves devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
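As a concrete illustration of the explanation methods this topic covers, below is a minimal sketch of permutation feature importance, one common model-agnostic technique: a feature matters if shuffling its column degrades the model's predictions. The toy `model` function and the `permutation_importance` helper are hypothetical, written here for illustration only.

```python
import numpy as np

# Toy "model" (hypothetical): depends strongly on feature 0, weakly on feature 1.
def model(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each feature = average increase in mean squared error
    when that feature's column is randomly shuffled (illustrative helper)."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target relationship
            increases.append(np.mean((model(Xp) - y) ** 2) - base_mse)
        importances[j] = np.mean(increases)
    return importances

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = model(X)  # labels generated by the model itself, so base error is zero
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 should dominate
```

Because the model's output is driven almost entirely by feature 0, shuffling that column inflates the error far more than shuffling feature 1, which is exactly the ranking an explanation method should recover.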

Papers

Showing 441–450 of 537 papers

Title | Status | Hype
Interpretable machine learning models: a physics-based view | — | 0
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications | — | 0
Explaining Groups of Points in Low-Dimensional Representations | Code | 0
Interpretability of machine learning based prediction models in healthcare | — | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | — | 0
Interpretable Machine Learning Model for Early Prediction of Mortality in Elderly Patients with Multiple Organ Dysfunction Syndrome (MODS): a Multicenter Retrospective Study and Cross Validation | — | 0
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency | — | 0
Extending Class Activation Mapping Using Gaussian Receptive Field | — | 0
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | — | 0
Page 45 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified