SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
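Many of the explanation methods catalogued below are post-hoc: they probe an already-trained model rather than changing it. As a minimal, hypothetical sketch (the model and all names here are illustrative, not taken from any listed paper), one widely used technique is permutation feature importance: shuffle one feature's values and measure how much the model's error grows.

```python
import random

# Toy "black-box" model: a fixed linear function of two features.
# Feature 0 has a much larger weight, so it should score as more important.
def model(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(xs, ys, predict):
    """Mean squared error of `predict` on dataset (xs, ys)."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, predict, feature, seed=0):
    """Increase in MSE after shuffling one feature column.

    A large increase means the model relied heavily on that feature.
    """
    rng = random.Random(seed)
    baseline = mse(xs, ys, predict)
    column = [x[feature] for x in xs]
    rng.shuffle(column)
    xs_perm = [list(x) for x in xs]
    for row, value in zip(xs_perm, column):
        row[feature] = value
    return mse(xs_perm, ys, predict) - baseline

# Synthetic data whose labels come from the model itself (baseline error 0).
xs = [[float(i), float(i % 3)] for i in range(30)]
ys = [model(x) for x in xs]

imp0 = permutation_importance(xs, ys, model, feature=0)
imp1 = permutation_importance(xs, ys, model, feature=1)
# imp0 should dominate imp1, matching the weights 3.0 vs 0.5.
```

This is the model-agnostic flavor of explanation; other papers in the list pursue inherently interpretable models (e.g. monotonic or prototype-based networks) instead.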

Papers

Showing 461–470 of 537 papers

Title | Status | Hype
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency | | 0
Extending Class Activation Mapping Using Gaussian Receptive Field | | 0
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | | 0
Exploring Interpretability for Predictive Process Analytics | | 0
Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning | | 0
A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0
Topological data analysis of zebrafish patterns | | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
MonoNet: Towards Interpretable Models by Learning Monotonic Features | | 0
Page 47 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified