SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
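One widely used family of explanation methods measures how much a model's error grows when a single input feature is scrambled (permutation importance). The sketch below is a minimal, self-contained illustration of that idea in pure Python; the toy `model` function and synthetic data are assumptions for demonstration, not part of any paper listed here.

```python
import random

# Toy "black-box" model: the prediction depends strongly on feature 0
# and only weakly on feature 1.
def model(x):
    return 3.0 * x[0] + 0.1 * x[1]

# Synthetic dataset whose targets come directly from the model.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]

def mse(X, y):
    """Mean squared error of the model on (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Increase in error when one feature's values are shuffled."""
    baseline = mse(X, y)
    shuffled = [row[feature] for row in X]
    random.shuffle(shuffled)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled)]
    return mse(X_perm, y) - baseline

importances = [permutation_importance(X, y, f) for f in range(2)]
print(importances)  # feature 0 should score far higher than feature 1
```

Because the score is computed only from model inputs and outputs, the same procedure applies to any predictor, which is why model-agnostic techniques like this are a recurring theme in the papers below.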

Papers

Showing 451–475 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | | 0 |
| Adversarial Attacks and Defenses: An Interpretation Perspective | | 0 |
| A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0 |
| Ontology-based Interpretable Machine Learning for Textual Data | Code | 0 |
| Interpretable machine learning models: a physics-based view | | 0 |
| Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications | | 0 |
| Explaining Groups of Points in Low-Dimensional Representations | Code | 0 |
| Interpretability of machine learning based prediction models in healthcare | | 0 |
| Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | | 0 |
| Interpretable Machine Learning Model for Early Prediction of Mortality in Elderly Patients with Multiple Organ Dysfunction Syndrome (MODS): a Multicenter Retrospective Study and Cross Validation | | 0 |
| Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0 |
| One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency | | 0 |
| Extending Class Activation Mapping Using Gaussian Receptive Field | | 0 |
| Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | | 0 |
| Exploring Interpretability for Predictive Process Analytics | | 0 |
| Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning | | 0 |
| A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0 |
| Topological data analysis of zebrafish patterns | | 0 |
| Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0 |
| MonoNet: Towards Interpretable Models by Learning Monotonic Features | | 0 |
| Interpretable Convolutional Neural Networks for Preterm Birth Classification | | 0 |
| MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis | Code | 0 |
| Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach | | 0 |
| Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks | Code | 0 |
| The Partial Response Network: a neural network nomogram | | 0 |
Page 19 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |