SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
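As a concrete illustration of one common post-hoc explanation technique (not tied to any specific paper listed below), permutation feature importance measures how much a model's accuracy drops when a single feature column is shuffled: a large drop suggests the model relies on that feature. The sketch below is a minimal, assumption-laden example using a hypothetical hand-written "black-box" classifier and a toy dataset; real uses would apply the same idea to a trained model and held-out data.

```python
import random

# A tiny hand-written "model": predicts 1 if a weighted sum of features
# exceeds a threshold. This stands in for any black-box predictor.
def predict(x):
    return 1 if 0.8 * x[0] + 0.1 * x[1] > 0.5 else 0

# Toy dataset of (features, label) pairs: feature 0 is informative,
# feature 1 carries almost no signal.
data = [([0.9, 0.2], 1), ([0.1, 0.9], 0), ([0.8, 0.1], 1), ([0.2, 0.8], 0),
        ([0.7, 0.7], 1), ([0.3, 0.3], 0), ([0.9, 0.9], 1), ([0.1, 0.1], 0)]

def accuracy(dataset):
    return sum(predict(x) == y for x, y in dataset) / len(dataset)

def permutation_importance(dataset, feature, seed=0):
    """Drop in accuracy when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    column = [x[feature] for x, _ in dataset]
    rng.shuffle(column)
    shuffled = [(x[:feature] + [v] + x[feature + 1:], y)
                for (x, y), v in zip(dataset, column)]
    return accuracy(dataset) - accuracy(shuffled)
```

On this toy data the importance of feature 0 is at least that of feature 1, matching the intuition that the model's decisions hinge on the informative feature. The averaging over several shuffles that a production implementation would do is omitted here for brevity.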

Papers

Showing 451–460 of 537 papers

Title | Status | Hype
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | | 0
Adversarial Attacks and Defenses: An Interpretation Perspective | | 0
A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0
Ontology-based Interpretable Machine Learning for Textual Data | Code | 0
Interpretable machine learning models: a physics-based view | | 0
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications | | 0
Explaining Groups of Points in Low-Dimensional Representations | Code | 0
Interpretability of machine learning based prediction models in healthcare | | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | | 0
Interpretable Machine Learning Model for Early Prediction of Mortality in Elderly Patients with Multiple Organ Dysfunction Syndrome (MODS): a Multicenter Retrospective Study and Cross Validation | | 0
Page 46 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified