SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
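Many of the explanation methods catalogued below are model-agnostic and local: they probe a trained model around one input to see which features drove that single prediction. A minimal sketch of one such perturbation-based approach is below; the model, function names, and baseline value are illustrative assumptions, not drawn from any paper on this page.

```python
import numpy as np

def local_importance(predict, x, baseline=0.0):
    """Score each feature by how much replacing it with a baseline
    value changes the model's output for this one input."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline          # occlude one feature at a time
        scores.append(base_pred - predict(x_pert))
    return np.array(scores)

# Toy linear "black box" standing in for any trained model:
weights = np.array([0.5, 2.0, -1.0])
predict = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
print(local_importance(predict, x))  # for a linear model, recovers the weights
```

For a linear model the recovered scores equal the weights, which is a useful sanity check; real methods in this area (e.g., partial dependence or MaxSAT rule learning, both represented in the list below) differ mainly in how they perturb inputs and aggregate the resulting output changes.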

Papers

Showing 421–430 of 537 papers

Title | Status | Hype
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
Comorbid anxiety predicts lower odds of depression improvement during smartphone-delivered psychotherapy | Code | 0
COLOGNE: Coordinated Local Graph Neighborhood Sampling | Code | 0
iNNvestigate neural networks! | Code | 0
Explaining Hyperparameter Optimization via Partial Dependence Plots | Code | 0
AutoScore-Imbalance: An interpretable machine learning tool for development of clinical scores with rare events data | Code | 0
MLIC: A MaxSAT-Based framework for learning interpretable classification rules | Code | 0
Supersparse Linear Integer Models for Optimized Medical Scoring Systems | Code | 0
Quantifying and Learning Linear Symmetry-Based Disentanglement | Code | 0
Modelling wildland fire burn severity in California using a spatial Super Learner approach | Code | 0
Page 43 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified