SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to make machine-learned decisions amenable to human oversight and understanding. Much of the work in the field consists of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
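A minimal sketch of one common prediction-explanation technique mentioned above, permutation feature importance, assuming scikit-learn is available (the dataset and model here are illustrative choices, not taken from any listed paper):

```python
# Permutation feature importance: shuffle each feature in turn and
# measure the drop in model accuracy; larger drops indicate features
# the model relies on more heavily. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(load_iris().feature_names,
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Because it only needs predictions, not model internals, this kind of explanation applies to any black-box classifier.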

Papers

Showing 431–440 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Deducing neighborhoods of classes from a fitted model | — | 0 |
| Socio-economic disparities and COVID-19 in the USA | Code | 0 |
| Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0 |
| On the Use of Interpretable Machine Learning for the Management of Data Quality | — | 0 |
| Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | — | 0 |
| An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models | — | 0 |
| DeepNNK: Explaining deep models and their generalization using polytope interpolation | Code | 0 |
| Relative Feature Importance | Code | 0 |
| On quantitative aspects of model interpretability | — | 0 |
| Variable Selection via Thompson Sampling | — | 0 |
Page 44 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified |