SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
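One widely used family of methods for explaining a model's predictions is perturbation-based feature importance. As a minimal sketch (the toy model and data below are hypothetical, chosen only to illustrate probing a black-box predictor), permutation importance measures how much a model's error grows when one feature's values are shuffled:

```python
import random

def model(x):
    # A stand-in "black box": depends strongly on x[0], weakly on x[1].
    return 3.0 * x[0] + 0.1 * x[1]

def mse(predict, X, y):
    # Mean squared error of the predictor on (X, y).
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(predict, X, y, feature, seed=0):
    """Increase in error when one feature's column is shuffled."""
    rng = random.Random(seed)
    baseline = mse(predict, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(predict, X_perm, y) - baseline

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [model(x) for x in X]  # labels generated by the toy model itself

imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
# Shuffling the influential feature (index 0) should raise the error
# far more than shuffling the weak one (index 1).
```

Model-agnostic probes like this one need only prediction access, which is why they are a common baseline in the interpretability literature.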

Papers

Showing 411–420 of 537 papers

Title | Status | Hype
On the Use of Interpretable Machine Learning for the Management of Data Quality | — | 0
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | — | 0
An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models | — | 0
DeepNNK: Explaining deep models and their generalization using polytope interpolation | Code | 0
Modern Hopfield Networks and Attention for Immune Repertoire Classification | Code | 1
Relative Feature Importance | Code | 0
On quantitative aspects of model interpretability | — | 0
Variable Selection via Thompson Sampling | — | 0
Causality Learning: A New Perspective for Interpretable Machine Learning | — | 0
Generalized and Scalable Optimal Sparse Decision Trees | Code | 1
Page 42 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified