SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
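As a concrete illustration of "explaining the predictions of machine learning models", below is a minimal sketch of one common model-agnostic technique, permutation importance: shuffle one feature at a time and measure how much the model's error grows. The dataset, coefficients, and model here are entirely illustrative, not taken from any paper on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares model as the "black box" to explain.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: a larger error increase after shuffling a
# column means the model relies more heavily on that feature.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(Xp, y, w) - baseline)

print(importances)
```

On this synthetic data the importance scores recover the generating structure: feature 0 dominates, feature 1 matters slightly, and feature 2 scores near zero.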

Papers

Showing 201–210 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0 |
| Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0 |
| An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0 |
| How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0 |
| Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0 |
| AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data | Code | 0 |
| CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0 |
| An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0 |
| Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0 |
| Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0 |
Page 21 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |