SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
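As one concrete illustration of a post-hoc explanation method of the kind the papers below study, here is a minimal sketch of permutation feature importance: a model-agnostic technique that scores each feature by how much accuracy drops when that feature's column is shuffled. The function name and the toy model are illustrative, not taken from any paper listed here.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Score each feature by the average drop in accuracy when
    that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/label association
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy example: the label depends only on feature 0,
# so only feature 0 should receive a large importance score.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda A: (A[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y)
```

Because the toy model ignores features 1 and 2, shuffling them leaves predictions unchanged and their importance is exactly zero, while shuffling feature 0 roughly halves accuracy.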

Papers

Showing 191–200 of 537 papers

Title | Status | Hype
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0
Comorbid anxiety predicts lower odds of depression improvement during smartphone-delivered psychotherapy | Code | 0
Comparative Document Summarisation via Classification | Code | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0
AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data | Code | 0
Page 20 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | — | Unverified