SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods for better explaining the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 381–390 of 537 papers

Title | Status | Hype
Using Explainable Boosting Machine to Compare Idiographic and Nomothetic Approaches for Ecological Momentary Assessment Data | | 0
"Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot Decision Tree Induction and Embedding with Large Language Models | | 0
A hybrid machine learning framework for analyzing human decision making through learning preferences | | 0
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency | | 0
On Explaining Decision Trees | | 0
On Interpretability and Similarity in Concept-Based Machine Learning | | 0
Using Interpretable Machine Learning to Predict Maternal and Fetal Outcomes | | 0
Online Product Feature Recommendations with Interpretable Machine Learning | | 0
On quantitative aspects of model interpretability | | 0
On the definition and importance of interpretability in scientific machine learning | | 0
Page 39 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified