SOTAVerified

Interpretable Machine Learning

Interpretable Machine Learning aims to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
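Several of the papers below build on SHapley Additive exPlanations (SHAP), which attribute a model's prediction to its input features. As a minimal illustration (not from any listed paper), the sketch below computes exact Shapley values for a small hypothetical model by enumerating all feature coalitions, replacing "absent" features with baseline values; the model, feature names, and baseline are all assumptions for the example.

```python
from itertools import combinations
from math import factorial

# A toy black-box model whose prediction we want to explain
# (hypothetical; any function of the features would do).
def model(x):
    return 3.0 * x["age"] + 2.0 * x["income"] + x["age"] * x["debt"]

def shapley_values(model, x, baseline):
    """Exact Shapley attributions via enumeration of all coalitions.

    Absent features are replaced by baseline values, a common
    simplification used by SHAP-style explainers.
    """
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        value = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if g in coalition or g == f else baseline[g]
                          for g in features}
                without_f = {g: x[g] if g in coalition else baseline[g]
                             for g in features}
                value += weight * (model(with_f) - model(without_f))
        phi[f] = value
    return phi

x = {"age": 2.0, "income": 1.0, "debt": 4.0}
baseline = {"age": 0.0, "income": 0.0, "debt": 0.0}
phi = shapley_values(model, x, baseline)
# By the efficiency property, the attributions sum to
# model(x) - model(baseline).
```

Exact enumeration is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific shortcuts; this sketch is only meant to show what the attributions mean.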

Papers

Showing 301-310 of 537 papers

Title | Status | Hype
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance | | 0
Neuro-symbolic Models for Interpretable Time Series Classification using Temporal Logic Description | | 0
Interpretable Machine Learning for Power Systems: Establishing Confidence in SHapley Additive exPlanations | | 0
A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset | | 0
Tiny-HR: Towards an interpretable machine learning pipeline for heart rate estimation on edge devices | Code | 0
Interpretable Boosted Decision Tree Analysis for the Majorana Demonstrator | | 0
Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins | | 0
Using Model-Based Trees with Boosting to Fit Low-Order Functional ANOVA Models | | 0
Using Interpretable Machine Learning to Predict Maternal and Fetal Outcomes | | 0
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process | | 0
Page 31 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified