SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.
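To make "explaining the predictions of a model" concrete, the sketch below shows one common model-agnostic technique, permutation feature importance: shuffle one feature's column and measure how much the model's accuracy drops. The toy model, data, and function names here are purely illustrative assumptions, not drawn from any paper listed on this page.

```python
import random

# Hypothetical toy "model": the prediction is driven almost
# entirely by feature 0; feature 1 contributes very little.
def predict(row):
    return 1 if row[0] + 0.1 * row[1] > 0.5 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """Accuracy drop after shuffling one feature's column."""
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [list(row) for row in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return accuracy(X, y) - accuracy(X_perm, y)

# Illustrative data; labels are the model's own predictions,
# so baseline accuracy is 1.0 before any shuffling.
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3],
     [0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
y = [predict(row) for row in X]

drop0 = permutation_importance(X, y, 0)  # important feature
drop1 = permutation_importance(X, y, 1)  # near-irrelevant feature
```

Shuffling the dominant feature can flip predictions and lower accuracy, while shuffling the near-irrelevant feature leaves accuracy unchanged, so the accuracy drops rank the features by importance. Production code would average the drop over many shuffles (e.g. `sklearn.inspection.permutation_importance` does this).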

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 341–350 of 537 papers

Title | Status | Hype
An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0
Optimal Counterfactual Explanations in Tree Ensembles | Code | 1
Interpretable machine learning applied to on-farm biosecurity and porcine reproductive and respiratory syndrome virus | | 0
Automation for Interpretable Machine Learning Through a Comparison of Loss Functions to Regularisers | | 0
A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations | | 0
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals | Code | 1
An exact counterfactual-example-based approach to tree-ensemble models interpretability | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Analysis and classification of main risk factors causing stroke in Shanxi Province | | 0
Page 35 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified