SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 411-420 of 537 papers

Title | Status | Hype
Fast Parallel Exact Inference on Bayesian Networks: Poster | Code | 0
Fast classification of small X-ray diffraction datasets using data augmentation and deep neural networks | Code | 0
Causality-based Counterfactual Explanation for Classification Models | Code | 0
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0
AutoScore-Ordinal: An interpretable machine learning framework for generating scoring models for ordinal outcomes | Code | 0
Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0
PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression | Code | 0
Page 42 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified