SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 141-150 of 537 papers

Title | Status | Hype
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | Code | 0
Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0
iNNvestigate neural networks! | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Branches: Efficiently Seeking Optimal Sparse Decision Trees with AO* | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified