SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
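
To make "methods that better explain predictions" concrete, here is a minimal sketch of one widely used technique, permutation feature importance, using scikit-learn: each feature is shuffled in turn and the resulting drop in held-out accuracy is measured, so larger drops mark features the model relies on more. The dataset and model below are illustrative assumptions, not drawn from any paper listed on this page.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator would do here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times and record the mean drop in
# test accuracy caused by breaking that feature's link to the target.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")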

Papers

Showing 101–110 of 537 papers

Title | Status | Hype
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
Consistent Sparse Deep Learning: Theory and Computation | Code | 0
Conditional Feature Importance for Mixed Data | Code | 0
Contrastive Explanations with Local Foil Trees | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution | Code | 0
Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | - | Unverified