SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
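One common route to interpretability is to use models whose decision logic can be read directly. As a minimal sketch (scikit-learn and the iris dataset are illustrative assumptions here, not tools named by this page), a shallow decision tree can be trained and its learned rules printed as explicit if/else thresholds:

```python
# Minimal interpretability sketch: a shallow decision tree whose
# learned rules are directly human-readable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the tree as indented if/else rules, so every
# prediction can be traced back to explicit feature thresholds.
print(export_text(clf, feature_names=list(load_iris().feature_names)))
```

Post-hoc explanation methods (the focus of much of the work listed below) instead attach explanations to an already-trained, possibly opaque model.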

Papers

Showing 361–370 of 537 papers

Title | Hype
Motif-guided Time Series Counterfactual Explanations | 0
Multi-Agent Algorithmic Recourse | 0
A Novel Memetic Strategy for Optimized Learning of Classification Trees | 0
Interpretable Multimodal Machine Learning Analysis of X-ray Absorption Near-Edge Spectra and Pair Distribution Functions | 0
Multi-type Disentanglement without Adversarial Training | 0
Natively Interpretable Machine Learning and Artificial Intelligence: Preliminary Results and Future Directions | 0
Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users | 0
Near Optimal Decision Trees in a SPLIT Second | 0
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks | 0
Neural-ANOVA: Model Decomposition for Interpretable Machine Learning | 0
Page 37 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified