SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
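One widely used family of explanation methods perturbs a model's inputs and measures how its predictions change. The sketch below illustrates this idea with permutation feature importance; the toy linear model, the data, and the function names are hypothetical stand-ins, not code from any paper listed here.

```python
# Minimal sketch of permutation feature importance, a common
# perturbation-based explanation technique. The "model" and data
# below are hypothetical stand-ins for illustration only.
import random

def model(x):
    # Toy "learned" model: a linear score over three features.
    # Feature 0 has the largest weight, feature 2 is ignored.
    return 2.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def permutation_importance(predict, X, n_repeats=10, seed=0):
    """Score each feature by the mean absolute change in the model's
    predictions when that feature's column is randomly shuffled."""
    rng = random.Random(seed)
    baseline = [predict(x) for x in X]
    importances = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            column = [x[j] for x in X]
            rng.shuffle(column)
            # Replace feature j in every row with the shuffled values.
            perturbed = [x[:j] + [v] + x[j + 1:]
                         for x, v in zip(X, column)]
            total += sum(abs(predict(p) - b)
                         for p, b in zip(perturbed, baseline)) / len(X)
        importances.append(total / n_repeats)
    return importances

X = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0], [2.0, 1.0, 0.0]]
scores = permutation_importance(model, X)
# Feature 0 (weight 2.0) should score highest; feature 2 (weight 0.0)
# should score zero, since shuffling it never changes a prediction.
```

Shuffling a feature that the model relies on changes its predictions a lot, so that feature receives a high importance score; an ignored feature scores zero. Many of the local explanation methods surveyed on this page refine the same perturb-and-observe principle.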

Papers

Showing 501-510 of 537 papers

Title | Status | Hype
Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0
Counterfactual Explanations for Survival Prediction of Cardiovascular ICU Patients | Code | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Learning local discrete features in explainable-by-design convolutional neural networks | Code | 0
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0
SIBILA: A novel interpretable ensemble of general-purpose machine learning models applied to medical contexts | Code | 0
Contrastive Explanations with Local Foil Trees | Code | 0
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0
Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations | Code | 0
Consistent Sparse Deep Learning: Theory and Computation | Code | 0
Page 51 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified