SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has come in the form of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
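To make the idea of "explaining a prediction" concrete, here is a minimal sketch of one common family of local explanation methods: perturbation-based feature importance. This is an illustrative assumption, not the method of any specific paper listed below; the model, feature values, and baseline are all made up for the example. Each feature of a single input is replaced by a baseline value, and the resulting drop in the model's score is taken as that feature's local importance.

```python
def local_importance(predict, x, baseline):
    """Score drop when each feature of x is replaced by its baseline value."""
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]  # ablate one feature at a time
        importances.append(base_score - predict(perturbed))
    return importances

# Toy model: a fixed linear scorer standing in for any black-box predictor.
weights = [2.0, -1.0, 0.5]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

print(local_importance(predict, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0]))
# → [2.0, -1.0, 0.5]
```

For a linear model each importance reduces to w_i * (x_i - baseline_i), which is a useful sanity check; for a genuine black box the same loop gives a model-agnostic local attribution.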

Papers

Showing 101–125 of 537 papers

Title | Status | Hype
Contrastive Explanations with Local Foil Trees | Code | 0
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0
Altruist: Argumentative Explanations through Local Interpretations of Predictive Models | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
Consistent Sparse Deep Learning: Theory and Computation | Code | 0
Counterfactual Explanations for Survival Prediction of Cardiovascular ICU Patients | Code | 0
Conditional Feature Importance for Mixed Data | Code | 0
Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution | Code | 0
A Statistical Evaluation of Indoor LoRaWAN Environment-Aware Propagation for 6G: MLR, ANOVA, and Residual Distribution Analysis | Code | 0
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0
Comparative Document Summarisation via Classification | Code | 0
ProtoAttend: Attention-Based Prototypical Learning | Code | 0
Comorbid anxiety predicts lower odds of depression improvement during smartphone-delivered psychotherapy | Code | 0
COLOGNE: Coordinated Local Graph Neighborhood Sampling | Code | 0
Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
iNNvestigate neural networks! | Code | 0
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0
Developing a Fidelity Evaluation Approach for Interpretable Machine Learning | Code | 0
GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0
Page 5 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified