SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 101–150 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Counterfactual Explanations for Survival Prediction of Cardiovascular ICU Patients | Code | 0 |
| A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0 |
| Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks | Code | 0 |
| Contrastive Explanations with Local Foil Trees | Code | 0 |
| CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0 |
| Altruist: Argumentative Explanations through Local Interpretations of Predictive Models | Code | 0 |
| Kernel Banzhaf: A Fast and Robust Estimator for Banzhaf Values | Code | 0 |
| Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0 |
| Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution | Code | 0 |
| A Statistical Evaluation of Indoor LoRaWAN Environment-Aware Propagation for 6G: MLR, ANOVA, and Residual Distribution Analysis | Code | 0 |
| Consistent Sparse Deep Learning: Theory and Computation | Code | 0 |
| Conditional Feature Importance for Mixed Data | Code | 0 |
| Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0 |
| A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0 |
| ProtoAttend: Attention-Based Prototypical Learning | Code | 0 |
| LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0 |
| midr: Learning from Black-Box Models by Maximum Interpretation Decomposition | Code | 0 |
| Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? | Code | 0 |
| Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0 |
| Comparative Document Summarisation via Classification | Code | 0 |
| Comorbid anxiety predicts lower odds of depression improvement during smartphone-delivered psychotherapy | Code | 0 |
| A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0 |
| COLOGNE: Coordinated Local Graph Neighborhood Sampling | Code | 0 |
| Developing a Fidelity Evaluation Approach for Interpretable Machine Learning | Code | 0 |
| Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning | Code | 0 |
| Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0 |
| Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0 |
| Is it Fake? News Disinformation Detection on South African News Websites | Code | 0 |
| MLIC: A MaxSAT-Based framework for learning interpretable classification rules | Code | 0 |
| Interpretable Explanations of Black Boxes by Meaningful Perturbation | Code | 0 |
| Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0 |
| Interpretable Machine Learning for Survival Analysis | Code | 0 |
| Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0 |
| An Interpretable Approach to Load Profile Forecasting in Power Grids using Galerkin-Approximated Koopman Pseudospectra | Code | 0 |
| Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models | Code | 0 |
| A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning | Code | 0 |
| Challenging common interpretability assumptions in feature attribution explanations | Code | 0 |
| Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0 |
| CeFlow: A Robust and Efficient Counterfactual Explanation Framework for Tabular Data using Normalizing Flows | Code | 0 |
| From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | Code | 0 |
| Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0 |
| Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0 |
| How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0 |
| How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0 |
| Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0 |
| Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0 |
| iNNvestigate neural networks! | Code | 0 |
| Branches: Efficiently Seeking Optimal Sparse Decision Trees with AO* | Code | 0 |
| A Generic Approach for Reproducible Model Distillation | Code | 0 |
| Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0 |
Page 3 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |