SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
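One common family of the explanation methods described above is model-agnostic feature attribution. As a minimal sketch (not drawn from any of the papers listed below), permutation feature importance measures how much a model's score degrades when a single feature's values are shuffled, breaking its relationship with the target. The function name and the toy model here are illustrative, not part of any specific library API:

```python
import numpy as np

def permutation_importance(model, X, y, score_fn, n_repeats=5, seed=0):
    """Score drop when each feature column is shuffled (higher = more important)."""
    rng = np.random.default_rng(seed)
    base = score_fn(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break feature j's link to the target
            drops.append(base - score_fn(y, model.predict(Xp)))
        importances[j] = np.mean(drops)
    return importances
```

Because it only needs `predict` and a score function, the same routine applies to any fitted model; for a feature the model ignores, the score drop is (up to noise) zero.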

Papers

Showing 476–500 of 537 papers

Title | Status | Hype
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | Code | 0
Ontology-based Interpretable Machine Learning for Textual Data | Code | 0
X Hacking: The Threat of Misguided AutoML | Code | 0
Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure | Code | 0
Selecting Robust Features for Machine Learning Applications using Multidata Causal Discovery | Code | 0
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0
Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0
Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
Developing a Fidelity Evaluation Approach for Interpretable Machine Learning | Code | 0
Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization | Code | 0
A Generic Approach for Reproducible Model Distillation | Code | 0
Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments | Code | 0
DeepNNK: Explaining deep models and their generalization using polytope interpolation | Code | 0
The Reasonable Crowd: Towards evidence-based and interpretable models of driving behavior | Code | 0
Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? | Code | 0
Is it Fake? News Disinformation Detection on South African News Websites | Code | 0
Kernel Banzhaf: A Fast and Robust Estimator for Banzhaf Values | Code | 0
Kernel Learning Assisted Synthesis Condition Exploration for Ternary Spinel | Code | 0
PANTHER: Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning | Code | 0
The (Un)reliability of saliency methods | Code | 0
Tiny-HR: Towards an interpretable machine learning pipeline for heart rate estimation on edge devices | Code | 0
Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set | Code | 0
Perceptual Musical Features for Interpretable Audio Tagging | Code | 0
Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution | Code | 0
Page 20 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified