SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to make machine-learned decisions understandable and open to human oversight. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
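One of the most common families of explanation methods mentioned above is post-hoc feature attribution. As a minimal sketch, the following shows permutation feature importance: shuffle one feature column at a time and measure the drop in accuracy. The synthetic data and the fixed threshold "model" are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

random.seed(0)

# Synthetic data: feature 0 drives the label, feature 1 is pure noise.
# (Illustrative assumption; any fitted model and dataset would do.)
n = 500
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [1 if row[0] > 0 else 0 for row in X]

def predict(rows):
    # A fixed stand-in model that thresholds feature 0.
    return [1 if r[0] > 0 else 0 for r in rows]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

baseline = accuracy(predict(X), y)

importances = []
for j in range(2):
    # Permute column j to break its link with the label,
    # then record how much accuracy degrades.
    col = [row[j] for row in X]
    random.shuffle(col)
    Xp = [row[:] for row in X]
    for i, v in enumerate(col):
        Xp[i][j] = v
    importances.append(baseline - accuracy(predict(Xp), y))

print(importances)  # importance of feature 0 is large; feature 1 is 0
```

A large accuracy drop flags a feature the model relies on; here feature 1 scores exactly zero because the model never reads it. Library implementations (e.g. scikit-learn's `permutation_importance`) follow the same idea with repeated shuffles and averaging.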

Papers

Showing 481–490 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0 |
| Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0 |
| Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0 |
| Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0 |
| Developing a Fidelity Evaluation Approach for Interpretable Machine Learning | Code | 0 |
| Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization | Code | 0 |
| A Generic Approach for Reproducible Model Distillation | Code | 0 |
| Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments | Code | 0 |
| DeepNNK: Explaining deep models and their generalization using polytope interpolation | Code | 0 |
| The Reasonable Crowd: Towards evidence-based and interpretable models of driving behavior | Code | 0 |
Page 49 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |