SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
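Many of the papers below study model-agnostic feature attribution. As a minimal sketch of the idea, permutation feature importance shuffles one feature at a time and measures how much the model's error grows; the toy data, the stand-in "black box" model, and the helper function here are illustrative assumptions, not taken from any listed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def model(X):
    """Stand-in black box: here simply the true linear function."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean increase in MSE when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/target link
            importances[j] += np.mean((model(Xp) - y) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; feature 2 contributes nothing
```

Because the sketch only queries the model through its predictions, the same procedure applies unchanged to any fitted classifier or regressor.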

Papers

Showing 451–500 of 537 papers

Title | Status | Hype
Explainable Representation Learning of Small Quantum States | Code | 0
CeFlow: A Robust and Efficient Counterfactual Explanation Framework for Tabular Data using Normalizing Flows | Code | 0
Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear Regression | Code | 0
NFISiS: New Perspectives on Fuzzy Inference Systems for Renewable Energy Forecasting | Code | 0
Relative Feature Importance | Code | 0
Altruist: Argumentative Explanations through Local Interpretations of Predictive Models | Code | 0
Branches: Efficiently Seeking Optimal Sparse Decision Trees with AO* | Code | 0
Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices | Code | 0
Offensive Language Detection Explained | Code | 0
Interpretable Machine Learning for Survival Analysis | Code | 0
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal | Code | 0
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness | Code | 0
Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture | Code | 0
REPID: Regional Effect Plots with implicit Interaction Detection | Code | 0
Re-Ranking Words to Improve Interpretability of Automatically Generated Topics | Code | 0
Visualization of Convolutional Neural Networks for Monocular Depth Estimation | Code | 0
Online Learning of Decision Trees with Thompson Sampling | Code | 0
"What is Relevant in a Text Document?": An Interpretable Machine Learning Approach | Code | 0
Revealing the Phase Diagram of Kitaev Materials by Machine Learning: Cooperation and Competition between Spin Liquids | Code | 0
Towards Rigorous Interpretations: a Formalisation of Feature Attribution | Code | 0
A Statistical Evaluation of Indoor LoRaWAN Environment-Aware Propagation for 6G: MLR, ANOVA, and Residual Distribution Analysis | Code | 0
Worth of knowledge in deep learning | Code | 0
Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Code | 0
Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | Code | 0
Ontology-based Interpretable Machine Learning for Textual Data | Code | 0
X Hacking: The Threat of Misguided AutoML | Code | 0
Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure | Code | 0
Selecting Robust Features for Machine Learning Applications using Multidata Causal Discovery | Code | 0
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0
Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0
Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
Developing a Fidelity Evaluation Approach for Interpretable Machine Learning | Code | 0
Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization | Code | 0
A Generic Approach for Reproducible Model Distillation | Code | 0
Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments | Code | 0
DeepNNK: Explaining deep models and their generalization using polytope interpolation | Code | 0
The Reasonable Crowd: Towards evidence-based and interpretable models of driving behavior | Code | 0
Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? | Code | 0
Is it Fake? News Disinformation Detection on South African News Websites | Code | 0
Kernel Banzhaf: A Fast and Robust Estimator for Banzhaf Values | Code | 0
Kernel Learning Assisted Synthesis Condition Exploration for Ternary Spinel | Code | 0
PANTHER: Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning | Code | 0
The (Un)reliability of saliency methods | Code | 0
Tiny-HR: Towards an interpretable machine learning pipeline for heart rate estimation on edge devices | Code | 0
Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set | Code | 0
Perceptual Musical Features for Interpretable Audio Tagging | Code | 0
Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution | Code | 0
Page 10 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | – | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | – | Unverified