SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 251–260 of 537 papers

Title | Status | Hype
Interpretable Machine Learning based on Functional ANOVA Framework: Algorithms and Comparisons | | 0
Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction | | 0
Integrating White and Black Box Techniques for Interpretable Machine Learning | | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | | 0
Insights into the origin of halo mass profiles from machine learning | | 0
Data Representing Ground-Truth Explanations to Evaluate XAI Methods | | 0
Attention Mechanisms in Dynamical Systems: A Case Study with Predator-Prey Models | | 0
Info-CELS: Informative Saliency Map Guided Counterfactual Explanation | | 0
Data Model Design for Explainable Machine Learning-based Electricity Applications | | 0
Model-Agnostic Confidence Intervals for Feature Importance: A Fast and Powerful Approach Using Minipatch Ensembles | | 0
Page 26 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified