SOTAVerified

Interpretable Machine Learning

The goal of interpretable machine learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
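As a concrete illustration of what such explanation methods do, here is a minimal sketch of one widely used technique, permutation feature importance, using scikit-learn. It is a generic example, not drawn from any paper in the list below: each feature is shuffled in turn, and the resulting drop in model accuracy is taken as that feature's importance.

```python
# Minimal sketch: explaining a model's predictions via permutation
# feature importance (scikit-learn's model-agnostic implementation).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# larger drops mark features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Permutation importance is only one family of explanation methods; many of the papers below instead pursue inherently interpretable architectures (prototypes, additive models, concept traversals).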

Papers

Showing 76-100 of 537 papers

| Title | Status | Hype |
|---|---|---|
| BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1 |
| Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning | Code | 1 |
| Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition | Code | 1 |
| DISSECT: Disentangled Simultaneous Explanations via Concept Traversals | Code | 1 |
| Do Feature Attribution Methods Correctly Attribute Features? | Code | 1 |
| ExeKGLib: Knowledge Graphs-Empowered Machine Learning Analytics | Code | 1 |
| Improving Accuracy of Interpretability Measures in Hyperparameter Optimization via Bayesian Algorithm Execution | Code | 1 |
| Take 5: Interpretable Image Classification with a Handful of Features | Code | 1 |
| Neural Prototype Trees for Interpretable Fine-grained Image Recognition | Code | 1 |
| Anomaly Detection in Time Series with Triadic Motif Fields and Application in Atrial Fibrillation ECG Classification | Code | 1 |
| How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0 |
| Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0 |
| How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0 |
| midr: Learning from Black-Box Models by Maximum Interpretation Decomposition | Code | 0 |
| ProtoAttend: Attention-Based Prototypical Learning | Code | 0 |
| Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0 |
| An Additive Instance-Wise Approach to Multi-class Model Interpretation | Code | 0 |
| Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0 |
| Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0 |
| A Statistical Evaluation of Indoor LoRaWAN Environment-Aware Propagation for 6G: MLR, ANOVA, and Residual Distribution Analysis | Code | 0 |
| A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0 |
| GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0 |
| Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks | Code | 0 |
| Altruist: Argumentative Explanations through Local Interpretations of Predictive Models | Code | 0 |
| A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0 |
Page 4 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified |