SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
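One common family of explanation methods referenced in the papers below is perturbation-based local attribution: perturb each input feature of a single instance and measure how much the model's prediction shifts. The sketch below is illustrative only; the function name and toy model are assumptions, and real methods (e.g. LIME or SHAP) are considerably more sophisticated.

```python
# Minimal sketch of a perturbation-based local explanation.
# All names here are illustrative, not from any specific paper on this page.

def perturbation_importance(predict, x, delta=1.0):
    """Score each feature of instance x by how much nudging it by
    `delta` changes the model's prediction."""
    base = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta          # perturb one feature at a time
        scores.append(abs(predict(perturbed) - base))
    return scores

# Toy linear model: the scores recover the magnitude of each weight.
weights = [2.0, -0.5, 0.0]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

print(perturbation_importance(predict, [1.0, 1.0, 1.0]))  # → [2.0, 0.5, 0.0]
```

For a linear model the scores simply recover |weight| per feature; for nonlinear models the result depends on the instance and on `delta`, which is why evaluating such perturbation schemes is itself a research topic (see the time-series XAI paper below).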

Papers

Showing 91–100 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0 |
| An Additive Instance-Wise Approach to Multi-class Model Interpretation | Code | 0 |
| Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0 |
| Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0 |
| A Statistical Evaluation of Indoor LoRaWAN Environment-Aware Propagation for 6G: MLR, ANOVA, and Residual Distribution Analysis | Code | 0 |
| A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0 |
| GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0 |
| Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks | Code | 0 |
| Altruist: Argumentative Explanations through Local Interpretations of Predictive Models | Code | 0 |
| A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0 |
Page 10 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified |