SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
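To make the idea of "explaining the predictions of machine learning models" concrete, here is a minimal sketch of one simple model-agnostic technique, permutation feature importance: shuffle one feature column and measure how much the model's error grows. This is an illustrative example only, not the method of any specific paper listed below; the toy data, the least-squares "model", and the helper names are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

# "Model": ordinary least squares fit (stand-in for any black box).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ w

def mse(a, b):
    return float(np.mean((a - b) ** 2))

baseline = mse(y, predict(X))

def permutation_importance(feature):
    """Increase in MSE when one feature column is shuffled."""
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return mse(y, predict(Xp)) - baseline

importances = [permutation_importance(j) for j in range(3)]
# Feature 0 should dominate; feature 2 should be near zero.
print(importances)
```

Shuffling breaks the association between a feature and the target, so features the model relies on produce a large error increase, while irrelevant ones produce roughly none. Gradient-based methods such as Integrated Gradients or Grad-CAM (both listed below) pursue the same attribution goal for deep networks.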

Papers

Showing 81–90 of 537 papers

Title | Status | Hype
Hierarchical interpretations for neural network predictions | Code | 1
A Unified Approach to Interpreting Model Predictions | Code | 1
Axiomatic Attribution for Deep Networks | Code | 1
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1
"Why Should I Trust You?": Explaining the Predictions of Any Classifier | Code | 1
Can "consciousness" be observed from large language model (LLM) internal states? Dissecting LLM representations obtained from Theory of Mind test with Integrated Information Theory and Span Representation analysis | - | 0
The Most Important Features in Generalized Additive Models Might Be Groups of Features | - | 0
Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images | - | 0
Leveraging Predictive Equivalence in Decision Trees | Code | 0
Interpretable representation learning of quantum data enabled by probabilistic variational autoencoders | - | 0
Page 9 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified