
Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
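To make the task concrete, below is a minimal sketch of one common flavor of post-hoc explanation: perturbation-based local feature attribution. The model, dataset, baseline choice, and function name are illustrative assumptions, not taken from any paper listed on this page.

```python
# Minimal sketch: perturbation-based local feature attribution.
# Assumptions: a scikit-learn classifier, the dataset-mean as the
# "removal" baseline, and the hypothetical helper occlusion_attribution.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def occlusion_attribution(model, x, baseline):
    """Score each feature by how much replacing it with its baseline
    value changes the predicted probability of the positive class."""
    p_full = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.zeros_like(x, dtype=float)
    for j in range(x.shape[0]):
        x_perturbed = x.copy()
        x_perturbed[j] = baseline[j]          # "remove" feature j
        p_perturbed = model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]
        scores[j] = p_full - p_perturbed      # positive = feature supported the prediction
    return scores

baseline = X.mean(axis=0)                      # assumed baseline: dataset mean
scores = occlusion_attribution(model, X[0], baseline)
top = np.argsort(-np.abs(scores))[:5]
print("Most influential features for sample 0:", top, scores[top])
```

This is only one of many explanation strategies (gradient-based attribution, surrogate models, and inherently interpretable architectures are others); it illustrates the general idea of attributing a single prediction to input features.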

Papers

Showing 461–470 of 537 papers

Title | Status | Hype
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach | | 0
Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks | Code | 0
The Partial Response Network: a neural network nomogram | | 0
Detecting Heterogeneous Treatment Effect with Instrumental Variables | | 0
Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization | Code | 0
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning | | 0
Improving performance of deep learning models with axiomatic attribution priors and expected gradients | Code | 1
Model Bridging: Connection between Simulation Model and Neural Network | | 0
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks | | 0
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified