SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field devises methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
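One widely used family of explanation methods measures how much a model's error grows when a single input feature is randomly shuffled (permutation feature importance). The sketch below is not drawn from any paper listed on this page; it is a minimal, self-contained illustration using a hypothetical hand-coded "black-box" model and pure-Python data.

```python
import random

# Hypothetical black-box model (an assumption for illustration):
# it depends strongly on feature 0 and only weakly on feature 1.
def model(x0, x1):
    return 3.0 * x0 + 0.1 * x1

random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]
y = [model(a, b) for a, b in X]  # noise-free targets, so baseline error is 0

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature_idx):
    """Shuffle one feature column and return the resulting model error.

    Since the baseline error is zero here, the returned value is exactly
    the error increase caused by destroying that feature's information.
    """
    cols = [list(c) for c in zip(*X)]
    random.shuffle(cols[feature_idx])
    X_perm = list(zip(*cols))
    preds = [model(a, b) for a, b in X_perm]
    return mse(preds, y)

imp0 = permutation_importance(0)
imp1 = permutation_importance(1)
```

Because the model leans far more heavily on feature 0, shuffling it inflates the error far more than shuffling feature 1, which is the signal a practitioner reads off as "importance".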

Papers

Showing 471–480 of 537 papers

Title | Status | Hype
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | | 0
Tribe or Not? Critical Inspection of Group Differences Using TribalGram | | 0
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | | 0
Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins | | 0
Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life | | 0
Explainable Machine Learning for Categorical and Mixed Data with Lossless Visualization | | 0
Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version) | | 0
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach | | 0
META-ANOVA: Screening interactions for interpretable machine learning | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified