SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 481–490 of 537 papers

Title | Status | Hype
Data Representing Ground-Truth Explanations to Evaluate XAI Methods | | 0
Reliability Scores from Saliency Map Clusters for Improved Image-based Harvest-Readiness Prediction in Cauliflower | | 0
Accurate and Interpretable Machine Learning for Transparent Pricing of Health Insurance Plans | | 0
YASENN: Explaining Neural Networks via Partitioning Activation Sequences | | 0
Explaining Kernel Clustering via Decision Trees | | 0
Explaining Recurrent Neural Network Predictions in Sentiment Analysis | | 0
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability | | 0
Explanation as a process: user-centric construction of multi-level and multi-modal explanations | | 0
Explanations for Automatic Speech Recognition | | 0
Data Model Design for Explainable Machine Learning-based Electricity Applications | | 0
Page 49 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified