SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 311–320 of 537 papers

| Title | Status | Hype |
|---|---|---|
| Interpretable and Explainable Machine Learning for Materials Science and Chemistry | | 0 |
| A Scalable Inference Method For Large Dynamic Economic Systems | | 0 |
| Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set | Code | 0 |
| Interpretable Machine Learning for Resource Allocation with Application to Ventilator Triage | | 0 |
| Ranking Facts for Explaining Answers to Elementary Science Questions | | 0 |
| Strategizing University Rank Improvement using Interpretable Machine Learning and Data Visualization | | 0 |
| CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq | | 0 |
| Explanation as a process: user-centric construction of multi-level and multi-modal explanations | | 0 |
| Shapley variable importance clouds for interpretable machine learning | Code | 1 |
| Multi-Agent Algorithmic Recourse | | 0 |
Page 32 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |