SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 231–240 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study | — | 0 |
| Structural Node Embeddings with Homomorphism Counts | — | 0 |
| Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0 |
| Improving Clinical Decision Support through Interpretable Machine Learning and Error Handling in Electronic Health Records | — | 0 |
| An Interpretable Machine Learning Model with Deep Learning-based Imaging Biomarkers for Diagnosis of Alzheimer's Disease | — | 0 |
| Interpretable Machine Learning for Discovery: Statistical Challenges & Opportunities | — | 0 |
| Is Grad-CAM Explainable in Medical Images? | — | 0 |
| Measuring Perceived Trust in XAI-Assisted Decision-Making by Eliciting a Mental Model | — | 0 |
| Machine learning and Topological data analysis identify unique features of human papillae in 3D scans | — | 0 |
| A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0 |
Page 24 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified |