SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.
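As a minimal illustration of what "explaining the predictions" of a model can mean in practice, the sketch below implements permutation feature importance, a simple model-agnostic explanation technique: shuffle one feature's values and measure how much the model's prediction error grows. All names here (`permutation_importance`, the toy `predict` model) are hypothetical examples, not from the source.

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Model-agnostic explanation sketch: the importance of feature j is the
    increase in mean squared error after shuffling column j."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(x) - t) ** 2 for x, t in zip(rows, y)) / len(y)

    base = mse(X)
    importances = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)  # break the association between feature j and the target
        X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        importances.append(mse(X_perm) - base)
    return importances

# Toy "black-box" model that depends only on feature 0 (hypothetical example).
predict = lambda x: 3.0 * x[0]
X = [[float(i), float(i % 5)] for i in range(50)]
y = [predict(x) for x in X]

imp = permutation_importance(predict, X, y, n_features=2)
# imp[0] is large (shuffling the used feature hurts); imp[1] is zero
# (the model ignores feature 1, so shuffling it changes nothing).
```

Many of the methods listed below (e.g. Grad-CAM or additive models) pursue the same goal with more structure, attributing a prediction to input features rather than to global error changes.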

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 291–300 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Adversarial Attacks and Defenses: An Interpretation Perspective | | 0 |
| Automation for Interpretable Machine Learning Through a Comparison of Loss Functions to Regularisers | | 0 |
| Interpreting Neural Ranking Models using Grad-CAM | | 0 |
| Exploring Interpretability for Predictive Process Analytics | | 0 |
| Investigating Role of Personal Factors in Shaping Responses to Active Shooter Incident using Machine Learning | | 0 |
| Automated Learning of Interpretable Models with Quantified Uncertainty | | 0 |
| Is Grad-CAM Explainable in Medical Images? | | 0 |
| An interpretable machine learning system for colorectal cancer diagnosis from pathology slides | | 0 |
| Advancing Tabular Stroke Modelling Through a Novel Hybrid Architecture and Feature-Selection Synergy | | 0 |
| Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity | | 0 |
Page 30 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |