
Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
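A minimal sketch of one common post-hoc explanation technique of this kind: permutation feature importance with scikit-learn. The dataset, model, and hyperparameters below are illustrative assumptions and are not taken from any paper listed on this page.

```python
# Illustrative sketch: explain a trained model by permutation feature importance.
# Dataset and model choices here are assumptions for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# features whose permutation hurts the model most are the ones it relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most important features with their mean importance scores.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```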

Papers

Showing 411–420 of 537 papers

Title | Status | Hype
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs | | 0
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations | | 0
System Design for a Data-driven and Explainable Customer Sentiment Monitor | Code | 0
Extract Local Inference Chains of Deep Neural Nets | | 0
Multi-type Disentanglement without Adversarial Training | | 0
PANTHER: Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning | Code | 0
Enriched Annotations for Tumor Attribute Classification from Pathology Reports with Limited Labeled Data | | 0
Challenging common interpretability assumptions in feature attribution explanations | Code | 0
Interpretability and Explainability: A Machine Learning Zoo Mini-tour | | 0
Data Representing Ground-Truth Explanations to Evaluate XAI Methods | | 0
Page 42 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified