SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
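One widely used family of such explanation methods is permutation feature importance: score each input feature by how much a model's error grows when that feature's column is shuffled, severing its link to the target. A minimal sketch, using a hypothetical toy model for illustration (the `model` function and all names here are assumptions, not taken from any paper listed below):

```python
import numpy as np

# Toy "black-box" model: the output depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Score each feature by the average increase in mean-squared error
    when that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target association
            deltas.append(np.mean((predict(Xp) - y) ** 2) - base_mse)
        importances[j] = np.mean(deltas)
    return importances

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = model(X)  # noise-free target, so the baseline error is zero
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; feature 2 scores ~0
```

Because the method only needs a `predict` function, it applies to any model, which is why post-hoc, model-agnostic explanations of this kind feature prominently in the literature below.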

Papers

Showing 101–110 of 537 papers

| Title | Status | Hype |
| Enriched Annotations for Tumor Attribute Classification from Pathology Reports with Limited Labeled Data | | 0 |
| Establishing Nationwide Power System Vulnerability Index across US Counties Using Interpretable Machine Learning | | 0 |
| A Semiparametric Approach to Interpretable Machine Learning | | 0 |
| Advances in Multiple Instance Learning for Whole Slide Image Analysis: Techniques, Challenges, and Future Directions | | 0 |
| A Scalable Inference Method For Large Dynamic Economic Systems | | 0 |
| Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition | | 0 |
| A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning | | 0 |
| Evaluating Explanation Without Ground Truth in Interpretable Machine Learning | | 0 |
| Ensemble Interpretation: A Unified Method for Interpretable Machine Learning | | 0 |
| A Learning Theoretic Perspective on Local Explainability | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| 1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified |