SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
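As a concrete illustration of "explaining a prediction", the sketch below computes per-feature contributions for a linear model, one of the simplest local explanation techniques. This is a minimal, hypothetical example: the feature names, weights, and input values are assumptions for illustration, not taken from any paper listed here.

```python
def explain_linear(weights, bias, x):
    """Explain one prediction of a linear model by its per-feature
    contributions: contribution_i = w_i * x_i. The contributions plus
    the bias sum exactly to the prediction."""
    contributions = {name: w * xi for (name, w), xi in zip(weights.items(), x)}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical model and input for illustration.
weights = {"age": 0.5, "income": 1.2}
prediction, contributions = explain_linear(weights, bias=0.1, x=[2.0, 1.0])
# prediction = 0.1 + 0.5*2.0 + 1.2*1.0 = 2.3
# contributions = {"age": 1.0, "income": 1.2}
```

For linear models this decomposition is exact; most of the papers listed below tackle the harder problem of producing comparable attributions for non-linear models, where such exactness no longer holds.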

Papers

Showing 401–410 of 537 papers

Title | Status | Hype
Optimizing Binary Decision Diagrams with MaxSAT for classification | — | 0
Out-of-Distribution Detection of Melanoma using Normalizing Flows | — | 0
Overcoming Catastrophic Forgetting by XAI | — | 0
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data | — | 0
Parallel Coordinates for Discovery of Interpretable Machine Learning Models | — | 0
Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning | — | 0
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications | — | 0
Toward More Generalized Malicious URL Detection Models | — | 0
Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metal-organic frameworks | — | 0
PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification | — | 0
Page 41 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified