SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
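This page only lists papers, but as a minimal illustration of what "explaining the predictions of a model" can mean in practice, here is a permutation-importance sketch. The dataset, model, and function names are invented for illustration and do not come from any listed paper:

```python
import random

# Toy dataset (illustrative only): the target depends strongly on
# feature 0 and only weakly on feature 1.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [3.0 * x0 + 0.1 * x1 for x0, x1 in X]

def model(row):
    # Stand-in "black box" predictor; here it happens to be the
    # true generating function, so baseline error is zero.
    return 3.0 * row[0] + 0.1 * row[1]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(X, y, predict, feature, n_repeats=10, seed=1):
    """Average increase in error when one feature's column is shuffled.

    A large increase means the model relies heavily on that feature.
    """
    rng = random.Random(seed)
    base = mse([predict(r) for r in X], y)
    increases = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X]
        rng.shuffle(col)
        X_perm = [r[:feature] + [v] + r[feature + 1:]
                  for r, v in zip(X, col)]
        increases.append(mse([predict(r) for r in X_perm], y) - base)
    return sum(increases) / n_repeats

imp0 = permutation_importance(X, y, model, 0)
imp1 = permutation_importance(X, y, model, 1)
# Feature 0 drives the target far more than feature 1, so its
# importance score comes out much larger.
```

Permutation importance is only one family of explanation methods; many of the papers below instead build models that are interpretable by construction (e.g. trees or prototype-based networks).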

Papers

Showing 181–190 of 537 papers

Title | Status | Hype
Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning | | 0
Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models | | 0
A Novel Memetic Strategy for Optimized Learning of Classification Trees | | 0
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping | | 0
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | | 0
Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations | | 0
Cardiotocogram Biomedical Signal Classification and Interpretation for Fetal Health Evaluation | | 0
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process | | 0
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | | 0
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | | 0
Page 19 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified