SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
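As an illustrative (not site-provided) sketch of one common explanation method, the snippet below computes permutation feature importance: a feature matters to a model if shuffling its column degrades predictive accuracy. The toy data, the `model` function, and all names here are hypothetical stand-ins for any fitted black-box predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

def model(X):
    """Stand-in for any fitted black-box model (here, the true function)."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = increase in MSE when column j is shuffled."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target association
            scores.append(np.mean((model(Xp) - y) ** 2))
        importances[j] = np.mean(scores) - base_mse
    return importances

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates, feature 1 is small, feature 2 is ~0
```

Because the method only needs predictions, it applies to any model; this model-agnostic quality is shared by several approaches listed below, such as Shapley-value approximations.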

Papers

Showing 176–200 of 537 papers

Title (Hype)

Causal Dependence Plots (0)
Extending Class Activation Mapping Using Gaussian Receptive Field (0)
Extract Local Inference Chains of Deep Neural Nets (0)
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts (0)
Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs (0)
Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning (0)
Category-Specific Topological Learning of Metal-Organic Frameworks (0)
An Interpretable Machine Learning Model with Deep Learning-based Imaging Biomarkers for Diagnosis of Alzheimer's Disease (0)
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping (0)
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data (0)
Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations (0)
CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq (0)
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process (0)
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning (0)
Full interpretable machine learning in 2D with inline coordinates (0)
IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography (0)
Integrating White and Black Box Techniques for Interpretable Machine Learning (0)
Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion (0)
GAMformer: In-Context Learning for Generalized Additive Models (0)
Cardiotocogram Biomedical Signal Classification and Interpretation for Fetal Health Evaluation (0)
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain (0)
Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning (0)
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance (0)
An Interpretable Machine Learning Framework to Understand Bikeshare Demand before and during the COVID-19 Pandemic in New York City (0)
Explainable AI using expressive Boolean formulas (0)
Page 8 of 22

Benchmark Results

#  Model       Metric          Claimed  Verified  Status
1  Q-SENN      Top 1 Accuracy  85.9     (none)    Unverified
2  SLDD-Model  Top 1 Accuracy  85.7     (none)    Unverified