SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
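As a concrete illustration of explaining a model's predictions, the sketch below uses permutation feature importance, a common model-agnostic interpretability technique: each feature is shuffled in turn and the resulting drop in held-out accuracy indicates how much the model relies on that feature. The dataset and model choices here are illustrative assumptions, not drawn from any of the papers listed on this page.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative setup: a small tabular dataset and a black-box classifier.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time on held-out data and
# measure the accuracy drop; larger drops mean the model depends more on it.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Report the five most influential features by mean importance.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")
```

Because it treats the model purely as a prediction function, the same procedure applies unchanged to any fitted estimator, which is why permutation importance is a frequent baseline in the interpretability literature.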

Papers

Showing 151–200 of 537 papers

Title | Hype
Interpretability of machine learning based prediction models in healthcare | 0
Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion | 0
Challenges in Variable Importance Ranking Under Correlation | 0
A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset | 0
Insights into the origin of halo mass profiles from machine learning | 0
Info-CELS: Informative Saliency Map Guided Counterfactual Explanation | 0
Causal rule ensemble approach for multi-arm data | 0
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data | 0
Integrating White and Black Box Techniques for Interpretable Machine Learning | 0
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach | 0
A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations | 0
Causal Entropy and Information Gain for Measuring Causal Control | 0
Explainable Machine Learning for Categorical and Mixed Data with Lossless Visualization | 0
An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models | 0
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model | 0
Causality Learning: A New Perspective for Interpretable Machine Learning | 0
Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life | 0
Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins | 0
Interpretable Machine Learning Models for Predicting and Explaining Vehicle Fuel Consumption Anomalies | 0
Model-Agnostic Confidence Intervals for Feature Importance: A Fast and Powerful Approach Using Minipatch Ensembles | 0
Causal Dependence Plots | 0
Explaining Kernel Clustering via Decision Trees | 0
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability | 0
Explanation as a process: user-centric construction of multi-level and multi-modal explanations | 0
Explanations for Automatic Speech Recognition | 0
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | 0
Extending Class Activation Mapping Using Gaussian Receptive Field | 0
Extract Local Inference Chains of Deep Neural Nets | 0
Category-Specific Topological Learning of Metal-Organic Frameworks | 0
Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs | 0
Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning | 0
An Interpretable Machine Learning Model with Deep Learning-based Imaging Biomarkers for Diagnosis of Alzheimer's Disease | 0
A Novel Memetic Strategy for Optimized Learning of Classification Trees | 0
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping | 0
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | 0
Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations | 0
IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography | 0
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process | 0
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | 0
Full interpretable machine learning in 2D with inline coordinates | 0
Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models | 0
Interpretable Learning-to-Rank with Generalized Additive Models | 0
Cardiotocogram Biomedical Signal Classification and Interpretation for Fetal Health Evaluation | 0
GAMformer: In-Context Learning for Generalized Additive Models | 0
GAM(L)A: An econometric model for interpretable Machine Learning | 0
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | 0
Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning | 0
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance | 0
An Interpretable Machine Learning Framework to Understand Bikeshare Demand before and during the COVID-19 Pandemic in New York City | 0
Explainable AI using expressive Boolean formulas | 0
Page 4 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | – | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | – | Unverified