SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable human oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
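
As a rough illustration of the kind of post-hoc explanation method this task covers, the sketch below computes permutation feature importance with scikit-learn: each feature is shuffled in turn and the resulting drop in the model's held-out score is taken as that feature's importance. The dataset, model, and parameters are illustrative assumptions, not drawn from this page or from any paper listed below.

# A minimal sketch of one post-hoc explanation technique: permutation
# feature importance. Dataset and model are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times on held-out data and record the
# mean drop in accuracy; a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Print the five most important features with their mean +/- std importance.
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{data.feature_names[i]}: "
          f"{result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")

Because it only queries the fitted model for predictions, permutation importance is model-agnostic, which is one reason such post-hoc methods recur as baselines in the literature catalogued below.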

Papers

Showing 151–175 of 537 papers

Title | Status | Hype
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance | | 0
Causal rule ensemble approach for multi-arm data | | 0
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data | | 0
Generally-Occurring Model Change for Robust Counterfactual Explanations | | 0
A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations | | 0
An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models | | 0
GAM(L)A: An econometric model for interpretable Machine Learning | | 0
Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life | | 0
Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins | | 0
Causal Dependence Plots | | 0
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | | 0
Category-Specific Topological Learning of Metal-Organic Frameworks | | 0
Explainable Machine Learning for Categorical and Mixed Data with Lossless Visualization | | 0
Causal Entropy and Information Gain for Measuring Causal Control | | 0
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach | | 0
Causality Learning: A New Perspective for Interpretable Machine Learning | | 0
An Interpretable Machine Learning Model with Deep Learning-based Imaging Biomarkers for Diagnosis of Alzheimer's Disease | | 0
Hidden Citations Obscure True Impact in Science | | 0
Interpretable Machine Learning Models for Predicting and Explaining Vehicle Fuel Consumption Anomalies | | 0
Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning | | 0
Cardiotocogram Biomedical Signal Classification and Interpretation for Fetal Health Evaluation | | 0
Explaining Recurrent Neural Network Predictions in Sentiment Analysis | | 0
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability | | 0
Explanation as a process: user-centric construction of multi-level and multi-modal explanations | | 0
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | | 0
Page 7 of 22

Benchmark Results

# | Model | Metric | Claimed (%) | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified