SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
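To make the idea concrete, here is a minimal sketch of one common explanation method, permutation feature importance: shuffle one feature's values across rows and measure how much the model's accuracy drops. The toy dataset, the thresholding "model", and all function names below are illustrative assumptions, not anything from the papers listed on this page.

```python
import random

# Toy dataset (illustrative): the label depends only on feature 0.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def predict(row):
    # A fixed stand-in "model" that thresholds feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop after shuffling one feature's column across rows."""
    baseline = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return baseline - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: the model relies on feature 0
print(permutation_importance(X, y, 1))  # no drop: feature 1 is never used
```

The importance score is model-agnostic: it treats the predictor as a black box and only observes how performance degrades when a feature's information is destroyed, which is why variants of this idea recur across the interpretability literature.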

Papers

Showing 376–400 of 537 papers

| Title | Status | Hype |
|---|---|---|
| Full interpretable machine learning in 2D with inline coordinates | — | 0 |
| Discovering Interpretable Machine Learning Models in Parallel Coordinates | — | 0 |
| An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0 |
| AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data | Code | 0 |
| Interpretable machine learning applied to on-farm biosecurity and porcine reproductive and respiratory syndrome virus | — | 0 |
| Automation for Interpretable Machine Learning Through a Comparison of Loss Functions to Regularisers | — | 0 |
| A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations | — | 0 |
| An exact counterfactual-example-based approach to tree-ensemble models interpretability | Code | 0 |
| Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0 |
| Analysis and classification of main risk factors causing stroke in Shanxi Province | — | 0 |
| Towards Explaining Hyperparameter Optimization via Partial Dependence Plots | — | 0 |
| Comparing interpretability and explainability for feature selection | — | 0 |
| Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning | — | 0 |
| Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments | Code | 0 |
| Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | — | 0 |
| Causality-based Counterfactual Explanation for Classification Models | Code | 0 |
| Online Product Feature Recommendations with Interpretable Machine Learning | — | 0 |
| From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | Code | 0 |
| Towards Rigorous Interpretations: a Formalisation of Feature Attribution | Code | 0 |
| LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0 |
| Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure | Code | 0 |
| Out-of-Distribution Detection of Melanoma using Normalizing Flows | — | 0 |
| IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography | — | 0 |
| Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges | — | 0 |
Page 16 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top-1 Accuracy | 85.9 | — | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 | — | Unverified |