
Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
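
To make concrete what an explanation method looks like, the sketch below implements permutation feature importance, a standard model-agnostic technique: shuffle one feature at a time and measure how much held-out accuracy drops. The dataset, model, and hand-rolled `permutation_importance` helper are illustrative assumptions, not the method of any specific paper listed below.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled.

    Illustrative hand-rolled helper; scikit-learn ships an equivalent
    in sklearn.inspection.permutation_importance.
    """
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffling column j breaks its link to the target while
            # keeping its marginal distribution intact.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Fit an opaque model, then explain it globally via importance scores.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
scores = permutation_importance(model, X_test, y_test)
for name, score in sorted(zip(data.feature_names, scores), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")
```

A large importance score means the model's held-out accuracy depends heavily on that feature. Note this is a global explanation of the model; many of the papers below instead target local, per-prediction explanations.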

Papers

Showing 476–500 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life |  | 0 |
| Explainable Machine Learning for Categorical and Mixed Data with Lossless Visualization |  | 0 |
| Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version) |  | 0 |
| Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach |  | 0 |
| META-ANOVA: Screening interactions for interpretable machine learning |  | 0 |
| Data Representing Ground-Truth Explanations to Evaluate XAI Methods |  | 0 |
| Reliability Scores from Saliency Map Clusters for Improved Image-based Harvest-Readiness Prediction in Cauliflower |  | 0 |
| Accurate and Interpretable Machine Learning for Transparent Pricing of Health Insurance Plans |  | 0 |
| YASENN: Explaining Neural Networks via Partitioning Activation Sequences |  | 0 |
| Explaining Kernel Clustering via Decision Trees |  | 0 |
| Explaining Recurrent Neural Network Predictions in Sentiment Analysis |  | 0 |
| Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability |  | 0 |
| Explanation as a process: user-centric construction of multi-level and multi-modal explanations |  | 0 |
| Explanations for Automatic Speech Recognition |  | 0 |
| Data Model Design for Explainable Machine Learning-based Electricity Applications |  | 0 |
| Extending Class Activation Mapping Using Gaussian Receptive Field |  | 0 |
| Extract Local Inference Chains of Deep Neural Nets |  | 0 |
| Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs |  | 0 |
| A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset |  | 0 |
| Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning |  | 0 |
| Data-driven model reconstruction for nonlinear wave dynamics |  | 0 |
| Rethinking Interpretability in the Era of Large Language Models |  | 0 |
| Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping |  | 0 |
| Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning |  | 0 |
| Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations |  | 0 |
Page 20 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top-1 Accuracy | 85.9 |  | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 |  | Unverified |