SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 201–225 of 537 papers

Title | Status | Hype
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
GENESIM: genetic extraction of a single, interpretable model | Code | 0
Interpretable Explanations of Black Boxes by Meaningful Perturbation | Code | 0
GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0
Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0
AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data | Code | 0
An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
iNNvestigate neural networks! | Code | 0
AutoScore-Ordinal: An interpretable machine learning framework for generating scoring models for ordinal outcomes | Code | 0
Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0
AutoScore-Imbalance: An interpretable machine learning tool for development of clinical scores with rare events data | Code | 0
An exact counterfactual-example-based approach to tree-ensemble models interpretability | Code | 0
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
Loss-Optimal Classification Trees: A Generalized Framework and the Logistic Case | Code | 0
Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study | | 0
Page 9 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified