SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
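One common family of post-hoc explanation methods is perturbation-based: perturb an input feature and measure how much the model's predictions degrade. As a minimal illustrative sketch (not any specific paper's method), the snippet below computes permutation feature importance for a hypothetical black-box model; the model, data, and function names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

X = rng.normal(size=(500, 3))
y = model(X)

def permutation_importance(predict, X, y, n_repeats=10):
    """Importance of feature j = average rise in MSE when column j is shuffled."""
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            importances[j] += np.mean((predict(Xp) - y) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates, feature 1 is small, feature 2 is ~0
```

Because the model ignores feature 2 entirely, shuffling it leaves predictions unchanged and its importance is zero; the ranking of the other features tracks the magnitude of their coefficients.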

Papers

Showing 161–170 of 537 papers

| Title | Status | Hype |
|---|---|---|
| A Generic Approach for Reproducible Model Distillation | Code | 0 |
| Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0 |
| Interpretable Explanations of Black Boxes by Meaningful Perturbation | Code | 0 |
| Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0 |
| Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0 |
| Explaining a black-box using Deep Variational Information Bottleneck Approach | Code | 0 |
| Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture | Code | 0 |
| Explaining Groups of Points in Low-Dimensional Representations | Code | 0 |
| Explaining How Deep Neural Networks Forget by Deep Visualization | Code | 0 |
| GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0 |
Page 17 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |