SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
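To make "explaining a prediction" concrete, here is a minimal sketch of one common family of explanation methods: gradient-times-input feature attribution. The model (a logistic regression), its weights, and the input example are all hypothetical stand-ins chosen for illustration, not taken from any paper listed below.

```python
import numpy as np

# Hypothetical learned logistic regression: weights, bias, and one
# input example are assumed values for illustration only.
w = np.array([1.5, -2.0, 0.3])   # learned weights (assumed)
b = 0.1                          # bias (assumed)
x = np.array([0.8, 0.5, -1.2])   # one input example (assumed)

def predict(x):
    """Probability of the positive class under the linear model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

p = predict(x)

# For this model, d p / d x_i = p * (1 - p) * w_i, so the
# gradient-times-input attribution for feature i is:
attributions = p * (1 - p) * w * x

# Rank features by how strongly they influenced this prediction;
# positive scores push the predicted probability up.
ranking = np.argsort(-np.abs(attributions))
for i in ranking:
    print(f"feature {i}: attribution {attributions[i]:+.4f}")
```

The attribution scores are local: they explain this one prediction, not the model globally, which is the distinction many of the papers below build on.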

Papers

Showing 401–410 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Style-transfer counterfactual explanations: An application to mortality prevention of ICU patients | Code | 0 |
| Full-Gradient Representation for Neural Network Visualization | Code | 0 |
| Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0 |
| Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0 |
| Understanding Neural Networks Through Deep Visualization | Code | 0 |
| Forecasting SEP Events During Solar Cycles 23 and 24 Using Interpretable Machine Learning | Code | 0 |
| Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0 |
| AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data | Code | 0 |
| Supervised Feature Compression based on Counterfactual Analysis | Code | 0 |
| MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis | Code | 0 |
Page 41 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |