SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
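Many of the listed papers study local explanations, i.e. attributing a single prediction to its input features. As a minimal, hypothetical sketch (not taken from any paper above), for a linear model y = w·x + b each feature's local contribution is simply w[i] * x[i], a common attribution baseline:

```python
# Minimal sketch of local feature attribution for a linear model.
# Assumption: the model is y = w.x + b, so each feature's contribution
# to the prediction at point x is w[i] * x[i].

def linear_attributions(w, x, b=0.0):
    """Return per-feature contributions and the resulting prediction."""
    contribs = [wi * xi for wi, xi in zip(w, x)]
    prediction = sum(contribs) + b
    return contribs, prediction

# Hypothetical two-feature model with weights 2.0 and -1.0
contribs, pred = linear_attributions([2.0, -1.0], [3.0, 4.0], b=0.5)
print(contribs)  # [6.0, -4.0]
print(pred)      # 2.5
```

More general methods (e.g. feature-attribution techniques evaluated in several papers below) extend this idea to nonlinear models, where contributions must be estimated rather than read off the weights.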

Papers

Showing 131–140 of 537 papers

| Title | Status | Hype |
|---|---|---|
| Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0 |
| How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0 |
| Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0 |
| Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0 |
| An Interpretable Approach to Load Profile Forecasting in Power Grids using Galerkin-Approximated Koopman Pseudospectra | Code | 0 |
| Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models | Code | 0 |
| A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning | Code | 0 |
| iNNvestigate neural networks! | Code | 0 |
| Challenging common interpretability assumptions in feature attribution explanations | Code | 0 |
| CeFlow: A Robust and Efficient Counterfactual Explanation Framework for Tabular Data using Normalizing Flows | Code | 0 |
Page 14 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |