SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
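One common family of such explanation methods is model-agnostic feature attribution. As an illustration only (not taken from any paper on this page), the sketch below implements permutation feature importance in pure Python: a feature is important if shuffling its column degrades the model's accuracy. All names (`permutation_importance`, the toy model, the data) are hypothetical.

```python
import random

def permutation_importance(model, X, y, n_features, metric, seed=0):
    """Importance of feature j = drop in the metric after shuffling column j."""
    rng = random.Random(seed)
    baseline = metric(model(X), y)
    importances = []
    for j in range(n_features):
        # Copy the data and shuffle only column j, breaking its
        # association with the target while keeping its distribution.
        X_perm = [row[:] for row in X]
        col = [row[j] for row in X_perm]
        rng.shuffle(col)
        for row, v in zip(X_perm, col):
            row[j] = v
        importances.append(baseline - metric(model(X_perm), y))
    return importances

def accuracy(preds, y):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Toy model: echoes feature 0 as its prediction; feature 1 is pure noise.
model = lambda X: [row[0] for row in X]
X = [[0, 5], [1, 3], [0, 7], [1, 1], [0, 2], [1, 9]]
y = [0, 1, 0, 1, 0, 1]

imps = permutation_importance(model, X, y, n_features=2, metric=accuracy)
# Shuffling the unused feature 1 cannot change predictions, so its
# importance is exactly 0; feature 0 carries all the signal.
```

Because it only needs predictions, the same procedure works for any black-box model, which is why model-agnostic methods like this recur throughout the papers listed below.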

Papers

Showing 221–230 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More | | 0 |
| Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems | | 0 |
| Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion | | 0 |
| Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model | | 0 |
| Data-driven Approach for Static Hedging of Exchange Traded Options | | 0 |
| IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography | | 0 |
| Data-driven model reconstruction for nonlinear wave dynamics | | 0 |
| Model-Agnostic Confidence Intervals for Feature Importance: A Fast and Powerful Approach Using Minipatch Ensembles | | 0 |
| Info-CELS: Informative Saliency Map Guided Counterfactual Explanation | | 0 |
| Expanding Mars Climate Modeling: Interpretable Machine Learning for Modeling MSL Relative Humidity | | 0 |
Page 23 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |