SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has come in the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
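As a concrete illustration of the kind of explanation method described above, here is a minimal sketch of permutation feature importance, a common model-agnostic technique: a feature's importance is estimated as the drop in accuracy when that feature's column is shuffled. All names and the toy model below are illustrative, not taken from any specific paper or library.

```python
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=10, seed=0):
    """Average drop in accuracy when one feature's column is shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        # Shuffle only the target feature's column, leaving the rest intact.
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
        drops.append(baseline - accuracy(Xp))
    return sum(drops) / n_repeats

# Toy model: predicts 1 when feature 0 exceeds a threshold; feature 1 is noise.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.1, 0.9], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]] * 5
y = [model(row) for row in X]

# Feature 0 drives the predictions, so shuffling it hurts accuracy;
# the model ignores feature 1, so its importance is exactly zero.
imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
```

Because the toy model reads only feature 0, shuffling feature 1 leaves every prediction unchanged, which is exactly the behavior a faithful explanation method should report.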

Papers

Showing 261–270 of 537 papers

Title | Status | Hype
Interpretable Machine Learning for Power Systems: Establishing Confidence in SHapley Additive exPlanations | | 0
Interpretable machine-learning for predicting molecular weight of PLA based on artificial bee colony optimization algorithm and adaptive neuro-fuzzy inference system | | 0
Interpretable Machine Learning for Privacy-Preserving Pervasive Systems | | 0
Interpretable Machine Learning for Resource Allocation with Application to Ventilator Triage | | 0
Brain Age from the Electroencephalogram of Sleep | | 0
Interpretable Machine Learning for Self-Service High-Risk Decision-Making | | 0
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations | | 0
Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning | | 0
Interpretable Machine Learning for Weather and Climate Prediction: A Survey | | 0
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges | | 0
Page 27 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified