SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
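One widely used family of explanation methods measures how much each input feature contributes to a model's predictions. As a minimal sketch, assuming a toy regression model and synthetic data (neither is from the source), permutation importance scores a feature by how much the model's error grows when that feature's values are shuffled:

```python
# Minimal sketch of permutation importance, a common model-agnostic
# explanation technique. The model and data below are illustrative
# assumptions, not taken from any paper listed on this page.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean increase in MSE when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict(X) - y) ** 2)  # baseline error
    importances = []
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target link
            increases.append(np.mean((predict(Xp) - y) ** 2) - base)
        importances.append(np.mean(increases))
    return np.array(importances)

# Toy model: predictions depend only on the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]

imp = permutation_importance(model, X, y)
```

Because the toy model ignores the second and third features, shuffling them leaves predictions unchanged and their importance is zero, while shuffling the first feature sharply increases the error.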

Papers

Showing 331–340 of 537 papers

Title | Status | Hype
Machine Learning-Based Prediction of Mortality in Geriatric Traumatic Brain Injury Patients | | 0
Machine Learning for Economic Forecasting: An Application to China's GDP Growth | | 0
MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence | | 0
Additive Higher-Order Factorization Machines | | 0
A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models | | 0
Tensor Polynomial Additive Model | | 0
The Contextual Lasso: Sparse Linear Models via Deep Neural Networks | | 0
MAntRA: A framework for model agnostic reliability analysis | | 0
The Doctor Just Won't Accept That! | | 0
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR | | 0
Page 34 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified