SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
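The explanation methods the paragraph refers to include model-agnostic techniques such as permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops. A minimal, dependency-free sketch (the toy model, data, and all names here are illustrative assumptions, not taken from the page):

```python
import random

random.seed(0)

# Hypothetical toy model: it only ever looks at feature 0,
# so feature 1 carries no signal by construction.
def model(row):
    return 1 if row[0] > 0.5 else 0

data = [[random.random(), random.random()] for _ in range(200)]
labels = [model(row) for row in data]

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(feature):
    """Shuffle one feature's column and report the resulting accuracy drop."""
    col = [r[feature] for r in data]
    random.shuffle(col)
    permuted = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(data, col)]
    return accuracy(data) - accuracy(permuted)

print(permutation_importance(0))  # large drop: the model relies on feature 0
print(permutation_importance(1))  # exactly 0.0: feature 1 is never used
```

A large drop flags a feature the model depends on; a drop of zero flags one it ignores, which is the kind of global explanation several of the papers listed below study.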

Papers

Showing 121–130 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Detecting Heterogeneous Treatment Effect with Instrumental Variables | | 0 |
| Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach | | 0 |
| Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | | 0 |
| Automation for Interpretable Machine Learning Through a Comparison of Loss Functions to Regularisers | | 0 |
| Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study | | 0 |
| Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | | 0 |
| Comparing interpretability and explainability for feature selection | | 0 |
| Are machine learning interpretations reliable? A stability study on global interpretations | | 0 |
| Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | | 0 |
| An Experimental Study of Dimension Reduction Methods on Machine Learning Algorithms with Applications to Psychometrics | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |