SOTAVerified

Interpretable Machine Learning

The goal of interpretable machine learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
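As a minimal illustration of "explaining a prediction", consider an intrinsically interpretable model: a linear model's prediction decomposes exactly into per-feature contributions. The coefficients, feature names, and input values below are hypothetical, chosen only to show the decomposition.

```python
# Minimal sketch: for a linear model, each feature's contribution to a
# prediction is simply weight * feature value, so the explanation is exact.
# All weights, names, and inputs below are illustrative assumptions.

def explain_linear_prediction(weights, bias, x, feature_names):
    """Return (prediction, {feature: contribution}) for a linear model."""
    contributions = {
        name: w * xi for name, w, xi in zip(feature_names, weights, x)
    }
    prediction = bias + sum(contributions.values())
    return prediction, contributions

weights = [0.8, -0.5, 0.1]   # hypothetical learned coefficients
bias = 0.2
x = [1.0, 2.0, 3.0]          # one input instance to explain
names = ["age", "dose", "weight"]

pred, contribs = explain_linear_prediction(weights, bias, x, names)
# contribs maps each feature to its additive share of the prediction,
# e.g. "dose" contributes -0.5 * 2.0 = -1.0
```

Post-hoc methods such as LIME or SHAP aim to recover a similar additive attribution locally for models that are not linear.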

Papers

Showing 511–520 of 537 papers

Title | Status | Hype
Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning | | 0
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance | | 0
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | | 0
Generally-Occurring Model Change for Robust Counterfactual Explanations | | 0
Comparing interpretability and explainability for feature selection | | 0
AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling | | 0
An Experimental Study of Dimension Reduction Methods on Machine Learning Algorithms with Applications to Psychometrics | | 0
Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images | | 0
CNNs for NLP in the Browser: Client-Side Deployment and Visualization Opportunities | | 0
CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq | | 0
Page 52 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified