
Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in this area focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 126–150 of 537 papers

Title | Status | Hype
An Experimental Study of Dimension Reduction Methods on Machine Learning Algorithms with Applications to Psychometrics | | 0
Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix) | | 0
Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs | | 0
Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature | | 0
Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning | | 0
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping | | 0
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | | 0
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance | | 0
CNNs for NLP in the Browser: Client-Side Deployment and Visualization Opportunities | | 0
CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq | | 0
A Novel Tropical Geometry-based Interpretable Machine Learning Method: Application in Prognosis of Advanced Heart Failure | | 0
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | | 0
A Novel Memetic Strategy for Optimized Learning of Classification Trees | | 0
AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling | | 0
Explanations for Automatic Speech Recognition | | 0
Classification of Skin Cancer Images using Convolutional Neural Networks | | 0
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability | | 0
Additive Higher-Order Factorization Machines | | 0
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks | | 0
Explaining Recurrent Neural Network Predictions in Sentiment Analysis | | 0
Explanation as a process: user-centric construction of multi-level and multi-modal explanations | | 0
Challenges in Variable Importance Ranking Under Correlation | | 0
A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset | | 0
Interpretable Machine Learning Models for Predicting and Explaining Vehicle Fuel Consumption Anomalies | | 0
Causal rule ensemble approach for multi-arm data | | 0
Page 6 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified