SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
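As an illustration of the kind of post-hoc explanation method many of the papers below study, here is a minimal permutation-importance sketch: shuffle one feature column at a time and measure how much the model's accuracy drops. The toy model, data, and function names are hypothetical, invented for this sketch; they do not come from any listed paper.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when one feature column is shuffled.
    Larger drops suggest the model relies more on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-target association
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model: classifies by the first feature only, ignores the second.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
imps = permutation_importance(model, X, y)
# Feature 0 should receive positive importance; feature 1, which the
# model ignores, should receive zero.
```

This is the model-agnostic flavor of explanation: it needs only a `predict` function, so it applies equally to black-box and glass-box models, which is one reason variants of it recur across the interpretability literature.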

Papers

Showing 101–150 of 537 papers

Title (Hype)

GAMformer: In-Context Learning for Generalized Additive Models (0)
Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning (0)
A Semiparametric Approach to Interpretable Machine Learning (0)
Advances in Multiple Instance Learning for Whole Slide Image Analysis: Techniques, Challenges, and Future Directions (0)
A Scalable Inference Method For Large Dynamic Economic Systems (0)
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance (0)
A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning (0)
A Learning Theoretic Perspective on Local Explainability (0)
A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models (0)
Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More (0)
From Correlation to Causation: Formalizing Interpretable Machine Learning as a Statistical Process (0)
Data-driven Approach for Static Hedging of Exchange Traded Options (0)
Data-driven model reconstruction for nonlinear wave dynamics (0)
Data Model Design for Explainable Machine Learning-based Electricity Applications (0)
Data Representing Ground-Truth Explanations to Evaluate XAI Methods (0)
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning (0)
Decoding Urban-health Nexus: Interpretable Machine Learning Illuminates Cancer Prevalence based on Intertwined City Features (0)
Deducing neighborhoods of classes from a fitted model (0)
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey (0)
Comparing interpretability and explainability for feature selection (0)
Detecting Heterogeneous Treatment Effect with Instrumental Variables (0)
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach (0)
Are machine learning interpretations reliable? A stability study on global interpretations (0)
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning (0)
Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study (0)
An Experimental Study of Dimension Reduction Methods on Machine Learning Algorithms with Applications to Psychometrics (0)
Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix) (0)
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping (0)
Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning (0)
Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature (0)
Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations (0)
Full interpretable machine learning in 2D with inline coordinates (0)
CNNs for NLP in the Browser: Client-Side Deployment and Visualization Opportunities (0)
CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq (0)
A Novel Tropical Geometry-based Interpretable Machine Learning Method: Application in Prognosis of Advanced Heart Failure (0)
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data (0)
A Novel Memetic Strategy for Optimized Learning of Classification Trees (0)
AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling (0)
Classification of Skin Cancer Images using Convolutional Neural Networks (0)
Explanation as a process: user-centric construction of multi-level and multi-modal explanations (0)
Additive Higher-Order Factorization Machines (0)
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks (0)
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability (0)
Explanations for Automatic Speech Recognition (0)
Extending Class Activation Mapping Using Gaussian Receptive Field (0)
Challenges in Variable Importance Ranking Under Correlation (0)
A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset (0)
Interpretable Machine Learning Models for Predicting and Explaining Vehicle Fuel Consumption Anomalies (0)
Causal rule ensemble approach for multi-arm data (0)
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data (0)
Page 3 of 11

Benchmark Results

#  Model       Metric          Claimed  Verified  Status
1  Q-SENN      Top 1 Accuracy  85.9     -         Unverified
2  SLDD-Model  Top 1 Accuracy  85.7     -         Unverified