SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
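Many of the explanation methods in the papers below are model-agnostic: they probe a trained model from the outside rather than inspecting its internals. As a minimal illustration (not taken from any specific paper on this page), the sketch below computes permutation feature importance on a synthetic dataset with a toy classifier; the data and the `model` function are assumptions for the example only.

```python
import random

random.seed(0)

# Synthetic dataset: the label depends only on feature 0.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(row):
    # Stand-in "trained" classifier that thresholds feature 0
    # (illustrative only; any black-box predictor works here).
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    # Shuffle one feature's column and report the accuracy drop:
    # a large drop means the model relies on that feature.
    baseline = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return baseline - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # sizable drop: feature 0 matters
print(permutation_importance(X, y, 1))  # no drop: feature 1 is ignored
```

Shuffling feature 1 leaves accuracy unchanged because the toy model never reads it, while shuffling feature 0 roughly halves accuracy; that contrast is the explanation the method produces.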

Papers

Showing 401-450 of 537 papers

Title (every paper on this page has an unset Status and a Hype score of 0):

Optimizing Binary Decision Diagrams with MaxSAT for classification
Out-of-Distribution Detection of Melanoma using Normalizing Flows
Overcoming Catastrophic Forgetting by XAI
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data
Parallel Coordinates for Discovery of Interpretable Machine Learning Models
Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Toward More Generalized Malicious URL Detection Models
Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metal-organic frameworks
PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification
Pest presence prediction using interpretable machine learning
Phononic materials with effectively scale-separated hierarchical features using interpretable machine learning
Physically interpretable machine learning algorithm on multidimensional non-linear fields
An Interpretable Machine Learning Approach in Predicting Inflation Using Payments System Data: A Case Study of Indonesia
Towards Analogy-Based Explanations in Machine Learning
An interpretable machine learning approach for ferroalloys consumptions
Towards A Rigorous Science of Interpretable Machine Learning
A comprehensive interpretable machine learning framework for Mild Cognitive Impairment and Alzheimer's disease diagnosis
Predicting Hurricane Evacuation Decisions with Interpretable Machine Learning Models
Predicting Many Crystal Properties via an Adaptive Transformer-based Framework
Predicting Postoperative Stroke in Elderly SICU Patients: An Interpretable Machine Learning Model Using MIMIC Data
Predicting Treatment Response in Body Dysmorphic Disorder with Interpretable Machine Learning
Predictive learning via rule ensembles
Interpretable Machine Learning: Moving From Mythos to Diagnostics
Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems
Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning
An Attention-based Spatio-Temporal Neural Operator for Evolving Physics
Towards Explaining Hyperparameter Optimization via Partial Dependence Plots
Analyzing Country-Level Vaccination Rates and Determinants of Practical Capacity to Administer COVID-19 Vaccines
Towards making NLG a voice for interpretable Machine Learning
Analysis and classification of main risk factors causing stroke in Shanxi Province
Quantifying and Learning Disentangled Representations with Limited Supervision
Towards Probabilistic Dynamic Security Assessment and Enhancement of Large Power Systems
A Learning Theoretic Perspective on Local Explainability
Ranking Facts for Explaining Answers to Elementary Science Questions
Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App
Recent advances in interpretable machine learning using structure-based protein representations
Reconstruction and analysis of negatively buoyant jets with interpretable machine learning
Reducing Optimism Bias in Incomplete Cooperative Games
Variable Selection via Thompson Sampling
Diagnostic-free onboard battery health assessment
Differentiable Genetic Programming for High-dimensional Symbolic Regression
Discovering Interpretable Machine Learning Models in Parallel Coordinates
Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Detecting Heterogeneous Treatment Effect with Instrumental Variables
Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix)
Early screening of potential breakthrough technologies with enhanced interpretability: A patent-specific hierarchical attention network model
Towards Simple Machine Learning Baselines for GNSS RFI Detection

Benchmark Results

#   Model        Metric           Claimed   Verified   Status
1   Q-SENN       Top 1 Accuracy   85.9                 Unverified
2   SLDD-Model   Top 1 Accuracy   85.7                 Unverified