SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 426-450 of 537 papers

Title | Hype
Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning | 0
An Attention-based Spatio-Temporal Neural Operator for Evolving Physics | 0
Towards Explaining Hyperparameter Optimization via Partial Dependence Plots | 0
Analyzing Country-Level Vaccination Rates and Determinants of Practical Capacity to Administer COVID-19 Vaccines | 0
Towards making NLG a voice for interpretable Machine Learning | 0
Analysis and classification of main risk factors causing stroke in Shanxi Province | 0
Quantifying and Learning Disentangled Representations with Limited Supervision | 0
Towards Probabilistic Dynamic Security Assessment and Enhancement of Large Power Systems | 0
A Learning Theoretic Perspective on Local Explainability | 0
Ranking Facts for Explaining Answers to Elementary Science Questions | 0
Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App | 0
Recent advances in interpretable machine learning using structure-based protein representations | 0
Reconstruction and analysis of negatively buoyant jets with interpretable machine learning | 0
Reducing Optimism Bias in Incomplete Cooperative Games | 0
Variable Selection via Thompson Sampling | 0
Diagnostic-free onboard battery health assessment | 0
Differentiable Genetic Programming for High-dimensional Symbolic Regression | 0
Discovering Interpretable Machine Learning Models in Parallel Coordinates | 0
Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study | 0
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach | 0
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations | 0
Detecting Heterogeneous Treatment Effect with Instrumental Variables | 0
Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix) | 0
Early screening of potential breakthrough technologies with enhanced interpretability: A patent-specific hierarchical attention network model | 0
Towards Simple Machine Learning Baselines for GNSS RFI Detection | 0
Page 18 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified