SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
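As a concrete illustration of such explanation methods, permutation feature importance measures how much a model's error grows when one feature's values are randomly shuffled: features the model relies on produce a large error increase, irrelevant features produce none. The sketch below is a minimal pure-Python illustration on a toy linear model; the function and variable names are illustrative and not taken from any paper listed on this page.

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=10, seed=0):
    """Mean increase in squared error when one feature column is shuffled."""
    rng = random.Random(seed)
    base_err = sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(X)
    increases = []
    for _ in range(n_repeats):
        # Shuffle only the chosen feature column, leaving the rest intact.
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        err = sum((model(row) - t) ** 2 for row, t in zip(X_perm, y)) / len(X)
        increases.append(err - base_err)
    return sum(increases) / n_repeats

# Toy model that depends only on the first feature.
model = lambda row: 3.0 * row[0]
X = [[float(i), float(i % 5)] for i in range(50)]
y = [model(row) for row in X]

imp0 = permutation_importance(model, X, y, feature_idx=0)
imp1 = permutation_importance(model, X, y, feature_idx=1)
# Shuffling the informative feature hurts the model; the ignored one does not.
```

Model-agnostic explainers of this kind treat the model as a black box, which is why the sketch only ever calls `model(row)` and never inspects its internals.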

Papers

Showing 401–450 of 537 papers

Title | Status | Hype
--- | --- | ---
Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metal-organic frameworks | — | 0
Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task | Code | 1
Quantifying and Learning Disentangled Representations with Limited Supervision | — | 0
Accurate and Interpretable Machine Learning for Transparent Pricing of Health Insurance Plans | — | 0
Interpretable Machine Learning Approaches to Prediction of Chronic Homelessness | Code | 1
Deducing neighborhoods of classes from a fitted model | — | 0
Socio-economic disparities and COVID-19 in the USA | Code | 0
Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction | Code | 1
Learning Game-Theoretic Models of Multiagent Trajectories Using Implicit Layers | Code | 1
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
On the Use of Interpretable Machine Learning for the Management of Data Quality | — | 0
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | — | 0
An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models | — | 0
DeepNNK: Explaining deep models and their generalization using polytope interpolation | Code | 0
Modern Hopfield Networks and Attention for Immune Repertoire Classification | Code | 1
Relative Feature Importance | Code | 0
On quantitative aspects of model interpretability | — | 0
Variable Selection via Thompson Sampling | — | 0
Causality Learning: A New Perspective for Interpretable Machine Learning | — | 0
Generalized and Scalable Optimal Sparse Decision Trees | Code | 1
How Interpretable and Trustworthy are GAMs? | Code | 1
A Semiparametric Approach to Interpretable Machine Learning | — | 0
Using an interpretable Machine Learning approach to study the drivers of International Migration | — | 0
Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images | Code | 1
Physically interpretable machine learning algorithm on multidimensional non-linear fields | — | 0
Towards Analogy-Based Explanations in Machine Learning | — | 0
Interpreting Neural Ranking Models using Grad-CAM | — | 0
In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction | Code | 1
Interpretable Learning-to-Rank with Generalized Additive Models | — | 0
Explaining How Deep Neural Networks Forget by Deep Visualization | Code | 0
Offensive Language Detection Explained | Code | 0
Revealing the Phase Diagram of Kitaev Materials by Machine Learning: Cooperation and Competition between Spin Liquids | Code | 0
Neural Additive Models: Interpretable Machine Learning with Neural Nets | Code | 1
Adversarial Attacks and Defenses: An Interpretation Perspective | — | 0
From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | — | 0
Understanding the decisions of CNNs: An in-model approach | Code | 1
A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0
BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1
Ontology-based Interpretable Machine Learning for Textual Data | Code | 0
Born-Again Tree Ensembles | Code | 1
Interpretable machine learning models: a physics-based view | — | 0
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications | — | 0
Explaining Groups of Points in Low-Dimensional Representations | Code | 0
Interpretability of machine learning based prediction models in healthcare | — | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | — | 0
Interpretable Machine Learning Model for Early Prediction of Mortality in Elderly Patients with Multiple Organ Dysfunction Syndrome (MODS): a Multicenter Retrospective Study and Cross Validation | — | 0
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency | — | 0
Extending Class Activation Mapping Using Gaussian Receptive Field | — | 0
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | — | 0
Page 9 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified