SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
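Most of the methods catalogued below attach an explanation to a trained model's predictions. As a concrete illustration (not taken from any specific paper in this list), a minimal sketch of permutation feature importance — a common model-agnostic explanation technique — using only NumPy and a stand-in predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

def predict(X):
    """Stand-in for any fitted black-box model."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def permutation_importance(predict, X, y, n_repeats=10, rng=rng):
    """Increase in MSE when a feature is shuffled: larger = more important."""
    base_error = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            errors.append(np.mean((predict(Xp) - y) ** 2))
        importances[j] = np.mean(errors) - base_error
    return importances

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates; feature 2 is ~0
```

Because the predictor ignores feature 2, shuffling it leaves the error unchanged, so its importance is exactly zero — the explanation recovers what the model actually uses.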

Papers

Showing 176–200 of 537 papers

Title | Status | Hype
----- | ------ | ----
iNNvestigate neural networks! | Code | 0
An Interpretable Approach to Load Profile Forecasting in Power Grids using Galerkin-Approximated Koopman Pseudospectra | Code | 0
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0
Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture | Code | 0
Fast classification of small X-ray diffraction datasets using data augmentation and deep neural networks | Code | 0
Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0
Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0
Supervised Feature Compression based on Counterfactual Analysis | Code | 0
MLIC: A MaxSAT-Based framework for learning interpretable classification rules | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0
Comorbid anxiety predicts lower odds of depression improvement during smartphone-delivered psychotherapy | Code | 0
Comparative Document Summarisation via Classification | Code | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0
AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data | Code | 0
Page 8 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
- | ----- | ------ | ------- | -------- | ------
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified