SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
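To make the idea of "explaining a model's predictions" concrete, here is a minimal sketch of one common post-hoc explanation method, permutation feature importance, using scikit-learn. The dataset and model are illustrative choices, not taken from any paper listed below.

```python
# Permutation feature importance: score each feature by how much
# shuffling its values degrades held-out model performance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Higher mean importance => the model relies more on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Many of the papers listed below propose alternatives to or critiques of exactly this kind of attribution-based explanation.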

Papers

Showing 151–200 of 537 papers

Title | Status | Hype
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | Code | 0
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0
Loss-Optimal Classification Trees: A Generalized Framework and the Logistic Case | Code | 0
Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations | Code | 0
Explainable Representation Learning of Small Quantum States | Code | 0
Learning local discrete features in explainable-by-design convolutional neural networks | Code | 0
Kernel Learning Assisted Synthesis Condition Exploration for Ternary Spinel | Code | 0
Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0
Is it Fake? News Disinformation Detection on South African News Websites | Code | 0
Kernel Banzhaf: A Fast and Robust Estimator for Banzhaf Values | Code | 0
Leveraging Predictive Equivalence in Decision Trees | Code | 0
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0
Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices | Code | 0
Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Code | 0
Explaining a black-box using Deep Variational Information Bottleneck Approach | Code | 0
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis | Code | 0
Explaining Groups of Points in Low-Dimensional Representations | Code | 0
Explaining How Deep Neural Networks Forget by Deep Visualization | Code | 0
Explaining Hyperparameter Optimization via Partial Dependence Plots | Code | 0
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal | Code | 0
Explaining Recurrent Neural Network Predictions in Sentiment Analysis | Code | 0
Challenging common interpretability assumptions in feature attribution explanations | Code | 0
Branches: Efficiently Seeking Optimal Sparse Decision Trees with AO* | Code | 0
Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models | Code | 0
A Generic Approach for Reproducible Model Distillation | Code | 0
Interpretable Machine Learning for Survival Analysis | Code | 0
A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0
Interpretable Explanations of Black Boxes by Meaningful Perturbation | Code | 0
Fast classification of small X-ray diffraction datasets using data augmentation and deep neural networks | Code | 0
Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0
iNNvestigate neural networks! | Code | 0
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0
Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture | Code | 0
Supervised Feature Compression based on Counterfactual Analysis | Code | 0
Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
Comparative Document Summarisation via Classification | Code | 0
Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
Ontology-based Interpretable Machine Learning for Textual Data | Code | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Dynamic Model Tree for Interpretable Data Stream Learning | Code | 0
Page 4 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified