SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
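As a concrete illustration of a post-hoc explanation method of the kind the description refers to, below is a minimal sketch of permutation feature importance: the drop in accuracy after shuffling one feature's values. The model and data here are toy assumptions, not drawn from any paper listed on this page.

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Drop in accuracy after shuffling one feature column.

    A large drop means the model relied on that feature;
    a drop near zero means the feature was irrelevant.
    """
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Toy model: predicts class 1 iff the first feature is positive.
model = lambda x: int(x[0] > 0)
X = [[1, 5], [-1, 5], [2, -3], [-2, -3]]
y = [1, 0, 1, 0]

# The second feature is ignored by the model, so its importance is 0.0.
print(permutation_importance(model, X, y, feature=0))
print(permutation_importance(model, X, y, feature=1))  # 0.0
```

A single shuffle gives a noisy estimate; in practice the shuffle is repeated several times and the drops are averaged.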

Papers

Showing 151–175 of 537 papers

Title | Status | Hype
Is it Fake? News Disinformation Detection on South African News Websites | Code | 0
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0
Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices | Code | 0
Individualized Prediction of COVID-19 Adverse outcomes with MLHO | Code | 0
Hyperspectral Blind Unmixing using a Double Deep Image Prior | Code | 0
iNNvestigate neural networks! | Code | 0
Interpretable Explanations of Black Boxes by Meaningful Perturbation | Code | 0
How Your Location Relates to Health: Variable Importance and Interpretable Machine Learning for Environmental and Sociodemographic Data | Code | 0
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | Code | 0
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal | Code | 0
Branches: Efficiently Seeking Optimal Sparse Decision Trees with AO* | Code | 0
A Generic Approach for Reproducible Model Distillation | Code | 0
Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0
Explainable Representation Learning of Small Quantum States | Code | 0
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Explaining How Deep Neural Networks Forget by Deep Visualization | Code | 0
Explaining Hyperparameter Optimization via Partial Dependence Plots | Code | 0
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0
GENESIM: genetic extraction of a single, interpretable model | Code | 0
Page 7 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | – | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | – | Unverified
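The claimed values above are percentages of Top-1 Accuracy: the fraction of examples whose highest-scoring class matches the true label. A minimal sketch of the metric, with illustrative scores and labels:

```python
def top1_accuracy(scores, labels):
    """Percentage of examples whose argmax class equals the true label.

    scores: list of per-class score lists, one per example
    labels: true class indices
    """
    correct = sum(
        max(range(len(s)), key=s.__getitem__) == t
        for s, t in zip(scores, labels)
    )
    return 100.0 * correct / len(labels)

scores = [[0.1, 0.7, 0.2],   # argmax = 1
          [0.5, 0.3, 0.2],   # argmax = 0
          [0.2, 0.2, 0.6]]   # argmax = 2
labels = [1, 0, 1]
print(top1_accuracy(scores, labels))  # 2 of 3 correct -> 66.67
```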