SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
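As a concrete illustration of the kind of explanation method this category covers, below is a minimal permutation-importance sketch: the change in a model's predictions when one feature's values are shuffled is used as a rough measure of that feature's influence. The toy model and data here are hypothetical stand-ins, not drawn from any listed paper; real pipelines would use a trained model and held-out data.

```python
# Minimal permutation-importance sketch (a common post-hoc
# explanation method). Everything below is illustrative.

def model(x):
    # Toy "black box": depends strongly on feature 0, weakly on feature 1.
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, rows, feature):
    """Mean absolute change in prediction when one feature is permuted.

    A simple rotation stands in for a random shuffle so the
    example stays deterministic.
    """
    shuffled = [rows[(i + 1) % len(rows)][feature] for i in range(len(rows))]
    deltas = []
    for i, row in enumerate(rows):
        perturbed = list(row)
        perturbed[feature] = shuffled[i]
        deltas.append(abs(model(perturbed) - model(row)))
    return sum(deltas) / len(deltas)

rows = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
scores = [permutation_importance(model, rows, f) for f in range(2)]
# Feature 0 scores higher, matching the model's stronger dependence on it.
```

In practice a random shuffle (repeated and averaged) replaces the deterministic rotation, and the change is measured in a task metric (e.g. accuracy drop) rather than raw prediction deltas.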

Papers

Showing 426–450 of 537 papers

Title | Status | Hype
AutoScore-Imbalance: An interpretable machine learning tool for development of clinical scores with rare events data | Code | 0
MLIC: A MaxSAT-Based framework for learning interpretable classification rules | Code | 0
Supersparse Linear Integer Models for Optimized Medical Scoring Systems | Code | 0
Quantifying and Learning Linear Symmetry-Based Disentanglement | Code | 0
Modelling wildland fire burn severity in California using a spatial Super Learner approach | Code | 0
Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0
ProtoAttend: Attention-Based Prototypical Learning | Code | 0
Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks | Code | 0
System Design for a Data-driven and Explainable Customer Sentiment Monitor | Code | 0
Explaining How Deep Neural Networks Forget by Deep Visualization | Code | 0
Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0
A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0
Interpretable Explanations of Black Boxes by Meaningful Perturbation | Code | 0
A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning | Code | 0
Verifying Properties of Tsetlin Machines | Code | 0
[Re] Explaining Groups of Points in Low-Dimensional Representations | Code | 0
Explaining Groups of Points in Low-Dimensional Representations | Code | 0
An Interpretable Approach to Load Profile Forecasting in Power Grids using Galerkin-Approximated Koopman Pseudospectra | Code | 0
Signed iterative random forests to identify enhancer-associated transcription factor binding | Code | 0
Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models | Code | 0
Explaining a black-box using Deep Variational Information Bottleneck Approach | Code | 0
Neural Network Pruning by Gradient Descent | Code | 0
Challenging common interpretability assumptions in feature attribution explanations | Code | 0
Regularizing Black-box Models for Improved Interpretability | Code | 0
Page 18 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified