SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
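Many of the explanation methods collected on this page attribute a model's prediction to its input features. As a concrete illustration, the sketch below implements permutation feature importance, one standard attribution technique; the toy data and stand-in model are illustrative assumptions, not drawn from any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

def model(X):
    # Stand-in for a trained model (here, the true function).
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

def permutation_importance(model, X, y):
    """Importance of feature j = increase in error when column j is shuffled."""
    base = mse(model(X), y)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(mse(model(Xp), y) - base)
    return scores

scores = permutation_importance(model, X, y)
# Feature 0 should receive the largest score; feature 2 close to zero.
```

Shuffling a column breaks its association with the target while leaving its marginal distribution intact, so the resulting error increase is a model-agnostic measure of how much the model relies on that feature.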

Papers

Showing 51–60 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Optimal Counterfactual Explanations in Tree Ensembles | Code | 1 |
| DISSECT: Disentangled Simultaneous Explanations via Concept Traversals | Code | 1 |
| Interpretable machine learning for high-dimensional trajectories of aging health | Code | 1 |
| Do Feature Attribution Methods Correctly Attribute Features? | Code | 1 |
| Grouped Feature Importance and Combined Features Effect Plot | Code | 1 |
| NuCLS: A scalable crowdsourcing, deep learning approach and dataset for nucleus classification, localization and segmentation | Code | 1 |
| TorchPRISM: Principal Image Sections Mapping, a novel method for Convolutional Neural Network features visualization | Code | 1 |
| Anomaly Detection in Time Series with Triadic Motif Fields and Application in Atrial Fibrillation ECG Classification | Code | 1 |
| Neural Prototype Trees for Interpretable Fine-grained Image Recognition | Code | 1 |
| Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning | Code | 1 |
Page 6 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |