
Interpretable Machine Learning

Interpretable Machine Learning aims to enable oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 371–380 of 537 papers

Title | Status | Hype
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
Consistent Sparse Deep Learning: Theory and Computation | Code | 0
On Interpretability and Similarity in Concept-Based Machine Learning | | 0
NuCLS: A scalable crowdsourcing, deep learning approach and dataset for nucleus classification, localization and segmentation | Code | 1
Interpretable Predictive Maintenance for Hard Drives | | 0
COLOGNE: Coordinated Local Graph Neighborhood Sampling | Code | 0
[Re] Explaining Groups of Points in Low-Dimensional Representations | Code | 0
TorchPRISM: Principal Image Sections Mapping, a novel method for Convolutional Neural Network features visualization | Code | 1
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified