SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 401–410 of 537 papers

Title | Status | Hype
Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion | - | 0
Interpretable Machine Learning: Moving From Mythos to Diagnostics | - | 0
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
On Interpretability and Similarity in Concept-Based Machine Learning | - | 0
Consistent Sparse Deep Learning: Theory and Computation | Code | 0
Interpretable Predictive Maintenance for Hard Drives | - | 0
COLOGNE: Coordinated Local Graph Neighborhood Sampling | Code | 0
[Re] Explaining Groups of Points in Low-Dimensional Representations | Code | 0
Page 41 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified