SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 511–520 of 537 papers

| Title | Status | Hype |
|---|---|---|
| CNNs for NLP in the Browser: Client-Side Deployment and Visualization Opportunities | | 0 |
| Brain Age from the Electroencephalogram of Sleep | | 0 |
| Probing hidden spin order with interpretable machine learning | Code | 0 |
| A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models | | 0 |
| How an Electrical Engineer Became an Artificial Intelligence Researcher, a Multiphase Active Contours Analysis | | 0 |
| Manipulating and Measuring Model Interpretability | Code | 0 |
| Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0 |
| A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning | Code | 0 |
| Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning | | 0 |
| The Doctor Just Won't Accept That! | | 0 |
Page 52 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |