SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 471–480 of 537 papers

Title | Status | Hype
----- | ------ | ----
A hybrid machine learning framework for analyzing human decision making through learning preferences | | 0
Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version) | | 0
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees | Code | 1
Fast classification of small X-ray diffraction datasets using data augmentation and deep neural networks | Code | 0
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model | | 0
Full-Gradient Representation for Neural Network Visualization | Code | 0
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability | Code | 1
Visualization of Convolutional Neural Networks for Monocular Depth Estimation | Code | 0
Open Issues in Combating Fake News: Interpretability as an Opportunity | | 0
Re-Ranking Words to Improve Interpretability of Automatically Generated Topics | Code | 0
Page 48 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | ----- | ------ | ------- | -------- | ------
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified
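For reference, the Top-1 Accuracy metric reported above is the fraction of samples whose highest-scoring predicted class matches the true label. A minimal sketch (the data here is illustrative, not from the benchmark):

```python
def top1_accuracy(scores, labels):
    """scores: per-sample lists of class scores; labels: true class indices."""
    correct = sum(
        1 for row, label in zip(scores, labels)
        # argmax over class scores == predicted class
        if max(range(len(row)), key=row.__getitem__) == label
    )
    return correct / len(labels)

scores = [
    [0.1, 0.7, 0.2],  # predicted class 1
    [0.6, 0.3, 0.1],  # predicted class 0
    [0.2, 0.2, 0.6],  # predicted class 2
    [0.5, 0.4, 0.1],  # predicted class 0 (true label is 1)
]
labels = [1, 0, 2, 1]
print(top1_accuracy(scores, labels))  # 0.75
```

Verified results on this page would recompute such a metric from a model's released code or checkpoints rather than taking the claimed number at face value.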