SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has come in the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
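Many of the papers listed below propose or study model-agnostic explanation methods. As a concrete illustration of one widely used technique of this kind, here is a minimal sketch of permutation feature importance: shuffle one feature's column, measure how much the model's score drops, and treat larger drops as greater reliance on that feature. This is a generic sketch, not the method of any specific paper on this page; the function and parameter names are illustrative.

```python
import numpy as np

def permutation_importance(model, X, y, score_fn, n_repeats=5, seed=0):
    """Mean drop in score when each feature column is shuffled.

    A larger drop means the model relies more heavily on that feature.
    `model` only needs a `predict(X)` method; `score_fn(y, preds)` should
    return a higher-is-better scalar (e.g. accuracy).
    """
    rng = np.random.default_rng(seed)
    baseline = score_fn(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature/target relationship
            drops.append(baseline - score_fn(y, model.predict(Xp)))
        importances[j] = np.mean(drops)
    return importances
```

Because it only needs black-box predictions, the same sketch works for any classifier or regressor; scikit-learn ships a production version as `sklearn.inspection.permutation_importance`.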

Papers

Showing 301–310 of 537 papers

Title | Status | Hype
Self-service Data Classification Using Interactive Visualization and Interpretable Machine Learning | - | 0
Sequencing Silicates in the IRS Debris Disk Catalog I: Methodology for Unsupervised Clustering | - | 0
Severity and Mortality Prediction Models to Triage Indian COVID-19 Patients | - | 0
Shapley variable importance cloud for machine learning models | - | 0
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations | - | 0
SkinCon: A skin disease dataset densely annotated by domain experts for fine-grained model debugging and analysis | - | 0
Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity | - | 0
Structural Node Embeddings with Homomorphism Counts | - | 0
Subgroup Analysis via Model-based Rule Forest | - | 0
SynHING: Synthetic Heterogeneous Information Network Generation for Graph Learning and Explanation | - | 0
Page 31 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified