SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
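As a minimal illustration (not drawn from the page above, and using hypothetical weights and inputs), the simplest additive explanation of a prediction is the per-feature contribution of a linear model, where each feature's weight times its value sums, together with the bias, exactly to the model's output:

```python
import numpy as np

def linear_attributions(weights, x, bias=0.0):
    """Per-feature contributions to a linear model's output.

    The contributions plus the bias sum exactly to the prediction,
    which is the simplest case of an additive explanation.
    """
    contributions = weights * x            # elementwise w_i * x_i
    prediction = contributions.sum() + bias
    return contributions, prediction

# Hypothetical learned weights and input (for illustration only)
w = np.array([0.5, -2.0, 1.0])
x = np.array([4.0, 1.0, 3.0])
contrib, pred = linear_attributions(w, x, bias=0.5)
# contrib = [2.0, -2.0, 3.0]; pred = 3.5
```

Many of the methods catalogued here generalize this idea, attributing a complex model's prediction back to its input features in an additive or rule-based form.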

Papers

Showing 91-100 of 537 papers

Title | Status | Hype
Efficient Exploration of the Rashomon Set of Rule Set Models | Code | 0
Tensor Polynomial Additive Model | - | 0
Branches: Efficiently Seeking Optimal Sparse Decision Trees with AO* | Code | 0
Learning Discrete Concepts in Latent Hierarchical Models | - | 0
A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning | - | 0
Predicting Many Crystal Properties via an Adaptive Transformer-based Framework | - | 0
Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear Regression | Code | 0
Review of Interpretable Machine Learning Models for Disease Prognosis | - | 0
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines | Code | 0
Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? | Code | 0
Page 10 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified