SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
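As a concrete illustration of "explaining the predictions of machine learning models," the sketch below computes permutation feature importance: shuffle one feature at a time and measure how much a held-out score degrades. This is only one representative interpretability technique, not a method from any specific paper listed here; the dataset and model choices are illustrative assumptions.

```python
# Illustrative sketch: permutation feature importance as a simple
# model-explanation technique. Dataset and model are assumptions,
# not tied to any paper in the list below.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Shuffle each feature 10 times on the test split and record the
# mean drop in accuracy; larger drops indicate more important features.
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)

top = np.argsort(result.importances_mean)[::-1][:3]
for i in top:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Feature-level importance scores like these are a global explanation; many of the papers listed below instead pursue local (per-prediction) or inherently interpretable models such as prototype networks and additive models.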

Papers

Showing 21–30 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Interpreting and Correcting Medical Image Classification with PIP-Net | Code | 1 |
| Genomic Interpreter: A Hierarchical Genomic Deep Neural Network with 1D Shifted Window Transformer | Code | 1 |
| Learning Transformer Programs | Code | 1 |
| ExeKGLib: Knowledge Graphs-Empowered Machine Learning Analytics | Code | 1 |
| Take 5: Interpretable Image Classification with a Handful of Features | Code | 1 |
| Interpretable machine learning for time-to-event prediction in medicine and healthcare | Code | 1 |
| Interpretable and intervenable ultrasonography-based machine learning models for pediatric appendicitis | Code | 1 |
| Structural Neural Additive Models: Enhanced Interpretable Machine Learning | Code | 1 |
| Learning Support and Trivial Prototypes for Interpretable Image Classification | Code | 1 |
| Mixture of Decision Trees for Interpretable Machine Learning | Code | 1 |
Page 3 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |