SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
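As a minimal sketch of what "explaining a prediction" can mean, consider additive feature attribution: for a linear (or generalized additive) model, each feature's contribution is simply its weight times its value, so a prediction decomposes exactly into per-feature explanations. The feature names and weights below are hypothetical, chosen only for illustration.

```python
# Hypothetical linear model: weights and bias are made up for illustration.
weights = {"age": 0.3, "bmi": 0.5, "glucose": 1.2}
bias = -0.4

def explain(x):
    """Return the model score and the per-feature contributions.

    For a linear model the contributions sum (with the bias) to the
    score exactly, which is what makes the explanation faithful.
    """
    contributions = {f: weights[f] * x[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, contribs = explain({"age": 1.0, "bmi": 0.5, "glucose": 2.0})
# score = -0.4 + 0.3 + 0.25 + 2.4 = 2.55
```

Post-hoc methods such as LIME or SHAP generalize this idea to non-linear models by fitting or estimating local additive attributions around a single prediction.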

Papers

Showing 191-200 of 537 papers

Title | Status | Hype
IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography | | 0
Integrating White and Black Box Techniques for Interpretable Machine Learning | | 0
Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion | | 0
GAMformer: In-Context Learning for Generalized Additive Models | | 0
Cardiotocogram Biomedical Signal Classification and Interpretation for Fetal Health Evaluation | | 0
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | | 0
Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning | | 0
Generalized Groves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance | | 0
An Interpretable Machine Learning Framework to Understand Bikeshare Demand before and during the COVID-19 Pandemic in New York City | | 0
Explainable AI using expressive Boolean formulas | | 0
Page 20 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified