SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
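One common family of such explanation methods measures how much each input feature contributes to a model's predictions. As a minimal, self-contained sketch (the toy data, the stand-in `predict` model, and the helper names here are illustrative assumptions, not from any paper listed below), permutation importance scores a feature by how much accuracy drops when that feature's column is shuffled:

```python
import random

# Hypothetical toy data: the label depends only on the first feature.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

# A stand-in "model" to be explained: thresholds the first feature.
def predict(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop after shuffling one feature column in place."""
    baseline = accuracy(X, y)
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return baseline - accuracy(shuffled, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(X, y, 1))  # no drop: feature 1 is ignored by the model
```

A near-zero score marks a feature the model ignores; a large drop marks one it relies on. This is only one of many explanation strategies represented in the papers below.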

Papers

Showing 171–180 of 537 papers

Title | Status | Hype
Explainable Representation Learning of Small Quantum States | Code | 0
Genomic Interpreter: A Hierarchical Genomic Deep Neural Network with 1D Shifted Window Transformer | Code | 1
Explainable AI using expressive Boolean formulas | | 0
Learning Transformer Programs | Code | 1
Loss-Optimal Classification Trees: A Generalized Framework and the Logistic Case | Code | 0
Explainable Machine Learning for Categorical and Mixed Data with Lossless Visualization | | 0
Parallel Coordinates for Discovery of Interpretable Machine Learning Models | | 0
Interpretable Machine Learning based on Functional ANOVA Framework: Algorithms and Comparisons | | 0
Reliability Scores from Saliency Map Clusters for Improved Image-based Harvest-Readiness Prediction in Cauliflower | | 0
A Novel Memetic Strategy for Optimized Learning of Classification Trees | | 0
Page 18 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified