SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 71-80 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning | | 0 |
| PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification | | 0 |
| Subgroup Analysis via Model-based Rule Forest | | 0 |
| Neural-ANOVA: Model Decomposition for Interpretable Machine Learning | | 0 |
| OPTDTALS: Approximate Logic Synthesis via Optimal Decision Trees Approach | | 0 |
| Enhanced Infield Agriculture with Interpretable Machine Learning Approaches for Crop Classification | | 0 |
| Advances in Multiple Instance Learning for Whole Slide Image Analysis: Techniques, Challenges, and Future Directions | | 0 |
| Phononic materials with effectively scale-separated hierarchical features using interpretable machine learning | | 0 |
| META-ANOVA: Screening interactions for interpretable machine learning | | 0 |
| Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix) | | 0 |
Page 8 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |