SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
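Many of the explanation methods catalogued below are post-hoc and model-agnostic. As a concrete illustration (not taken from any paper on this page), here is a minimal sketch of permutation feature importance: a feature's importance is measured as the drop in a score when that feature's column is shuffled. All names and the toy model are illustrative assumptions.

```python
import numpy as np

def permutation_importance(model, X, y, score, n_repeats=5, seed=0):
    """Importance of feature j = baseline score minus the mean score
    after shuffling column j (shuffling breaks the feature-target link)."""
    rng = np.random.default_rng(seed)
    baseline = score(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute only feature j
            scores.append(score(y, model(Xp)))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy setup: the target depends only on feature 0.
X = np.random.default_rng(1).normal(size=(200, 3))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]           # a "perfect" model of y
neg_mse = lambda y, p: -np.mean((y - p) ** 2)  # higher is better

imp = permutation_importance(model, X, y, neg_mse)
# feature 0 receives a large positive importance; features 1 and 2 get ~0
```

The same idea scales to any fitted model: pass its `predict` function as `model` and a task-appropriate score, and features the model never uses come out with near-zero importance.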

Papers

Showing 291–300 of 537 papers

Title | Status | Hype
Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version) | | 0
Reliability Scores from Saliency Map Clusters for Improved Image-based Harvest-Readiness Prediction in Cauliflower | | 0
Rethinking Interpretability in the Era of Large Language Models | | 0
Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning | | 0
Revealing the CO2 emission reduction of ridesplitting and its determinants based on real-world data | | 0
Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images | | 0
Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena | | 0
Segmentation of Cardiac Structures via Successive Subspace Learning with Saab Transform from Cine MRI | | 0
Selecting Interpretability Techniques for Healthcare Machine Learning models | | 0
Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified