SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
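One common family of explanation methods measures how much a model's error grows when a single feature is scrambled (permutation importance). The sketch below is a minimal, hypothetical illustration of the idea using a toy dataset and a fixed stand-in predictor; it is not drawn from any of the listed papers.

```python
import random

# Toy dataset (hypothetical): y depends strongly on x0, weakly on x1,
# and not at all on x2.
random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * row[0] + 0.5 * row[1] for row in X]

def predict(row):
    """Stand-in for a trained model; here it simply knows the true rule."""
    return 3.0 * row[0] + 0.5 * row[1]

def mse(X, y):
    return sum((predict(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, trials=10):
    """Average increase in MSE when one feature column is shuffled."""
    base = mse(X, y)
    increases = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        random.shuffle(col)
        # Rebuild rows with the shuffled column in place.
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        increases.append(mse(Xp, y) - base)
    return sum(increases) / trials

scores = [permutation_importance(X, y, j) for j in range(3)]
```

With this setup, shuffling x0 should hurt the error most, x1 only slightly, and x2 not at all, so the scores rank the features by how much the model actually relies on them.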

Papers

Showing 71–80 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Learning Support and Trivial Prototypes for Interpretable Image Classification | Code | 1 |
| Learning Game-Theoretic Models of Multiagent Trajectories Using Implicit Layers | Code | 1 |
| Born-Again Tree Ensembles | Code | 1 |
| Interpretable machine learning: definitions, methods, and applications | Code | 1 |
| Interpretable Machine Learning for TabPFN | Code | 1 |
| BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1 |
| Interpreting and Correcting Medical Image Classification with PIP-Net | Code | 1 |
| Interpreting Machine Learning Models for Room Temperature Prediction in Non-domestic Buildings | Code | 1 |
| Cross- and Intra-image Prototypical Learning for Multi-label Disease Diagnosis and Interpretation | Code | 1 |
| Towards Better Understanding Attribution Methods | Code | 1 |
Page 8 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |