SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
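To make the idea of "explaining predictions" concrete, the sketch below implements permutation feature importance, one common model-agnostic explanation technique: a feature's importance is the drop in accuracy when that feature's values are shuffled. This is an illustrative example only, not the method of any specific paper listed below; the toy model and data are invented for demonstration.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each feature = baseline accuracy minus the mean
    accuracy after randomly shuffling that feature's column."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature-target relationship
            scores.append(np.mean(model(Xp) == y))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy setup: the model predicts class 1 when feature 0 is positive,
# so feature 0 is informative and feature 1 is pure noise.
model = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

imp = permutation_importance(model, X, y)
```

Here `imp[0]` is large (shuffling the informative feature destroys accuracy) while `imp[1]` stays near zero, which is exactly the kind of per-feature explanation many of the papers below build on or critique.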

Papers

Showing 361–370 of 537 papers (page 37 of 54)

Title | Status | Hype
Towards Rigorous Interpretations: a Formalisation of Feature Attribution | Code | 0
Grouped Feature Importance and Combined Features Effect Plot | Code | 1
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0
Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure | Code | 0
Out-of-Distribution Detection of Melanoma using Normalizing Flows | | 0
IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography | | 0
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges | | 0
Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion | | 0
Interpretable Machine Learning: Moving From Mythos to Diagnostics | | 0
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified