SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in this area takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
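Many explanation methods are model-agnostic: they probe a trained model from the outside rather than inspecting its internals. As a minimal illustration (not tied to any specific paper listed below), the sketch here computes permutation feature importance, one of the simplest such techniques: a feature matters if randomly shuffling its column degrades the model's held-out accuracy. The dataset, model, and hyperparameters are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = drop in accuracy when column j is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
            scores.append(model.score(X_perm, y))
        importances[j] = baseline - np.mean(scores)
    return importances

# Fit any black-box model, then explain it on held-out data.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te)
print("most important feature index:", int(np.argmax(imp)))
```

Because it only calls `model.score`, the same function works unchanged for any classifier, which is what makes post-hoc explanations of this kind broadly applicable.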

Papers

Showing 431–440 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Offensive Language Detection Explained | Code | 0 |
| Revealing the Phase Diagram of Kitaev Materials by Machine Learning: Cooperation and Competition between Spin Liquids | Code | 0 |
| Neural Additive Models: Interpretable Machine Learning with Neural Nets | Code | 1 |
| Adversarial Attacks and Defenses: An Interpretation Perspective | | 0 |
| From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning | | 0 |
| Understanding the decisions of CNNs: An in-model approach | Code | 1 |
| A machine learning methodology for real-time forecasting of the 2019-2020 COVID-19 outbreak using Internet searches, news alerts, and estimates from mechanistic models | Code | 0 |
| BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1 |
| Ontology-based Interpretable Machine Learning for Textual Data | Code | 0 |
| Born-Again Tree Ensembles | Code | 1 |
Page 44 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified |