SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
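One common family of explanation methods attributes a single prediction to its input features by perturbing each feature and measuring how the model's output changes. The sketch below is illustrative only; the function names and the toy linear "model" are assumptions for the example, not from any paper or library listed here.

```python
# Minimal sketch of local feature attribution by perturbation.
# All names here are illustrative, not tied to a specific library.

def perturbation_importance(predict, x, baseline):
    """Score each feature of a single input x by how much replacing it
    with a baseline value changes the model's prediction."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]  # knock out feature i
        scores.append(base_pred - predict(perturbed))
    return scores

# Toy stand-in for a trained model: a fixed linear scorer.
def toy_model(features):
    weights = [2.0, -1.0, 0.5]
    return sum(w * f for w, f in zip(weights, features))

scores = perturbation_importance(toy_model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
# For a linear model, each score recovers that feature's contribution.
```

For a linear model these scores equal the weights exactly; for nonlinear models they give a local, approximate attribution, which is the setting most of the papers below study.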

Papers

Showing 111-120 of 537 papers

Title | Status | Hype
Expert Study on Interpretable Machine Learning Models with Missing Data |  | 0
Data-driven Approach for Static Hedging of Exchange Traded Options |  | 0
Data-driven model reconstruction for nonlinear wave dynamics |  | 0
Data Model Design for Explainable Machine Learning-based Electricity Applications |  | 0
Data Representing Ground-Truth Explanations to Evaluate XAI Methods |  | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning |  | 0
Decoding Urban-health Nexus: Interpretable Machine Learning Illuminates Cancer Prevalence based on Intertwined City Features |  | 0
Deducing neighborhoods of classes from a fitted model |  | 0
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain |  | 0
A Learning Theoretic Perspective on Local Explainability |  | 0
Page 12 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 |  | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 |  | Unverified