SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
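One common family of explanation methods mentioned above is model-agnostic local attribution: perturb each input feature and measure how the prediction changes. Below is a minimal sketch using feature occlusion against a baseline; the toy `predict` function and the all-zeros baseline are hypothetical illustrations, not taken from any paper listed here.

```python
def predict(x):
    # Toy black-box model standing in for an arbitrary learned scorer
    # (hypothetical; any callable returning a scalar would do).
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[0] * x[2]

def occlusion_explanation(predict, x, baseline):
    """Attribute a single prediction to each feature by replacing that
    feature with its baseline value and recording the change in output."""
    base_pred = predict(x)
    attributions = []
    for i in range(len(x)):
        x_occluded = list(x)
        x_occluded[i] = baseline[i]  # occlude feature i
        attributions.append(base_pred - predict(x_occluded))
    return attributions

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
print(occlusion_explanation(predict, x, baseline))  # → [4.5, -4.0, 1.5]
```

A large positive attribution means the feature pushed the prediction up relative to the baseline; interaction effects (here, the `x[0] * x[2]` term) are split across the features involved, which is one reason local explanations can be unstable.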

Papers

Showing 11-20 of 537 papers

| Title | Status | Hype |
|---|---|---|
| Data Model Design for Explainable Machine Learning-based Electricity Applications | | 0 |
| Interpretable Machine Learning for Macro Alpha: A News Sentiment Case Study | | 0 |
| Are machine learning interpretations reliable? A stability study on global interpretations | | 0 |
| Machine Learning-Based Prediction of Mortality in Geriatric Traumatic Brain Injury Patients | | 0 |
| Advancing Tabular Stroke Modelling Through a Novel Hybrid Architecture and Feature-Selection Synergy | | 0 |
| On the definition and importance of interpretability in scientific machine learning | | 0 |
| Enhanced Photonic Chip Design via Interpretable Machine Learning Techniques | | 0 |
| Understanding molecular ratios in the carbon and oxygen poor outer Milky Way with interpretable machine learning | | 0 |
| Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry | Code | 0 |
| Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users | | 0 |
Page 2 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified |