SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
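As a concrete illustration of the kind of explanation method the field studies, below is a minimal sketch of permutation feature importance, a common model-agnostic technique: shuffle one feature at a time and measure how much the model's error grows. The toy data, model, and names here are illustrative only and not taken from any paper on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0 and not at all on feature 1.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# "Model": an ordinary least-squares fit standing in for any black box.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(X):
    return X @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Permutation importance: shuffling an informative feature breaks its
# link to the target, so the error increase measures its importance.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - baseline)
```

After running, `importances[0]` is large (feature 0 drives the target) while `importances[1]` stays near zero, which is exactly the kind of per-feature attribution an interpretability method produces.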

Papers

Showing 91–100 of 537 papers

Title | Status | Hype
An Attention-based Spatio-Temporal Neural Operator for Evolving Physics |  | 0
An Interpretable Machine Learning Approach in Predicting Inflation Using Payments System Data: A Case Study of Indonesia |  | 0
midr: Learning from Black-Box Models by Maximum Interpretation Decomposition | Code | 0
Predicting Postoperative Stroke in Elderly SICU Patients: An Interpretable Machine Learning Model Using MIMIC Data |  | 0
Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100 |  | 0
Data Model Design for Explainable Machine Learning-based Electricity Applications |  | 0
Interpretable Machine Learning for Macro Alpha: A News Sentiment Case Study |  | 0
Are machine learning interpretations reliable? A stability study on global interpretations |  | 0
Machine Learning-Based Prediction of Mortality in Geriatric Traumatic Brain Injury Patients |  | 0
Advancing Tabular Stroke Modelling Through a Novel Hybrid Architecture and Feature-Selection Synergy |  | 0
Page 10 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 |  | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 |  | Unverified