SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
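One common family of explanation methods measures how much each input feature matters to a model's predictions. A minimal sketch of one such method, permutation feature importance, is below; the toy model, data, and function names are illustrative assumptions, not drawn from any paper on this page. The idea: shuffle one feature's values across the dataset and report how much the model's error grows — features the model relies on produce a large increase.

```python
import random

def model(x):
    # Hypothetical model: depends strongly on feature 0, weakly on feature 1.
    return 3.0 * x[0] + 0.1 * x[1]

def mse(f, X, y):
    # Mean squared error of predictions f(x) against targets y.
    return sum((f(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(f, X, y, feature, seed=0):
    # Importance = increase in error after shuffling one feature's column.
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(f, X_perm, y) - mse(f, X, y)

# Toy data generated by the model itself, so the baseline error is zero
# and any increase is attributable entirely to the shuffled feature.
X = [[i, 10 - i] for i in range(10)]
y = [model(x) for x in X]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
# Feature 0 should come out far more important than feature 1.
```

Because the method only needs predictions, not model internals, it applies to any black-box model — which is why variants of it recur throughout the interpretability literature.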

Papers

Showing 1–10 of 537 papers

Title | Status | Hype
Can "consciousness" be observed from large language model (LLM) internal states? Dissecting LLM representations obtained from Theory of Mind test with Integrated Information Theory and Span Representation analysis | — | 0
The Most Important Features in Generalized Additive Models Might Be Groups of Features | — | 0
Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images | — | 0
Leveraging Predictive Equivalence in Decision Trees | Code | 0
Interpretable representation learning of quantum data enabled by probabilistic variational autoencoders | — | 0
An Attention-based Spatio-Temporal Neural Operator for Evolving Physics | — | 0
An Interpretable Machine Learning Approach in Predicting Inflation Using Payments System Data: A Case Study of Indonesia | — | 0
midr: Learning from Black-Box Models by Maximum Interpretation Decomposition | Code | 0
Predicting Postoperative Stroke in Elderly SICU Patients: An Interpretable Machine Learning Model Using MIMIC Data | — | 0
Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100 | — | 0
Page 1 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified