SOTAVerified

Interpretable Machine Learning

The goal of interpretable machine learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
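One common model-agnostic way to "explain the predictions of machine learning models", as described above, is permutation feature importance: shuffle one feature column and measure how much the model's error grows. The sketch below is a minimal, self-contained illustration using a hypothetical toy linear model (the model, weights, and data are assumptions for demonstration, not taken from any paper listed on this page):

```python
import random

# Toy "black box": a fixed linear model over three features.
# Hypothetical weights: x0 matters most, x2 not at all.
def model(x):
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def mse(X, y):
    """Mean squared error of the model on (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(X, y, feature, seed=0):
    """Importance = error increase after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = mse(X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)                      # break the feature-target link
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(X_perm, y) - baseline

# Synthetic data labeled by the model itself.
rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

scores = [permutation_importance(X, y, j) for j in range(3)]
# Shuffling x0 (largest weight) hurts most; x2 (zero weight) not at all.
```

Because the method only needs predictions and an error metric, it applies to any black-box model, which is why it appears (in various forms) across much of the literature listed below.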

Papers

Showing 201–210 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Rethinking Interpretability in the Era of Large Language Models | | 0 |
| PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression | Code | 0 |
| Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0 |
| Interactive Mars Image Content-Based Search with Interpretable Machine Learning | | 0 |
| Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition | | 0 |
| X Hacking: The Threat of Misguided AutoML | Code | 0 |
| Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0 |
| SynHING: Synthetic Heterogeneous Information Network Generation for Graph Learning and Explanation | | 0 |
| A Maritime Industry Experience for Vessel Operational Anomaly Detection: Utilizing Deep Learning Augmented with Lightweight Interpretable Models | | 0 |
| Perceptual Musical Features for Interpretable Audio Tagging | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |