SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
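As a concrete illustration of the kind of explanation method this area studies, here is a minimal sketch using permutation feature importance from scikit-learn. The dataset and model below are arbitrary choices for the example, not taken from any paper listed on this page.

```python
# Minimal sketch of one common model-explanation technique:
# permutation feature importance. Dataset and model are
# illustrative choices, not tied to any specific paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out
# score: a large drop means the model relies on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.4f}")
```

The printed ranking is a post-hoc, model-agnostic explanation: it says which inputs the trained model depends on, without inspecting the model's internals.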

Papers

Showing 121–130 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Challenges in Variable Importance Ranking Under Correlation | — | 0 |
| Reducing Optimism Bias in Incomplete Cooperative Games | — | 0 |
| Rethinking Interpretability in the Era of Large Language Models | — | 0 |
| PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression | Code | 0 |
| Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0 |
| Interactive Mars Image Content-Based Search with Interpretable Machine Learning | — | 0 |
| X Hacking: The Threat of Misguided AutoML | Code | 0 |
| Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition | — | 0 |
| Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0 |
| SynHING: Synthetic Heterogeneous Information Network Generation for Graph Learning and Explanation | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified |