SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
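To make "devising methods to better explain the predictions" concrete, here is a minimal sketch of one common post-hoc explanation technique, permutation feature importance: shuffle one feature's values and measure how much the model's error grows. The toy model and data are hypothetical illustrations, not taken from any paper listed on this page.

```python
# Minimal sketch of permutation feature importance (a hypothetical
# toy example; real tooling would use a trained model and library code).
import random

def model(x):
    # Toy "black box": depends strongly on x[0], weakly on x[1], ignores x[2].
    return 3.0 * x[0] + 0.5 * x[1]

def mse(predict, X, y):
    # Mean squared error of predictions against targets.
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature, seed=0):
    """Error increase when one feature's column is shuffled."""
    rng = random.Random(seed)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, column)]
    return mse(predict, X_perm, y) - mse(predict, X, y)

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

# Larger score = model relies more on that feature.
scores = [permutation_importance(model, X, y, f) for f in range(3)]
```

On this toy model the score for the ignored feature is zero and the strongly weighted feature ranks highest, which is exactly the kind of ranking an interpretability method surfaces for oversight.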

Papers

Showing 101–125 of 537 papers

Title | Status | Hype
----- | ------ | ----
On the Shape of Brainscores for Large Language Models (LLMs) | — | 0
Mathematics of statistical sequential decision-making: concentration, risk-awareness and modelling in stochastic bandits, with applications to bariatric surgery | — | 0
Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App | — | 0
LLM-SR: Scientific Equation Discovery via Programming with Large Language Models | Code | 1
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping | — | 0
Online Learning of Decision Trees with Thompson Sampling | Code | 0
Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More | — | 0
Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning | Code | 1
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | — | 0
Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices | Code | 0
Interpretable Machine Learning for Weather and Climate Prediction: A Survey | — | 0
Leveraging advances in machine learning for the robust classification and interpretation of networks | — | 0
Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures | — | 0
Interpretable Machine Learning for TabPFN | Code | 1
Interpretable Machine Learning for Survival Analysis | Code | 0
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data | — | 0
Forecasting SEP Events During Solar Cycles 23 and 24 Using Interpretable Machine Learning | Code | 0
LCEN: A Novel Feature Selection Algorithm for Nonlinear, Interpretable Machine Learning Models | — | 0
Explaining Kernel Clustering via Decision Trees | — | 0
Large Language Model-Based Interpretable Machine Learning Control in Building Energy Systems | — | 0
Challenges in Variable Importance Ranking Under Correlation | — | 0
Reducing Optimism Bias in Incomplete Cooperative Games | — | 0
Rethinking Interpretability in the Era of Large Language Models | Code | 0
PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression | Code | 0
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
Page 5 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
- | ----- | ------ | ------- | -------- | ------
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified