SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 101–110 of 537 papers

Title | Status | Hype
On the Shape of Brainscores for Large Language Models (LLMs) | — | 0
Mathematics of statistical sequential decision-making: concentration, risk-awareness and modelling in stochastic bandits, with applications to bariatric surgery | — | 0
Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App | — | 0
LLM-SR: Scientific Equation Discovery via Programming with Large Language Models | Code | 1
Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping | — | 0
Online Learning of Decision Trees with Thompson Sampling | Code | 0
Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More | — | 0
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | — | 0
Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning | Code | 1
Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified