SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
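The explanation methods described above are often model-agnostic: they probe a trained model from the outside rather than inspecting its internals. A minimal sketch of one standard such technique, permutation feature importance, is below; the synthetic data and the trivial threshold "model" are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends on feature 0 only; features 1-2 are noise.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def predict(X):
    """A trivial stand-in for a trained model: threshold feature 0."""
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, rng=rng):
    """Mean accuracy drop when each feature column is shuffled in turn."""
    base = np.mean(predict(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops[j] += base - np.mean(predict(Xp) == y)
    return drops / n_repeats

imp = permutation_importance(predict, X, y)
# Feature 0 carries all the signal, so its importance dominates;
# shuffling the noise features leaves accuracy unchanged.
```

Because it only needs predictions, this probe works identically on a random forest, a neural network, or any black box, which is why such methods recur throughout the papers below.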

Papers

Showing 101–150 of 537 papers

| Title | Status | Hype |
|---|---|---|
| On the Shape of Brainscores for Large Language Models (LLMs) | | 0 |
| Mathematics of statistical sequential decision-making: concentration, risk-awareness and modelling in stochastic bandits, with applications to bariatric surgery | | 0 |
| Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App | | 0 |
| LLM-SR: Scientific Equation Discovery via Programming with Large Language Models | Code | 1 |
| Feature graphs for interpretable unsupervised tree ensembles: centrality, interaction, and application in disease subtyping | | 0 |
| Online Learning of Decision Trees with Thompson Sampling | Code | 0 |
| Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More | | 0 |
| Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning | Code | 1 |
| Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | | 0 |
| Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices | Code | 0 |
| Interpretable Machine Learning for Weather and Climate Prediction: A Survey | | 0 |
| Leveraging advances in machine learning for the robust classification and interpretation of networks | | 0 |
| Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures | | 0 |
| Interpretable Machine Learning for TabPFN | Code | 1 |
| Interpretable Machine Learning for Survival Analysis | Code | 0 |
| A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data | | 0 |
| Forecasting SEP Events During Solar Cycles 23 and 24 Using Interpretable Machine Learning | Code | 0 |
| LCEN: A Novel Feature Selection Algorithm for Nonlinear, Interpretable Machine Learning Models | | 0 |
| Explaining Kernel Clustering via Decision Trees | | 0 |
| Large Language Model-Based Interpretable Machine Learning Control in Building Energy Systems | | 0 |
| Challenges in Variable Importance Ranking Under Correlation | | 0 |
| Reducing Optimism Bias in Incomplete Cooperative Games | | 0 |
| Rethinking Interpretability in the Era of Large Language Models | Code | 0 |
| PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression | Code | 0 |
| Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0 |
| Interactive Mars Image Content-Based Search with Interpretable Machine Learning | | 0 |
| X Hacking: The Threat of Misguided AutoML | Code | 0 |
| Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition | | 0 |
| Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0 |
| SynHING: Synthetic Heterogeneous Information Network Generation for Graph Learning and Explanation | | 0 |
| A Maritime Industry Experience for Vessel Operational Anomaly Detection: Utilizing Deep Learning Augmented with Lightweight Interpretable Models | | 0 |
| TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance | Code | 1 |
| Q-SENN: Quantized Self-Explaining Neural Networks | Code | 1 |
| Perceptual Musical Features for Interpretable Audio Tagging | Code | 0 |
| Ensemble Interpretation: A Unified Method for Interpretable Machine Learning | | 0 |
| Generative Inverse Design of Metamaterials with Functional Responses by Interpretable Learning | Code | 1 |
| GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0 |
| Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition | Code | 1 |
| Taming Waves: A Physically-Interpretable Machine Learning Framework for Realizable Control of Wave Dynamics | | 0 |
| Modelling wildland fire burn severity in California using a spatial Super Learner approach | Code | 0 |
| Neural Network Pruning by Gradient Descent | Code | 0 |
| LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0 |
| Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations | Code | 1 |
| The Pros and Cons of Using Machine Learning and Interpretable Machine Learning Methods in Psychiatry Detection Applications, Specifically Depression Disorder: A Brief Review | | 0 |
| An Interpretable Machine Learning Framework to Understand Bikeshare Demand before and during the COVID-19 Pandemic in New York City | | 0 |
| An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0 |
| Hidden Citations Obscure True Impact in Science | | 0 |
| Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0 |
| ML4EJ: Decoding the Role of Urban Features in Shaping Environmental Injustice Using Interpretable Machine Learning | | 0 |
Page 3 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |