SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 11–20 of 537 papers

Title | Status | Hype
Cross- and Intra-image Prototypical Learning for Multi-label Disease Diagnosis and Interpretation | Code | 1
Graph Learning for Numeric Planning | Code | 1
LLM-SR: Scientific Equation Discovery via Programming with Large Language Models | Code | 1
Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning | Code | 1
Interpretable Machine Learning for TabPFN | Code | 1
Q-SENN: Quantized Self-Explaining Neural Networks | Code | 1
TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance | Code | 1
Generative Inverse Design of Metamaterials with Functional Responses by Interpretable Learning | Code | 1
Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition | Code | 1
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations | Code | 1
Page 2 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | — | Unverified