SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
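As a concrete illustration of "explaining the predictions of machine learning models" (not a method from any paper listed below), the sketch here implements permutation feature importance, a common model-agnostic explanation technique: shuffle one feature at a time and measure how much the model's accuracy drops. The data, model, and weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

# A fixed "trained" linear classifier (weights chosen to match the data).
w = np.array([2.0, 0.1, 0.0])
def predict(X):
    return (X @ w > 0).astype(int)

baseline = (predict(X) == y).mean()

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    importances.append(baseline - (predict(Xp) == y).mean())

# Feature 0 should dominate; feature 2 (unused by the model) scores ~0.
print([round(v, 3) for v in importances])
```

The drop in accuracy when a feature is shuffled serves as that feature's importance to the model's decisions, which is exactly the kind of post-hoc explanation much of this literature refines.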

Papers

Showing 131–140 of 537 papers

Title | Status | Hype
A Maritime Industry Experience for Vessel Operational Anomaly Detection: Utilizing Deep Learning Augmented with Lightweight Interpretable Models | — | 0
TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance | Code | 1
Q-SENN: Quantized Self-Explaining Neural Networks | Code | 1
Perceptual Musical Features for Interpretable Audio Tagging | Code | 0
Ensemble Interpretation: A Unified Method for Interpretable Machine Learning | — | 0
Generative Inverse Design of Metamaterials with Functional Responses by Interpretable Learning | Code | 1
GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0
Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition | Code | 1
Taming Waves: A Physically-Interpretable Machine Learning Framework for Realizable Control of Wave Dynamics | — | 0
Modelling wildland fire burn severity in California using a spatial Super Learner approach | Code | 0
Page 14 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified