SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
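One common family of explanation methods mentioned above is post-hoc feature attribution. As a minimal, self-contained sketch (the toy model and data are illustrative assumptions, not taken from any paper listed on this page), permutation feature importance shuffles one feature at a time and measures how much the model's accuracy drops:

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop after shuffling one feature column across examples."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)  # break the association between this feature and the labels
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Toy model: predicts 1 when the first feature is positive;
# the second feature is ignored, so its importance should be 0.
model = lambda x: int(x[0] > 0)
X = [[1, 5], [-2, 5], [3, -1], [-4, 0], [2, 2], [-1, 9]]
y = [model(x) for x in X]  # labels depend only on feature 0

print(permutation_importance(model, X, y, feature=0))
print(permutation_importance(model, X, y, feature=1))  # prints 0.0
```

Because the labels here are generated by the model itself, baseline accuracy is perfect, so the importance of the used feature is non-negative while the ignored feature scores exactly zero. Real explanation methods differ in detail, but share this pattern of probing a fitted model from the outside.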

Papers

Showing 211–220 of 537 papers

Title | Status | Hype
Ensemble Interpretation: A Unified Method for Interpretable Machine Learning | | 0
GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0
Taming Waves: A Physically-Interpretable Machine Learning Framework for Realizable Control of Wave Dynamics | | 0
Modelling wildland fire burn severity in California using a spatial Super Learner approach | Code | 0
Neural Network Pruning by Gradient Descent | Code | 0
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0
The Pros and Cons of Using Machine Learning and Interpretable Machine Learning Methods in psychiatry detection applications, specifically depression disorder: A Brief Review | | 0
An Interpretable Machine Learning Framework to Understand Bikeshare Demand before and during the COVID-19 Pandemic in New York City | | 0
The Pros and Cons of Using Machine Learning and Interpretable Machine Learning Methods In Psychiatry Detection Applications, Specifically Depression Disorder: A Brief Review. | | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Page 22 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified