SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
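One common family of post-hoc explanation methods measures how much a model's performance degrades when a single input feature is scrambled (permutation feature importance). Below is a minimal, self-contained sketch of that idea; the toy model, feature layout, and function names are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def toy_model(x):
    # Hypothetical "black box": predicts 1 exactly when feature 0 exceeds 0.5.
    return 1 if x[0] > 0.5 else 0

def accuracy(model, X, y):
    # Fraction of examples the model classifies correctly.
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    # Shuffle one feature column and report the resulting accuracy drop.
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

# Feature 0 drives every prediction; feature 1 is noise the model ignores.
X = [[i / 10, (7 * i) % 10 / 10] for i in range(10)]
y = [1 if x[0] > 0.5 else 0 for x in X]

drop0 = permutation_importance(toy_model, X, y, feature=0)
drop1 = permutation_importance(toy_model, X, y, feature=1)
# Shuffling the decisive feature hurts accuracy; shuffling the noise feature does not.
```

This perturbation-based view ("what happens to the output when an input is disturbed?") is the same intuition behind model-agnostic local explainers such as the SMILE method listed below.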

Papers

Showing 141–150 of 537 papers

Title | Status | Hype
Neural Network Pruning by Gradient Descent | Code | 0
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations | Code | 1
The Pros and Cons of Using Machine Learning and Interpretable Machine Learning Methods in Psychiatry Detection Applications, Specifically Depression Disorder: A Brief Review | - | 0
An Interpretable Machine Learning Framework to Understand Bikeshare Demand before and during the COVID-19 Pandemic in New York City | - | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Hidden Citations Obscure True Impact in Science | - | 0
Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0
ML4EJ: Decoding the Role of Urban Features in Shaping Environmental Injustice Using Interpretable Machine Learning | - | 0
Page 15 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified