SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
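One common family of explanation methods referenced above is feature attribution. As an illustration only (not a method from any paper listed on this page), here is a minimal sketch of permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The toy `model_fn` below is a hypothetical stand-in for a trained model.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Accuracy drop when each feature is shuffled; larger = more important."""
    rng = np.random.default_rng(seed)
    base = np.mean(model_fn(X) == y)  # accuracy on unperturbed data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's relationship to y
            scores.append(np.mean(model_fn(Xp) == y))
        importances[j] = base - np.mean(scores)
    return importances

# Toy setup: the label depends only on feature 0, so only feature 0
# should receive a large importance score.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
model_fn = lambda X: (X[:, 0] > 0).astype(int)
imp = permutation_importance(model_fn, X, y)
```

Because the toy model ignores features 1 and 2, their importances stay near zero while feature 0's is large.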

Papers

Showing 491–500 of 537 papers

Title | Status | Hype
Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? | Code | 0
Is it Fake? News Disinformation Detection on South African News Websites | Code | 0
Kernel Banzhaf: A Fast and Robust Estimator for Banzhaf Values | Code | 0
Kernel Learning Assisted Synthesis Condition Exploration for Ternary Spinel | Code | 0
PANTHER: Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning | Code | 0
The (Un)reliability of saliency methods | Code | 0
Tiny-HR: Towards an interpretable machine learning pipeline for heart rate estimation on edge devices | Code | 0
Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set | Code | 0
Perceptual Musical Features for Interpretable Audio Tagging | Code | 0
Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution | Code | 0
Page 50 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified
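The benchmark above reports Top-1 accuracy. As a reminder of what that metric computes (this is a generic illustration, not the site's verification code), a minimal sketch:

```python
import numpy as np

def top1_accuracy(logits, labels):
    # Fraction of examples whose highest-scoring class equals the true label.
    return float(np.mean(np.argmax(logits, axis=1) == labels))

# Toy scores for 4 examples over 3 classes.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3],
                   [0.9, 0.1, 0.0],
                   [0.1, 0.2, 3.0]])
labels = np.array([0, 1, 2, 2])
acc = top1_accuracy(logits, labels)  # 3 of 4 predictions correct -> 0.75
```

Reported as a percentage (as in the table), 0.75 would appear as 75.0.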