SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
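One widely used family of model-agnostic explanation methods is permutation feature importance: shuffle one feature's values and measure how much the model's error grows. Below is a minimal, hedged sketch of the idea on a hypothetical toy model (the model, weights, and dataset are illustrative assumptions, not taken from any paper listed here).

```python
import random

# Hypothetical toy model: the prediction depends strongly on feature 0
# and only weakly on feature 1 (the weights are illustrative assumptions).
def predict(row):
    return 3.0 * row[0] + 0.1 * row[1]

def mse(X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((predict(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """Increase in MSE when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    col = [r[feature] for r in X]
    rng.shuffle(col)
    X_perm = [list(r) for r in X]
    for r, v in zip(X_perm, col):
        r[feature] = v
    return mse(X_perm, y) - mse(X, y)

# Tiny synthetic dataset whose targets follow the model exactly,
# so the baseline error is zero and any shuffle can only raise it.
X = [[i, (i * 7) % 5] for i in range(20)]
y = [predict(r) for r in X]

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
# Feature 0 should come out far more important than feature 1.
```

The same recipe works with any black-box `predict` function, which is why permutation importance is a common baseline in the interpretability literature.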

Papers

Showing 251–260 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| ControlBurn: Nonlinear Feature Selection with Sparse Tree Ensembles | Code | 1 |
| An Additive Instance-Wise Approach to Multi-class Model Interpretation | Code | 0 |
| Linguistically inspired roadmap for building biologically reliable protein language models | | 0 |
| Interpretable machine learning optimization (InterOpt) for operational parameters: a case study of highly-efficient shale gas development | | 0 |
| Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena | | 0 |
| Improving Accuracy of Interpretability Measures in Hyperparameter Optimization via Bayesian Algorithm Execution | Code | 1 |
| Using Interpretable Machine Learning to Massively Increase the Number of Antibody-Virus Interactions Across Studies | | 0 |
| Model-Agnostic Confidence Intervals for Feature Importance: A Fast and Powerful Approach Using Minipatch Ensembles | | 0 |
| Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Code | 0 |
| OmniXAI: A Library for Explainable AI | Code | 2 |
Page 26 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |