
Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
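As an illustrative sketch only (not taken from this page or from any listed paper), one widely used way to explain a black-box model's predictions is permutation feature importance, available in scikit-learn as permutation_importance. The example below assumes a scikit-learn environment and a toy dataset; it simply measures how much held-out accuracy drops when each feature is shuffled.

```python
# Minimal sketch: explaining a black-box classifier via permutation
# feature importance (scikit-learn). Illustrative assumptions: the
# breast-cancer toy dataset and a random forest stand in for any
# model/data pair.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features for this model.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```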

Papers

Showing 131-140 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Explaining Kernel Clustering via Decision Trees | | 0 |
| Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach | | 0 |
| Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature | | 0 |
| Explaining Recurrent Neural Network Predictions in Sentiment Analysis | | 0 |
| Explanations for Automatic Speech Recognition | | 0 |
| Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations | | 0 |
| CNNs for NLP in the Browser: Client-Side Deployment and Visualization Opportunities | | 0 |
| CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq | | 0 |
| A Novel Tropical Geometry-based Interpretable Machine Learning Method: Application in Prognosis of Advanced Heart Failure | | 0 |
| Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |