SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
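Many of the explanation methods listed below are attribution methods: they assign each input feature a score for its contribution to a single prediction. As a minimal, hedged sketch of the idea (not any specific paper's algorithm), the toy model and baseline below are illustrative: occlude one feature at a time and measure how the output changes.

```python
# Minimal sketch of occlusion-style feature attribution: replace each
# feature with a baseline value and record how much the model's output
# changes. The linear "model" and its weights are purely illustrative.

def predict(x):
    # Toy linear model; weights are arbitrary for this example.
    weights = [2.0, -1.0, 0.5]
    return sum(w * v for w, v in zip(weights, x))

def occlusion_attributions(x, baseline=0.0):
    """Attribute the prediction on x to each feature by occlusion."""
    base_pred = predict(x)
    attributions = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # remove feature i's contribution
        attributions.append(base_pred - predict(occluded))
    return attributions

print(occlusion_attributions([1.0, 2.0, 4.0]))  # → [2.0, -2.0, 2.0]
```

More principled variants (e.g. Shapley-value-based attribution, as in "A Unified Approach to Interpreting Model Predictions" below) average such occlusion effects over feature subsets rather than occluding one feature in isolation.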

Papers

Showing 51–75 of 537 papers

Title | Status | Hype
ContrXT: Generating Contrastive Explanations from any Text Classifier | Code | 1
Mixture of Decision Trees for Interpretable Machine Learning | Code | 1
ControlBurn: Nonlinear Feature Selection with Sparse Tree Ensembles | Code | 1
Modern Hopfield Networks and Attention for Immune Repertoire Classification | Code | 1
Neural Prototype Trees for Interpretable Fine-grained Image Recognition | Code | 1
Neurosymbolic Association Rule Mining from Tabular Data | Code | 1
Development of Interpretable Machine Learning Models to Detect Arrhythmia based on ECG Data | Code | 1
BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1
A Unified Approach to Interpreting Model Predictions | Code | 1
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Q-SENN: Quantized Self-Explaining Neural Networks | Code | 1
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability | Code | 1
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees | Code | 1
FastMapSVM: Classifying Complex Objects Using the FastMap Algorithm and Support-Vector Machines | Code | 1
Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning | Code | 1
Axiomatic Attribution for Deep Networks | Code | 1
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals | Code | 1
Do Feature Attribution Methods Correctly Attribute Features? | Code | 1
ExeKGLib: Knowledge Graphs-Empowered Machine Learning Analytics | Code | 1
Explainable Diabetic Retinopathy Detection and Retinal Image Generation | Code | 1
GAM Changer: Editing Generalized Additive Models with Interactive Visualization | Code | 1
Interpretable machine learning for time-to-event prediction in medicine and healthcare | Code | 1
Born-Again Tree Ensembles | Code | 1
Generalized and Scalable Optimal Sparse Decision Trees | Code | 1
Improving performance of deep learning models with axiomatic attribution priors and expected gradients | Code | 1
Page 3 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified