SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
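As a concrete illustration of a post-hoc explanation method, permutation feature importance measures how much a model's error grows when one feature's values are permuted. This is a minimal, self-contained sketch: the fixed linear `model` below is hypothetical and does not come from any paper listed on this page, and a deterministic cyclic shift stands in for a random shuffle.

```python
# Minimal post-hoc explanation sketch: permutation feature importance.
# The "model" is a hypothetical fixed linear predictor, used only for
# illustration; it is not taken from any of the papers listed here.

def model(x):
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(predict, X, y, feature):
    """Increase in mean squared error after permuting one feature.

    A cyclic shift stands in for a random shuffle so the example
    stays deterministic.
    """
    mse = lambda rows: sum((predict(r) - t) ** 2
                           for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    col = [row[feature] for row in X]
    col = col[1:] + col[:1]                # permute the chosen feature
    X_perm = [list(row) for row in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(X_perm) - base

X = [[i, j] for i in range(5) for j in range(5)]
y = [model(x) for x in X]                  # targets match the model exactly

print(permutation_importance(model, X, y, 0))  # 7.2 (weight 3.0)
print(permutation_importance(model, X, y, 1))  # 1.0 (weight 0.5)
```

Because feature 0 carries six times the weight of feature 1, permuting it degrades the error far more, which is exactly the signal an importance score is meant to capture.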

Papers

Showing 51–75 of 537 papers

Title | Status | Hype
ExeKGLib: Knowledge Graphs-Empowered Machine Learning Analytics | Code | 1
Neural Additive Models: Interpretable Machine Learning with Neural Nets | Code | 1
Improving Accuracy of Interpretability Measures in Hyperparameter Optimization via Bayesian Algorithm Execution | Code | 1
Neural Prototype Trees for Interpretable Fine-grained Image Recognition | Code | 1
Explainable Diabetic Retinopathy Detection and Retinal Image Generation | Code | 1
FastMapSVM: Classifying Complex Objects Using the FastMap Algorithm and Support-Vector Machines | Code | 1
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
GAM Changer: Editing Generalized Additive Models with Interactive Visualization | Code | 1
A Unified Approach to Interpreting Model Predictions | Code | 1
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations | Code | 1
Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images | Code | 1
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability | Code | 1
Generalized and Scalable Optimal Sparse Decision Trees | Code | 1
Interpretable machine learning for time-to-event prediction in medicine and healthcare | Code | 1
Interpreting and Correcting Medical Image Classification with PIP-Net | Code | 1
Axiomatic Attribution for Deep Networks | Code | 1
Generative Inverse Design of Metamaterials with Functional Responses by Interpretable Learning | Code | 1
Genomic Interpreter: A Hierarchical Genomic Deep Neural Network with 1D Shifted Window Transformer | Code | 1
Graph Learning for Numeric Planning | Code | 1
BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis | Code | 1
How Interpretable and Trustworthy are GAMs? | Code | 1
Fast Sparse Decision Tree Optimization via Reference Ensembles | Code | 1
Born-Again Tree Ensembles | Code | 1
Interpretable and intervenable ultrasonography-based machine learning models for pediatric appendicitis | Code | 1
Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction | Code | 1
Page 3 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | – | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | – | Unverified