SOTAVerified

Interpretable Machine Learning

The goal of interpretable machine learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
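As a concrete illustration of the post-hoc explanation methods this field studies, here is a minimal, dependency-free sketch of permutation importance: shuffle one feature column and measure how much a black-box model's accuracy drops. The `black_box` model and the two-feature data are hypothetical stand-ins, not from any paper listed below.

```python
import random

# Hypothetical black-box model: predicts 1 when feature 0 exceeds 0.5.
# Feature 1 is ignored entirely, so a faithful explanation should
# assign it an importance near zero.
def black_box(row):
    return 1 if row[0] > 0.5 else 0

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [black_box(row) for row in X]  # labels generated by the model itself

def accuracy(X, y):
    return sum(black_box(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop after shuffling one feature column in place."""
    base = accuracy(X, y)
    col = [r[feature] for r in X]
    random.shuffle(col)
    X_perm = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return base - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(X, y, 1))  # zero drop: feature 1 is never used
```

Because the model ignores feature 1, shuffling it changes no predictions, so its importance is exactly zero; shuffling feature 0 flips many predictions, so its importance is large. This accuracy-drop idea underlies several of the evaluation techniques surveyed by the papers below.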

Papers

Showing 501-525 of 537 papers

| Title | Status | Hype |
|---|---|---|
| Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0 |
| Counterfactual Explanations for Survival Prediction of Cardiovascular ICU Patients | Code | 0 |
| An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0 |
| Learning local discrete features in explainable-by-design convolutional neural networks | Code | 0 |
| A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0 |
| SIBILA: A novel interpretable ensemble of general-purpose machine learning models applied to medical contexts | Code | 0 |
| Contrastive Explanations with Local Foil Trees | Code | 0 |
| CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0 |
| Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations | Code | 0 |
| Consistent Sparse Deep Learning: Theory and Computation | Code | 0 |
| Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0 |
| An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0 |
| Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0 |
| Leveraging Predictive Equivalence in Decision Trees | Code | 0 |
| LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0 |
| LLM-based feature generation from text for interpretable machine learning | Code | 0 |
| Conditional Feature Importance for Mixed Data | Code | 0 |
| Local Explanation of Dimensionality Reduction | Code | 0 |
| Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions | Code | 0 |
| Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead | Code | 0 |
| Loss-Optimal Classification Trees: A Generalized Framework and the Logistic Case | Code | 0 |
| LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0 |
| Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0 |
| Predicting and Understanding College Student Mental Health with Interpretable Machine Learning | Code | 0 |
| Predicting crash injury severity in smart cities: a novel computational approach with wide and deep learning model | Code | 0 |
Page 21 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |