
Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
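
As a minimal sketch of what "explaining the predictions" of a model can look like in practice, the snippet below computes permutation feature importance with scikit-learn: each feature is shuffled on held-out data and the resulting drop in accuracy indicates how much the model relies on it. The dataset and model here are illustrative assumptions, not taken from any paper listed below.

```python
# Illustrative sketch: model-agnostic explanation via permutation feature importance.
# Dataset and model are arbitrary choices for demonstration purposes.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an otherwise opaque model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on the held-out set and measure the drop in score;
# larger drops mean the model depends more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:30s} {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Permutation importance is only one family of explanation methods; the papers below also cover counterfactual explanations, inherently interpretable models such as decision trees, and concept-based approaches.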

Papers

Showing 501–537 of 537 papers

Title | Status | Hype
Learning Gradual Argumentation Frameworks using Genetic Algorithms | Code | 0
Counterfactual Explanations for Survival Prediction of Cardiovascular ICU Patients | Code | 0
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions | Code | 0
Learning local discrete features in explainable-by-design convolutional neural networks | Code | 0
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0
SIBILA: A novel interpretable ensemble of general-purpose machine learning models applied to medical contexts | Code | 0
Contrastive Explanations with Local Foil Trees | Code | 0
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0
Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations | Code | 0
Consistent Sparse Deep Learning: Theory and Computation | Code | 0
Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0
An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
Leveraging Predictive Equivalence in Decision Trees | Code | 0
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0
LLM-based feature generation from text for interpretable machine learning | Code | 0
Conditional Feature Importance for Mixed Data | Code | 0
Local Explanation of Dimensionality Reduction | Code | 0
Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions | Code | 0
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead | Code | 0
Loss-Optimal Classification Trees: A Generalized Framework and the Logistic Case | Code | 0
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
Predicting and Understanding College Student Mental Health with Interpretable Machine Learning | Code | 0
Predicting crash injury severity in smart cities: a novel computational approach with wide and deep learning model | Code | 0
Socio-economic disparities and COVID-19 in the USA | Code | 0
A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0
Comparative Document Summarisation via Classification | Code | 0
Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry | Code | 0
Manipulating and Measuring Model Interpretability | Code | 0
An exact counterfactual-example-based approach to tree-ensemble models interpretability | Code | 0
Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning | Code | 0
Margin Optimal Classification Trees | Code | 0
Understanding Interventional TreeSHAP: How and Why it Works | Code | 0
An Additive Instance-Wise Approach to Multi-class Model Interpretation | Code | 0
midr: Learning from Black-Box Models by Maximum Interpretation Decomposition | Code | 0
Probing hidden spin order with interpretable machine learning | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | – | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | – | Unverified