SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field consists of devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
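One widely used model-agnostic way to explain a model's predictions, of the kind many of the papers listed below study, is permutation feature importance: shuffle one feature's values, breaking its link to the target, and measure how much the model's error grows. The sketch below is illustrative only, not taken from any listed paper; the synthetic data, the least-squares "black box", and the `permutation_importance` helper are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed for illustration): y depends strongly on
# feature 0, weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a stand-in "black-box" model (ordinary least squares via lstsq);
# the explanation method below only ever calls predict(), so any model works.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

def permutation_importance(X, y, predict, n_repeats=10):
    """Model-agnostic importance: average increase in error when one
    feature column is shuffled, severing its relationship to y."""
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores.append(mse(y, predict(Xp)) - baseline)
        importances[j] = np.mean(scores)
    return importances

imp = permutation_importance(X, y, predict)
print(imp)  # feature 0 should dominate; feature 2 should be near zero
```

Because the method only queries `predict()`, it applies unchanged to any fitted model, which is why it is a common baseline in the interpretability literature.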

Papers

Showing 251–300 of 537 papers

Every paper on this page lists no verification status and a Hype score of 0.

- Neuro-symbolic Models for Interpretable Time Series Classification using Temporal Logic Description
- Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
- Novel Topological Shapes of Model Interpretability
- "Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot Decision Tree Induction and Embedding with Large Language Models
- One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency
- On Explaining Decision Trees
- On Interpretability and Similarity in Concept-Based Machine Learning
- Online Product Feature Recommendations with Interpretable Machine Learning
- On quantitative aspects of model interpretability
- On the definition and importance of interpretability in scientific machine learning
- Look Who's Talking: Interpretable Machine Learning for Assessing Italian SMEs Credit Default
- On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
- On the Shape of Brainscores for Large Language Models (LLMs)
- On the Use of Interpretable Machine Learning for the Management of Data Quality
- Open Issues in Combating Fake News: Interpretability as an Opportunity
- Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors
- OPTDTALS: Approximate Logic Synthesis via Optimal Decision Trees Approach
- Optimizing Binary Decision Diagrams with MaxSAT for classification
- Out-of-Distribution Detection of Melanoma using Normalizing Flows
- Overcoming Catastrophic Forgetting by XAI
- Parallel Coordinates for Discovery of Interpretable Machine Learning Models
- Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
- Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metal-organic frameworks
- PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification
- Pest presence prediction using interpretable machine learning
- Phononic materials with effectively scale-separated hierarchical features using interpretable machine learning
- Physically interpretable machine learning algorithm on multidimensional non-linear fields
- Predicting Hurricane Evacuation Decisions with Interpretable Machine Learning Models
- Predicting Many Crystal Properties via an Adaptive Transformer-based Framework
- Predicting Postoperative Stroke in Elderly SICU Patients: An Interpretable Machine Learning Model Using MIMIC Data
- Predicting Treatment Response in Body Dysmorphic Disorder with Interpretable Machine Learning
- Predictive learning via rule ensembles
- Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems
- Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning
- Quantifying and Learning Disentangled Representations with Limited Supervision
- Ranking Facts for Explaining Answers to Elementary Science Questions
- Rapid Shear Capacity Prediction of TRM-Strengthened Unreinforced Masonry Walls through Interpretable Machine Learning using a Web App
- Recent advances in interpretable machine learning using structure-based protein representations
- Reconstruction and analysis of negatively buoyant jets with interpretable machine learning
- Reducing Optimism Bias in Incomplete Cooperative Games
- Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)
- Reliability Scores from Saliency Map Clusters for Improved Image-based Harvest-Readiness Prediction in Cauliflower
- Rethinking Interpretability in the Era of Large Language Models
- Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning
- Revealing the CO2 emission reduction of ridesplitting and its determinants based on real-world data
- Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images
- Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena
- Segmentation of Cardiac Structures via Successive Subspace Learning with Saab Transform from Cine MRI
- Selecting Interpretability Techniques for Healthcare Machine Learning models
- Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures
Page 6 of 11

Benchmark Results

# | Model      | Metric         | Claimed | Verified | Status
1 | Q-SENN     | Top 1 Accuracy | 85.9    |          | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7    |          | Unverified