SOTAVerified

Interpretable Machine Learning

The goal of interpretable machine learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.
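As a concrete illustration of the kind of explanation method described above, here is a minimal, self-contained sketch of permutation feature importance: the drop in a model's accuracy when one feature's values are shuffled across examples. The model, dataset, and function names below are hypothetical toy constructions for illustration, not drawn from any paper listed on this page.

```python
import random

# Hypothetical black-box model: predicts 1 when feature 0 exceeds a threshold.
# Feature 1 is pure noise, so a faithful explanation should score it at zero.
def model(row):
    return 1 if row[0] > 0.5 else 0

# Tiny labelled dataset: (features, label) pairs.
data = [([0.9, 0.1], 1), ([0.2, 0.8], 0), ([0.7, 0.3], 1), ([0.1, 0.9], 0)]

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feature, trials=100, seed=0):
    """Average accuracy drop when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(rows)
    drops = []
    for _ in range(trials):
        values = [x[feature] for x, _ in rows]
        rng.shuffle(values)
        shuffled = [(x[:feature] + [v] + x[feature + 1:], y)
                    for (x, y), v in zip(rows, values)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

print(permutation_importance(data, 0))  # informative feature: positive drop
print(permutation_importance(data, 1))  # noise feature: exactly 0.0 drop
```

Because the toy model ignores feature 1 entirely, shuffling it never changes a prediction, so its importance is exactly zero; shuffling the informative feature 0 degrades accuracy and yields a positive score.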

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 251–275 of 537 papers

| Title | Status | Hype |
|-------|--------|------|
| Neuro-symbolic Models for Interpretable Time Series Classification using Temporal Logic Description | | 0 |
| Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance | | 0 |
| Novel Topological Shapes of Model Interpretability | | 0 |
| "Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot Decision Tree Induction and Embedding with Large Language Models | | 0 |
| One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency | | 0 |
| On Explaining Decision Trees | | 0 |
| On Interpretability and Similarity in Concept-Based Machine Learning | | 0 |
| Online Product Feature Recommendations with Interpretable Machine Learning | | 0 |
| On quantitative aspects of model interpretability | | 0 |
| On the definition and importance of interpretability in scientific machine learning | | 0 |
| Look Who's Talking: Interpretable Machine Learning for Assessing Italian SMEs Credit Default | | 0 |
| On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach | | 0 |
| On the Shape of Brainscores for Large Language Models (LLMs) | | 0 |
| On the Use of Interpretable Machine Learning for the Management of Data Quality | | 0 |
| Open Issues in Combating Fake News: Interpretability as an Opportunity | | 0 |
| Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors | | 0 |
| OPTDTALS: Approximate Logic Synthesis via Optimal Decision Trees Approach | | 0 |
| Optimizing Binary Decision Diagrams with MaxSAT for classification | | 0 |
| Out-of-Distribution Detection of Melanoma using Normalizing Flows | | 0 |
| Overcoming Catastrophic Forgetting by XAI | | 0 |
| Parallel Coordinates for Discovery of Interpretable Machine Learning Models | | 0 |
| Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning | | 0 |
| Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metal-organic frameworks | | 0 |
| PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification | | 0 |
| Pest presence prediction using interpretable machine learning | | 0 |
Page 11 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |