SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 111–120 of 537 papers

Title | Status | Hype
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions | Code | 0
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Code | 0
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | Code | 0
Efficient and quantum-adaptive machine learning with fermion neural networks | Code | 0
Comparative Document Summarisation via Classification | Code | 0
Comorbid anxiety predicts lower odds of depression improvement during smartphone-delivered psychotherapy | Code | 0
COLOGNE: Coordinated Local Graph Neighborhood Sampling | Code | 0
Air Quality Forecasting Using Machine Learning: A Global Perspective with Relevance to Low-Resource Settings | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified