SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
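One widely used family of explanation methods referenced by papers in this area is model-agnostic feature attribution. As a minimal, self-contained sketch, the toy example below implements permutation importance: shuffle one feature's values and measure how much the model's error grows. The `model` function and the generated data are purely illustrative, not taken from any paper listed here.

```python
import random

# Toy "model": prediction depends strongly on feature 0, weakly on feature 1.
def model(x):
    return 3.0 * x[0] + 0.1 * x[1]

# Toy dataset; targets follow the same rule, so baseline error is zero.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(predict, X, y, feature, seed=0):
    """Increase in MSE when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    baseline = mse([predict(x) for x in X], y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return mse([predict(x) for x in X_perm], y) - baseline

imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
print(f"importance of feature 0: {imp0:.3f}, feature 1: {imp1:.3f}")
```

Because the toy model weights feature 0 thirty times more heavily than feature 1, shuffling feature 0 degrades predictions far more, which is exactly the signal a permutation-based explanation reports.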

Papers

Showing 276–300 of 537 papers

Title | Status | Hype
Interpretable Machine Learning Models for Modal Split Prediction in Transportation Systems | | 0
Interpretable Machine Learning Models for the Digital Clock Drawing Test | | 0
Interpretable machine learning of amino acid patterns in proteins: a statistical ensemble approach | | 0
Interpretable machine learning optimization (InterOpt) for operational parameters: a case study of highly-efficient shale gas development | | 0
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs | | 0
Leveraging advances in machine learning for the robust classification and interpretation of networks | | 0
Interpretable Neural Architectures for Attributing an Ad's Performance to its Writing Style | | 0
Interpretable Predictive Maintenance for Hard Drives | | 0
Interpretable Reinforcement Learning with Ensemble Methods | | 0
Interpretable representation learning of quantum data enabled by probabilistic variational autoencoders | | 0
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems | | 0
Interpretable Two-level Boolean Rule Learning for Classification | | 0
Interpreting a Machine Learning Model for Detecting Gravitational Waves | | 0
Automation for Interpretable Machine Learning Through a Comparison of Loss Functions to Regularisers | | 0
SkinCon: A skin disease dataset densely annotated by domain experts for fine-grained model debugging and analysis | | 0
Adversarial Attacks and Defenses: An Interpretation Perspective | | 0
Interpreting Neural Ranking Models using Grad-CAM | | 0
Exploring Interpretability for Predictive Process Analytics | | 0
Investigating Role of Personal Factors in Shaping Responses to Active Shooter Incident using Machine Learning | | 0
Automated Learning of Interpretable Models with Quantified Uncertainty | | 0
Is Grad-CAM Explainable in Medical Images? | | 0
An interpretable machine learning system for colorectal cancer diagnosis from pathology slides | | 0
Advancing Tabular Stroke Modelling Through a Novel Hybrid Architecture and Feature-Selection Synergy | | 0
Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity | | 0
Page 12 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified