SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 521–530 of 537 papers

Title | Status | Hype
Loss-Optimal Classification Trees: A Generalized Framework and the Logistic Case | Code | 0
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype | Code | 0
Bayesian Learning-Based Adaptive Control for Safety Critical Systems | Code | 0
Predicting and Understanding College Student Mental Health with Interpretable Machine Learning | Code | 0
Predicting crash injury severity in smart cities: a novel computational approach with wide and deep learning model | Code | 0
Socio-economic disparities and COVID-19 in the USA | Code | 0
A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework | Code | 0
Comparative Document Summarisation via Classification | Code | 0
Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry | Code | 0
Manipulating and Measuring Model Interpretability | Code | 0
Page 53 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified