SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
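To make the idea of explaining an individual prediction concrete, here is a minimal sketch using a hypothetical linear model (the weights, feature names, and `explain` helper are illustrative, not taken from any paper listed below). For a linear model, a feature's contribution to a single prediction is just its weight times its value, which is the simplest form of a local explanation.

```python
# Hypothetical linear model: weights and bias are assumed for illustration.
weights = {"age": 0.8, "income": -0.3, "tenure": 0.5}
bias = 0.1

def predict(x):
    # Standard linear prediction: bias plus weighted sum of features.
    return bias + sum(weights[f] * v for f, v in x.items())

def explain(x):
    # Local explanation: each feature's additive contribution
    # to this one prediction (weight * feature value).
    return {f: weights[f] * v for f, v in x.items()}

x = {"age": 2.0, "income": 1.0, "tenure": 3.0}
print(predict(x))   # total prediction (approximately 2.9)
print(explain(x))   # per-feature contributions
```

More complex models require approximation methods (e.g. surrogate models or perturbation-based attribution) to recover per-feature contributions, which is where much of the research listed below comes in.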

Papers

Showing 211–220 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Can "consciousness" be observed from large language model (LLM) internal states? Dissecting LLM representations obtained from Theory of Mind test with Integrated Information Theory and Span Representation analysis |  | 0 |
| Interpretability of machine learning based prediction models in healthcare |  | 0 |
| Explainable AI Enabled Inspection of Business Process Prediction Models |  | 0 |
| High-Throughput Computational Screening and Interpretable Machine Learning of Metal-organic Frameworks for Iodine Capture |  | 0 |
| Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence |  | 0 |
| How an Electrical Engineer Became an Artificial Intelligence Researcher, a Multiphase Active Contours Analysis |  | 0 |
| A hybrid machine learning framework for analyzing human decision making through learning preferences |  | 0 |
| Expert Study on Interpretable Machine Learning Models with Missing Data |  | 0 |
| A comprehensive interpretable machine learning framework for Mild Cognitive Impairment and Alzheimer's disease diagnosis |  | 0 |
| Interpretability and Explainability: A Machine Learning Zoo Mini-tour |  | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 |  | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 |  | Unverified |