SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
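The goal described above, explaining a model's predictions, can be illustrated with a minimal sketch of one common interpretability technique, permutation feature importance: shuffle one feature's values and measure how much the model's error grows. The toy model and data below are illustrative assumptions, not taken from any paper listed on this page.

```python
# Minimal sketch of permutation feature importance (toy model and data
# are assumptions for illustration, not from any listed paper).
import random

def model(x):
    # Toy "learned" model: depends strongly on feature 0,
    # weakly on feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(X, y):
    # Mean squared error of the model on dataset (X, y).
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(X, y, feature, seed=0):
    """Error increase when one feature's values are shuffled."""
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(X_perm, y) - mse(X, y)

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]  # labels generated by the toy model itself

scores = [permutation_importance(X, y, f) for f in range(3)]
# Feature 0 dominates, feature 1 matters a little,
# and feature 2 (unused by the model) scores exactly 0.
```

Because the importance score is just an error difference, an unused feature scores zero and a heavily weighted feature scores high, which is the behavior that makes the technique a useful model-agnostic explanation.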

Papers

Showing 521–530 of 537 papers

Title | Status | Hype
Greenhouse gases emissions: estimating corporate non-reported emissions using interpretable machine learning | | 0
Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines with Applications to Explaining High-Dimensional Data | | 0
A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations | | 0
Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena | | 0
Hidden Citations Obscure True Impact in Science | | 0
Classification of Skin Cancer Images using Convolutional Neural Networks | | 0
Segmentation of Cardiac Structures via Successive Subspace Learning with Saab Transform from Cine MRI | | 0
High-Throughput Computational Screening and Interpretable Machine Learning of Metal-organic Frameworks for Iodine Capture | | 0
Challenges in Variable Importance Ranking Under Correlation | | 0
How an Electrical Engineer Became an Artificial Intelligence Researcher, a Multiphase Active Contours Analysis | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified