SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in this area focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 181–190 of 537 papers

Title | Status | Hype
Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0
How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning | Code | 0
iNNvestigate neural networks! | Code | 0
Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0
Supervised Feature Compression based on Counterfactual Analysis | Code | 0
Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning | Code | 0
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Code | 0
GFN-SR: Symbolic Regression with Generative Flow Networks | Code | 0
MLIC: A MaxSAT-Based framework for learning interpretable classification rules | Code | 0
Harnessing Interpretable Machine Learning for Holistic Inverse Design of Origami | Code | 0
Page 19 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | — | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | — | Unverified