SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
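As an illustration (not drawn from the source above), one widely used family of model-agnostic explanation methods is permutation feature importance: shuffle one feature's values and measure how much the model's error grows. A minimal pure-Python sketch on a toy linear "model":

```python
import random

# Toy "model": a fixed linear predictor over two features.
# In practice this would be any fitted black-box model.
def predict(x):
    return 3.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    """Mean squared error of model over dataset (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Increase in error after shuffling one feature's column.

    A large increase means the model relies heavily on that feature.
    """
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - mse(model, X, y)

# Synthetic data in which feature 0 drives the target.
rng = random.Random(1)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [predict(x) for x in X]

imp0 = permutation_importance(predict, X, y, feature=0)
imp1 = permutation_importance(predict, X, y, feature=1)
# imp0 should come out far larger than imp1, matching the
# 3.0 vs 0.1 coefficients in the toy predictor.
```

This is only a sketch of one technique; the papers listed below cover a much broader range of approaches, from inherently interpretable rule learners to post-hoc explanation methods.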

Papers

Showing 451–460 of 537 papers

Title | Status | Hype
Efficient Learning of Interpretable Classification Rules | | 0
A Maritime Industry Experience for Vessel Operational Anomaly Detection: Utilizing Deep Learning Augmented with Lightweight Interpretable Models | | 0
Enhanced Infield Agriculture with Interpretable Machine Learning Approaches for Crop Classification | | 0
Enhanced Photonic Chip Design via Interpretable Machine Learning Techniques | | 0
Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition | | 0
Deducing neighborhoods of classes from a fitted model | | 0
Enriched Annotations for Tumor Attribute Classification from Pathology Reports with Limited Labeled Data | | 0
Ensemble Interpretation: A Unified Method for Interpretable Machine Learning | | 0
Dissecting the explanatory power of ESG features on equity returns by sector, capitalization, and year with interpretable machine learning | | 0
Establishing Nationwide Power System Vulnerability Index across US Counties Using Interpretable Machine Learning | | 0
Page 46 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified