SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
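One widely used family of such explanation methods is post-hoc feature attribution. As a hedged illustration (not the method of any specific paper listed below), the sketch here implements permutation feature importance against a toy black-box model: shuffle one feature's values and measure how much the model's error grows. The model, feature layout, and data are all invented for the example.

```python
import random

def predict(x):
    # Toy "black-box" model: depends strongly on x[0], weakly on x[1].
    return 3.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    # Mean squared error of the model on dataset (X, y).
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Increase in error when one feature's column is shuffled.

    A large increase means the model relies heavily on that feature.
    """
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)  # break the feature-target association
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - mse(model, X, y)

# Synthetic data; labels come from the model itself, so baseline error is 0.
rng = random.Random(42)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [predict(x) for x in X]

imp0 = permutation_importance(predict, X, y, feature=0)
imp1 = permutation_importance(predict, X, y, feature=1)
```

Because the toy model weights feature 0 thirty times more heavily than feature 1, `imp0` comes out far larger than `imp1`, which is exactly the kind of model-level insight these explanation methods aim to surface.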

Papers

Showing 341–350 of 537 papers

Title | Status | Hype
Leveraging advances in machine learning for the robust classification and interpretation of networks | - | 0
Who will dropout from university? Academic risk prediction based on interpretable machine learning | - | 0
XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change | - | 0
Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix) | - | 0
Greenhouse gases emissions: estimating corporate non-reported emissions using interpretable machine learning | - | 0
Hidden Citations Obscure True Impact in Science | - | 0
High-Throughput Computational Screening and Interpretable Machine Learning of Metal-organic Frameworks for Iodine Capture | - | 0
How an Electrical Engineer Became an Artificial Intelligence Researcher, a Multiphase Active Contours Analysis | - | 0
How to Learn from Risk: Explicit Risk-Utility Reinforcement Learning for Efficient and Safe Driving Strategies | - | 0
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model | - | 0
Page 35 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | - | Unverified