SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
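As a concrete illustration of the kind of explanation method this field studies, the sketch below implements permutation feature importance, a common model-agnostic technique: a feature is important if randomly shuffling its column degrades the model's accuracy. The function and toy model here are illustrative, not drawn from any paper listed on this page.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average drop in accuracy when
    column j is randomly shuffled (its signal is destroyed)."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle only column j in place
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy model: the label depends only on feature 0, so shuffling
# feature 0 should hurt accuracy while feature 1 should not matter.
X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]] * 25, dtype=float)
y = X[:, 0].astype(int)
predict = lambda X: (X[:, 0] > 0.5).astype(int)

imp = permutation_importance(predict, X, y)
print(imp)  # importance of feature 0 is large; feature 1 is ~0
```

Shuffling rather than dropping a column keeps the model's input shape unchanged, which is why this technique works with any black-box predictor.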

Papers

Showing 526–537 of 537 papers

Title | Status | Hype
Classification of Skin Cancer Images using Convolutional Neural Networks | – | 0
Segmentation of Cardiac Structures via Successive Subspace Learning with Saab Transform from Cine MRI | – | 0
High-Throughput Computational Screening and Interpretable Machine Learning of Metal-organic Frameworks for Iodine Capture | – | 0
Challenges in Variable Importance Ranking Under Correlation | – | 0
How an Electrical Engineer Became an Artificial Intelligence Researcher, a Multiphase Active Contours Analysis | – | 0
Causal rule ensemble approach for multi-arm data | – | 0
Causality Learning: A New Perspective for Interpretable Machine Learning | – | 0
How to Learn from Risk: Explicit Risk-Utility Reinforcement Learning for Efficient and Safe Driving Strategies | – | 0
Selecting Interpretability Techniques for Healthcare Machine Learning models | – | 0
Understanding molecular ratios in the carbon and oxygen poor outer Milky Way with interpretable machine learning | – | 0
Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures | – | 0
Self-service Data Classification Using Interactive Visualization and Interpretable Machine Learning | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | – | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | – | Unverified