SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on methods that explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
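As an illustration of the kind of post-hoc explanation method this task covers, here is a minimal permutation-importance sketch. The dataset and the "black box" model are illustrative assumptions (a toy rule standing in for any trained model); the technique itself is standard: shuffle one feature's column and measure how much accuracy drops.

```python
import random

# Toy dataset: the label depends only on feature 0; feature 1 is noise.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def model(x):
    # Stand-in "black box": thresholds feature 0.
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=10):
    """Mean drop in accuracy when one feature's column is shuffled."""
    base = accuracy(X, y)
    col = [x[feature] for x in X]
    drops = []
    for _ in range(n_repeats):
        shuffled = col[:]
        random.shuffle(shuffled)
        X_perm = [x[:feature] + [v] + x[feature + 1:]
                  for x, v in zip(X, shuffled)]
        drops.append(base - accuracy(X_perm, y))
    return sum(drops) / n_repeats

print(permutation_importance(X, y, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(X, y, 1))  # near zero: feature 1 is ignored
```

Because the model ignores feature 1, shuffling it leaves accuracy unchanged, while shuffling feature 0 roughly halves it; the importance scores expose which inputs the model actually relies on without inspecting its internals.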

Papers

Showing 101–110 of 537 papers

Title | Status | Hype
On the definition and importance of interpretability in scientific machine learning | | 0
Enhanced Photonic Chip Design via Interpretable Machine Learning Techniques | | 0
Understanding molecular ratios in the carbon and oxygen poor outer Milky Way with interpretable machine learning | | 0
Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry | Code | 0
Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users | | 0
Attention Mechanisms in Dynamical Systems: A Case Study with Predator-Prey Models | | 0
Towards Probabilistic Dynamic Security Assessment and Enhancement of Large Power Systems | | 0
NFISiS: New Perspectives on Fuzzy Inference Systems for Renewable Energy Forecasting | Code | 0
Interpretable machine learning-guided design of Fe-based soft magnetic alloys | | 0
Causal rule ensemble approach for multi-arm data | | 0
Page 11 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified