SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
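As an illustration of the kind of explanation method this topic covers, the sketch below implements permutation feature importance, one simple, model-agnostic way to explain predictions: shuffle one feature at a time and measure how much accuracy drops. The toy model and data are assumptions for demonstration only, not taken from any paper listed here.

```python
# A minimal sketch of permutation feature importance (model-agnostic
# explanation). All names and the toy model below are illustrative.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Accuracy drop when each feature is shuffled (larger = more important)."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature-target relationship
            scores.append(np.mean(predict(Xp) == y))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy setup: the label depends only on feature 0, so only feature 0
# should receive a nonzero importance score.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
imp = permutation_importance(predict, X, y)
```

Because the toy predictor ignores features 1 and 2, their importance is exactly zero, while shuffling feature 0 roughly halves accuracy.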

Papers

Showing 221–230 of 537 papers

| Title | Status | Hype |
|---|---|---|
| Hidden Citations Obscure True Impact in Science | | 0 |
| Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study | Code | 0 |
| ML4EJ: Decoding the Role of Urban Features in Shaping Environmental Injustice Using Interpretable Machine Learning | | 0 |
| Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning | | 0 |
| Neural Stochastic Differential Equations for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions | | 0 |
| Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs | | 0 |
| Causal Entropy and Information Gain for Measuring Causal Control | | 0 |
| Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors | | 0 |
| Measuring, Interpreting, and Improving Fairness of Algorithms using Causal Inference and Randomized Experiments | | 0 |
| Expanding Mars Climate Modeling: Interpretable Machine Learning for Modeling MSL Relative Humidity | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified |