SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
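As an illustration of explaining a model's predictions (not tied to any specific paper listed below), permutation feature importance is a simple model-agnostic technique: shuffle one feature at a time and measure how much the model's score drops. The sketch below is a minimal NumPy-only version with a hypothetical toy model; the function and variable names are illustrative, not from any library.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's importance by shuffling that column
    and measuring the resulting drop in the model's score."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy setup: y depends only on feature 0, so shuffling feature 1
# should leave the score unchanged.
X = np.random.default_rng(1).normal(size=(200, 2))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]  # hypothetical "trained" model
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - np.mean(y)) ** 2)
imp = permutation_importance(model, X, y, r2)
```

Here `imp[0]` comes out large (shuffling the only informative feature destroys the fit) while `imp[1]` is zero, which is exactly the kind of per-feature explanation interpretability methods aim to provide at scale.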

Papers

Showing 151–160 of 537 papers

- Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning (Hype: 0)
- Neural Stochastic Differential Equations for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions (Hype: 0)
- Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs (Hype: 0)
- Causal Entropy and Information Gain for Measuring Causal Control (Hype: 0)
- Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors (Hype: 0)
- Measuring, Interpreting, and Improving Fairness of Algorithms using Causal Inference and Randomized Experiments (Hype: 0)
- Expanding Mars Climate Modeling: Interpretable Machine Learning for Modeling MSL Relative Humidity (Hype: 0)
- Development and validation of an interpretable machine learning-based calculator for predicting 5-year weight trajectories after bariatric surgery: a multinational retrospective cohort SOPHIA study (Hype: 0)
- Structural Node Embeddings with Homomorphism Counts (Hype: 0)
- Hyperspectral Blind Unmixing using a Double Deep Image Prior (Code available; Hype: 0)
Page 16 of 54

Benchmark Results

#  Model       Metric          Claimed  Verified  Status
1  Q-SENN      Top 1 Accuracy  85.9     -         Unverified
2  SLDD-Model  Top 1 Accuracy  85.7     -         Unverified