SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
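A common example of such an explanation method is permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops. Below is a minimal, self-contained sketch of the idea; the toy `model`, dataset, and helper names are illustrative assumptions, not from any paper listed here.

```python
import random

# Toy stand-in for a trained black-box classifier:
# predicts 1 when feature 0 exceeds a threshold, ignores feature 1.
def model(x):
    return 1 if x[0] > 0.5 else 0

# Tiny dataset: feature 0 is informative, feature 1 is pure noise.
X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.3], [0.1, 0.9]]
y = [1, 1, 0, 0]

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=10, seed=0):
    """Average drop in accuracy when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        drops.append(base - accuracy(Xp, y))
    return sum(drops) / n_repeats

print(permutation_importance(X, y, 0))  # informative feature: typically a positive drop
print(permutation_importance(X, y, 1))  # ignored feature: zero drop
```

Because the toy model never reads feature 1, shuffling it cannot change any prediction, so its importance is exactly zero; the informative feature's importance is non-negative by construction.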

Papers

Showing 101–125 of 537 papers

Title | Status | Hype
On the definition and importance of interpretability in scientific machine learning | | 0
Enhanced Photonic Chip Design via Interpretable Machine Learning Techniques | | 0
Understanding molecular ratios in the carbon and oxygen poor outer Milky Way with interpretable machine learning | | 0
Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry | Code | 0
Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users | | 0
Attention Mechanisms in Dynamical Systems: A Case Study with Predator-Prey Models | | 0
Towards Probabilistic Dynamic Security Assessment and Enhancement of Large Power Systems | | 0
NFISiS: New Perspectives on Fuzzy Inference Systems for Renewable Energy Forecasting | Code | 0
Interpretable machine learning-guided design of Fe-based soft magnetic alloys | | 0
Causal rule ensemble approach for multi-arm data | | 0
A Statistical Evaluation of Indoor LoRaWAN Environment-Aware Propagation for 6G: MLR, ANOVA, and Residual Distribution Analysis | Code | 0
Towards Simple Machine Learning Baselines for GNSS RFI Detection | | 0
Interpretable Machine Learning in Physics: A Review | | 0
Kernel Learning Assisted Synthesis Condition Exploration for Ternary Spinel | Code | 0
Predicting Treatment Response in Body Dysmorphic Disorder with Interpretable Machine Learning | | 0
XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change | | 0
Predicting and Understanding College Student Mental Health with Interpretable Machine Learning | Code | 0
Diagnostic-free onboard battery health assessment | | 0
A Frank System for Co-Evolutionary Hybrid Decision-Making | | 0
Near Optimal Decision Trees in a SPLIT Second | | 0
An Interpretable Machine Learning Approach to Understanding the Relationships between Solar Flares and Source Active Regions | | 0
Investigating Role of Personal Factors in Shaping Responses to Active Shooter Incident using Machine Learning | | 0
Interpretable Machine Learning for Kronecker Coefficients | | 0
Classifying the Stoichiometry of Virus-like Particles with Interpretable Machine Learning | Code | 0
High-Throughput Computational Screening and Interpretable Machine Learning of Metal-organic Frameworks for Iodine Capture | | 0
Page 5 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified