SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
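As an illustration of the kind of explanation method the description refers to, below is a minimal sketch of permutation feature importance, one common model-agnostic technique: shuffle a single feature column and measure how much a model's accuracy drops. This example is not from any paper on this page; the toy model and data are invented for illustration.

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Importance of `feature` = drop in accuracy after shuffling its column."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)  # break the feature's association with the labels
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Toy model: predicts 1 when feature 0 is positive; feature 1 is ignored noise.
model = lambda x: int(x[0] > 0)
X = [[1, 5], [-1, 2], [2, 7], [-2, 1], [3, 0], [-3, 9]]
y = [1, 0, 1, 0, 1, 0]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
```

Shuffling the ignored feature never changes predictions, so its importance is exactly zero, while the decisive feature's importance is non-negative; this asymmetry is what makes the score interpretable.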

Papers

Showing 101–125 of 537 papers

Title | Status | Hype
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts | | 0
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | | 0
Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins | | 0
A Semiparametric Approach to Interpretable Machine Learning | | 0
Advances in Multiple Instance Learning for Whole Slide Image Analysis: Techniques, Challenges, and Future Directions | | 0
A Scalable Inference Method For Large Dynamic Economic Systems | | 0
A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning | | 0
Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100 | | 0
Explainable AI using expressive Boolean formulas | | 0
Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More | | 0
Strategizing University Rank Improvement using Interpretable Machine Learning and Data Visualization | | 0
Data-driven Approach for Static Hedging of Exchange Traded Options | | 0
Data-driven model reconstruction for nonlinear wave dynamics | | 0
Data Model Design for Explainable Machine Learning-based Electricity Applications | | 0
Data Representing Ground-Truth Explanations to Evaluate XAI Methods | | 0
Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning | | 0
Decoding Urban-health Nexus: Interpretable Machine Learning Illuminates Cancer Prevalence based on Intertwined City Features | | 0
Deducing neighborhoods of classes from a fitted model | | 0
Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life | | 0
A Learning Theoretic Perspective on Local Explainability | | 0
Detecting Heterogeneous Treatment Effect with Instrumental Variables | | 0
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach | | 0
A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models | | 0
Expert Study on Interpretable Machine Learning Models with Missing Data | | 0
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | | 0
Page 5 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified