SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field takes the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 321–330 of 537 papers

| Title | Status | Hype |
| --- | --- | --- |
| Who will dropout from university? Academic risk prediction based on interpretable machine learning | | 0 |
| SynHING: Synthetic Heterogeneous Information Network Generation for Graph Learning and Explanation | | 0 |
| Using an interpretable Machine Learning approach to study the drivers of International Migration | | 0 |
| A Scalable Inference Method For Large Dynamic Economic Systems | | 0 |
| Advances in Multiple Instance Learning for Whole Slide Image Analysis: Techniques, Challenges, and Future Directions | | 0 |
| Taming Waves: A Physically-Interpretable Machine Learning Framework for Realizable Control of Wave Dynamics | | 0 |
| Longitudinal Distance: Towards Accountable Instance Attribution | | 0 |
| Techniques for Interpretable Machine Learning | | 0 |
| Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification | | 0 |
| Machine learning and Topological data analysis identify unique features of human papillae in 3D scans | | 0 |
Page 33 of 54

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified |
| 2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified |