SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning consists of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 461-470 of 537 papers

Title | Status | Hype
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning | | 0
Decoding Urban-health Nexus: Interpretable Machine Learning Illuminates Cancer Prevalence based on Intertwined City Features | | 0
ExMo: Explainable AI Model using Inverse Frequency Decision Rules | | 0
Expanding Mars Climate Modeling: Interpretable Machine Learning for Modeling MSL Relative Humidity | | 0
Expert Study on Interpretable Machine Learning Models with Missing Data | | 0
Achieving interpretable machine learning by functional decomposition of black-box models into explainable predictor effects | | 0
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence | | 0
Explainable AI Enabled Inspection of Business Process Prediction Models | | 0
Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100 | | 0
Explainable AI using expressive Boolean formulas | | 0
Page 47 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | | Unverified