SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.
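A common family of such explanation methods is model-agnostic feature importance. As a minimal, self-contained sketch (the toy model, data, and helper names below are illustrative, not drawn from any paper on this page), permutation importance measures how much a model's error grows when one feature's values are shuffled, breaking its relationship with the target:

```python
# Minimal sketch of permutation feature importance, one common
# model-agnostic way to explain a model's predictions.
# The "black box" model and data here are hypothetical illustrations.
import random


def model(x):
    # Toy model: prediction depends strongly on x[0], weakly on x[1].
    return 3.0 * x[0] + 0.1 * x[1]


def mse(predict, X, y):
    # Mean squared error of `predict` over the dataset (X, y).
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X)


def permutation_importance(predict, X, y, feature, seed=0):
    """Error increase when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    baseline = mse(predict, X, y)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, column)]
    return mse(predict, X_perm, y) - baseline


X = [[float(i), float(i % 5)] for i in range(20)]
y = [model(x) for x in X]  # labels generated by the toy model itself

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
print(imp0 > imp1)  # the first feature should matter far more
```

Because the labels come from the model itself, the baseline error is zero and any shuffle can only increase it; the gap between the two importances reflects the model's much heavier reliance on the first feature.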

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 511–520 of 537 papers

Title | Status | Hype
Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings | Code | 0
An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | Code | 0
Leveraging Predictive Equivalence in Decision Trees | Code | 0
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information | Code | 0
LLM-based feature generation from text for interpretable machine learning | Code | 0
Conditional Feature Importance for Mixed Data | Code | 0
Local Explanation of Dimensionality Reduction | Code | 0
Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions | Code | 0
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead | Code | 0
Page 52 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified