SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to enable oversight and understanding of machine-learned decisions. Much of the work in the field focuses on devising methods that better explain the predictions of machine learning models.
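One common form of explanation is attributing a prediction to per-feature contributions; for a linear model this decomposition is exact, since the output is just the sum of weighted features plus a bias. A minimal sketch (all feature names, weights, and inputs below are hypothetical, chosen only for illustration):

```python
def explain_linear(weights, bias, x, names):
    """Decompose a linear model's prediction into per-feature
    contributions w_i * x_i, plus the bias term."""
    contributions = {name: w * xi for name, w, xi in zip(names, weights, x)}
    contributions["bias"] = bias
    return contributions

# Hypothetical model and input.
weights = [0.8, -1.2, 0.5]
bias = 0.1
x = [2.0, 1.0, 3.0]
names = ["age", "dose", "weight"]

expl = explain_linear(weights, bias, x, names)
prediction = sum(expl.values())
# The contributions sum back to the model output:
# 0.8*2.0 - 1.2*1.0 + 0.5*3.0 + 0.1 = 2.0
```

Much of the literature listed below generalizes this idea, either by building models whose predictions decompose this way by construction (additive and rule-based models) or by approximating such a decomposition post hoc for black-box models.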

Source: Assessing the Local Interpretability of Machine Learning Models

Papers

Showing 261-270 of 537 papers

Title | Status | Hype
Additive Higher-Order Factorization Machines | - | 0
Neural Basis Models for Interpretability | Code | 1
Scalable Interpretability via Polynomials | Code | 1
Towards Better Understanding Attribution Methods | Code | 1
ExMo: Explainable AI Model using Inverse Frequency Decision Rules | - | 0
Pest presence prediction using interpretable machine learning | - | 0
Efficient Learning of Interpretable Classification Rules | - | 0
SIBILA: A novel interpretable ensemble of general-purpose machine learning models applied to medical contexts | Code | 0
Interpretable Machine Learning for Self-Service High-Risk Decision-Making | - | 0
Insights into the origin of halo mass profiles from machine learning | - | 0
Page 27 of 54

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Q-SENN | Top-1 Accuracy | 85.9 | - | Unverified
2 | SLDD-Model | Top-1 Accuracy | 85.7 | - | Unverified