SOTAVerified

Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in the field has focused on devising methods that better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
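As a concrete illustration of the kind of post-hoc explanation method catalogued on this page, here is a minimal sketch using scikit-learn's permutation feature importance. The dataset and model choices are illustrative assumptions, not drawn from any of the listed papers:

```python
# Illustrative sketch: permutation feature importance, a common
# post-hoc explanation method. Dataset/model are arbitrary examples.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Model-agnostic methods like this one explain any black-box predictor from the outside; many of the papers below instead build interpretability into the model itself.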

Papers

Showing 491–500 of 537 papers

Title | Status | Hype
--- | --- | ---
Regularizing Black-box Models for Improved Interpretability | Code | 0
ProtoAttend: Attention-Based Prototypical Learning | Code | 0
Modeling Heterogeneity in Mode-Switching Behavior Under a Mobility-on-Demand Transit System: An Interpretable Machine Learning Approach | | 0
Natively Interpretable Machine Learning and Artificial Intelligence: Preliminary Results and Future Directions | | 0
Comparative Document Summarisation via Classification | Code | 0
MLIC: A MaxSAT-Based framework for learning interpretable classification rules | Code | 0
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead | Code | 0
YASENN: Explaining Neural Networks via Partitioning Activation Sequences | | 0
Interpretable Neural Architectures for Attributing an Ad's Performance to its Writing Style | | 0
Towards making NLG a voice for interpretable Machine Learning | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | Q-SENN | Top 1 Accuracy | 85.9 | | Unverified
2 | SLDD-Model | Top 1 Accuracy | 85.7 | | Unverified