
Dynamic Interpretability for Model Comparison via Decision Rules

2023-09-29

Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala


Abstract

Explainable AI (XAI) methods have mostly been designed to investigate and shed light on a single machine learning model; they are not built to effectively capture and explain differences between multiple models. This paper addresses the challenge of understanding and explaining differences between machine learning models, which is crucial for model selection, monitoring, and lifecycle management in real-world applications. We propose DeltaXplainer, a model-agnostic method for generating rule-based explanations that describe the differences between two binary classifiers. To assess the effectiveness of DeltaXplainer, we conduct experiments on synthetic and real-world datasets, covering various model comparison scenarios involving different types of concept drift.
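The abstract does not spell out the algorithm, but the core idea of a rule-based difference explanation can be sketched as: label each sample by whether the two classifiers disagree on it, then fit an interpretable surrogate (here a shallow decision tree) to that disagreement signal and read off its rules. The function name `fit_delta_rules` and all modeling choices below are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a rule-based "difference model": fit a shallow
# decision tree on the regions where two binary classifiers disagree.
# fit_delta_rules and the surrogate choice are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text


def fit_delta_rules(model_a, model_b, X, max_depth=3):
    """Fit an interpretable tree predicting where model_a and model_b disagree."""
    disagree = (model_a.predict(X) != model_b.predict(X)).astype(int)
    surrogate = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    surrogate.fit(X, disagree)
    return surrogate


# Toy comparison: a linear model vs. a random forest on the same data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model_a = LogisticRegression().fit(X, y)
model_b = RandomForestClassifier(random_state=0).fit(X, y)

rules = fit_delta_rules(model_a, model_b, X)
print(export_text(rules))  # human-readable rules describing disagreement regions
```

The shallow depth trades fidelity for readability: each root-to-leaf path of the surrogate is one rule describing a region of the input space where the two models behave differently.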
