DiffLIME: Enhancing Explainability with a Diffusion-Based LIME Algorithm for Fault Diagnosis
David Solis-Martin, Juan Galan-Paez, Joaquin Borrego-Diaz
Code: github.com/dasolma/diffLIME
Abstract
The aim of predictive maintenance within the field of Prognostics and Health Management (PHM) is to identify and anticipate potential issues in equipment before they become serious. Deep learning models, such as deep convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and transformers, have been widely adopted for this task, achieving significant success. However, these models are often considered “black boxes” due to their opaque decision-making processes, making it challenging to explain their outputs to stakeholders, such as industrial equipment experts. The complexity and large number of parameters in these models further complicate the understanding of their predictions. This paper presents a novel explainable AI algorithm that extends the well-known Local Interpretable Model-agnostic Explanations (LIME). Our approach utilizes a conditioned probabilistic diffusion model to generate altered samples in the neighborhood of the source sample. We validate our method using various rotating machinery diagnosis datasets. Additionally, we compare our method against LIME, employing a set of metrics to quantify desirable properties of any explainable AI approach. The results highlight that DiffLIME consistently outperforms LIME in terms of coherence and stability while maintaining comparable performance in the selectivity metric. Moreover, the ability of DiffLIME to incorporate domain-specific meta-attributes, such as frequency components and signal envelopes, significantly enhances its explainability in the context of fault diagnosis. This approach provides more precise and meaningful insights into the predictions made by the model.
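The core idea described above, replacing LIME's synthetic perturbations with neighbors drawn from a generative model before fitting a weighted linear surrogate, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `blackbox` is a toy classifier, and `generate_neighbors` is a smooth-noise placeholder standing in for the conditioned diffusion model, which would instead produce realistic in-distribution signals near the source sample.

```python
import numpy as np

def blackbox(samples):
    # Stand-in for a trained fault classifier: fault probability rises
    # with signal energy in the first half of the window (illustrative).
    return 1.0 / (1.0 + np.exp(-(samples[:, :32] ** 2).mean(axis=1) * 10 + 2))

def generate_neighbors(x, n, rng):
    # Placeholder for the diffusion-based sampler: smoothed Gaussian
    # perturbations around x. DiffLIME conditions a diffusion model on x
    # to draw realistic neighboring signals instead.
    noise = rng.normal(0.0, 0.3, size=(n, x.size))
    kernel = np.ones(5) / 5.0
    noise = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, noise)
    return x[None, :] + noise

def lime_explain(x, n_samples=500, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = generate_neighbors(x, n_samples, rng)   # neighborhood of x
    y = blackbox(Z)                             # black-box outputs to imitate
    # Proximity kernel: closer neighbors weigh more in the surrogate fit.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))
    # Weighted least squares for the local linear surrogate (with bias).
    sw = np.sqrt(w)[:, None]
    A = np.hstack([Z, np.ones((n_samples, 1))]) * sw
    coef, *_ = np.linalg.lstsq(A, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-timestep importance (bias term dropped)

x = np.zeros(64)
x[:32] = 0.5  # fault signature concentrated in the first half
importance = lime_explain(x)
```

With this toy classifier, the surrogate coefficients concentrate on the first half of the window, where the (hypothetical) fault signature lives; in DiffLIME the same coefficients would be aggregated over domain meta-attributes such as frequency bands or the signal envelope.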