
Faithful Counterfactual Visual Explanations (FCVE)

2025-01-12

Bismillah Khan, Syed Ali Tariq, Tehseen Zia, Muhammad Ahsan, David Windridge


Abstract

Deep learning models in computer vision have made remarkable progress, but their lack of transparency and interpretability remains a challenge. The development of explainable AI can enhance the understanding and performance of these models. However, existing techniques often struggle to produce explanations that non-experts can easily understand, and they cannot accurately reflect a model's intrinsic decision-making process. To address these challenges, we propose a counterfactual explanation (CE) model that balances plausibility and faithfulness. The model generates easy-to-understand visual explanations by making the minimum changes necessary to an image, without directly altering its pixel data. Instead, the proposed method identifies the internal concepts and filters learned by the model and leverages them to produce plausible counterfactual explanations. The resulting explanations reflect the model's internal decision-making process, thus ensuring faithfulness to the model.
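To make the idea concrete, the sketch below illustrates the general principle of a minimal-edit counterfactual over learned internal concepts, not the paper's actual method: a toy classifier scores classes from binary "concept/filter" activations, and we greedily flip the fewest activations needed to change the prediction. All names, the linear model, and the greedy search are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier's final layer:
# class scores are a linear function of binary concept/filter activations.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))  # 2 classes, 8 learned concepts (toy setup)

def predict(concepts):
    """Predicted class for a concept-activation vector."""
    return int(np.argmax(W @ concepts))

def counterfactual(concepts, target):
    """Greedily flip the fewest concept activations so the model
    predicts `target` -- a minimal-edit counterfactual sketch."""
    c = concepts.astype(float).copy()
    flipped = []
    while predict(c) != target:
        cur = predict(c)
        # Score gain (target minus current class) from flipping each
        # concept that has not been flipped yet.
        gains = [((W[target, i] - W[cur, i]) * (1 - 2 * c[i]), i)
                 for i in range(len(c)) if i not in flipped]
        gain, i = max(gains)
        if gain <= 0:
            break  # no single remaining flip moves us toward the target
        c[i] = 1 - c[i]
        flipped.append(i)
    return c, flipped

x = np.array([1, 0, 1, 0, 1, 0, 0, 1], dtype=float)
cf, changed = counterfactual(x, target=1 - predict(x))
print(changed)  # indices of the concepts that had to change
```

The explanation here is the list of flipped concept indices: "the model would have decided differently if these internal concepts had been active/inactive," which changes no pixels and, by construction, refers only to the model's own learned features.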
