
A Fresh Look at Sanity Checks for Saliency Maps

2024-05-03

Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina Höhne

Code Available

Abstract

The Model Parameter Randomisation Test (MPRT) is highly recognised in the eXplainable Artificial Intelligence (XAI) community due to its fundamental evaluative criterion: explanations should be sensitive to the parameters of the model they seek to explain. However, recent studies have raised several methodological concerns regarding the empirical interpretation of MPRT. In response, we propose two modifications to the original test: Smooth MPRT and Efficient MPRT. The former reduces the impact of noise on evaluation outcomes via sampling, while the latter avoids the need for biased similarity measurements by re-interpreting the test through the increase in explanation complexity after full model randomisation. Our experiments show that these modifications enhance metric reliability, facilitating a more trustworthy deployment of explanation methods.
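The two variants described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `explain_fn`, the entropy-based complexity measure, and the Pearson-correlation similarity are all assumed stand-ins for whatever attribution method and measures one actually uses.

```python
import numpy as np

def entropy_complexity(attribution, eps=1e-12):
    """Shannon entropy of the normalised absolute attribution map
    (one possible complexity measure; the paper may use another)."""
    p = np.abs(attribution).ravel()
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

def smooth_mprt(explain_fn, model_orig, model_rand, x,
                n_samples=25, sigma=0.1, rng=None):
    """Smooth MPRT (sketch): average explanations over noise-perturbed
    inputs before comparing original vs. fully randomised model,
    here via Pearson correlation of the smoothed attribution maps."""
    rng = np.random.default_rng(rng)

    def smoothed(model):
        maps = [explain_fn(model, x + rng.normal(0.0, sigma, x.shape))
                for _ in range(n_samples)]
        return np.mean(maps, axis=0)

    a = smoothed(model_orig).ravel()
    b = smoothed(model_rand).ravel()
    return float(np.corrcoef(a, b)[0, 1])

def efficient_mprt(explain_fn, model_orig, model_rand, x):
    """Efficient MPRT (sketch): rise in explanation complexity after
    full model randomisation, avoiding pairwise similarity measures."""
    return (entropy_complexity(explain_fn(model_rand, x))
            - entropy_complexity(explain_fn(model_orig, x)))
```

Read as: a high Smooth-MPRT similarity after randomisation flags an explanation method that is insensitive to model parameters, while a clearly positive Efficient-MPRT score (complexity rises once the model is randomised) indicates the desired sensitivity.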
