
Saliency Maps are Ambiguous: Analysis of Logical Relations on First and Second Order Attributions

2025-01-23

Leonid Schwenke, Martin Atzmueller


Abstract

Recent work has uncovered potential flaws in attribution- or heatmap-based saliency methods. A typical flaw is confirmation bias, where attribution scores are compared against human expectations. Since measuring the quality of saliency methods is hard due to the missing ground truth of the model's reasoning, finding general limitations is also difficult. This is further complicated because masking-based evaluation on complex data can easily introduce a bias, as most models cannot fully ignore masked inputs. In this work, we extend our previous analysis on the logical dataset framework ANDOR, in which we showed that all analysed saliency methods fail to capture all information needed for classification across all possible scenarios. Specifically, this paper extends our previous work with analyses on additional datasets, in order to better understand the scenarios in which saliency methods fail. Furthermore, we apply the Global Coherence Representation as an additional evaluation method, in order to enable actual input omission.
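The ambiguity the abstract describes can be illustrated with a minimal occlusion sketch. This is a hypothetical toy example, not the paper's method or the ANDOR framework: a hand-written logical "model" and a first-order masking attribution, showing how redundant (OR-linked) evidence can receive zero attribution.

```python
# Hypothetical sketch: occlusion-based first-order attribution on a toy
# logical function (x0 AND x1) OR x2. Illustrative only; not the paper's code.

def model(x):
    # Toy "model": returns 1.0 if (x0 AND x1) OR x2 holds, else 0.0.
    return float((x[0] and x[1]) or x[2])

def occlusion_attribution(x, baseline=0):
    # First-order attribution: mask one input at a time with the baseline
    # value and record the resulting change in the model output.
    out = model(x)
    scores = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline
        scores.append(out - model(masked))
    return scores

# Both disjuncts are satisfied: masking any single input changes nothing,
# so every first-order attribution is 0.0 although all inputs carry evidence.
print(occlusion_attribution([1, 1, 1]))  # [0.0, 0.0, 0.0]

# Only the AND branch is active: x0 and x1 get full credit, x2 gets none.
print(occlusion_attribution([1, 1, 0]))  # [1.0, 1.0, 0.0]
```

Note that masking here also substitutes a baseline value rather than truly omitting the input, which is exactly the kind of evaluation bias the abstract points to.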
