SmoothGrad: removing noise by adding noise
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
Code
- github.com/PAIR-code/saliency (official, TensorFlow, ★ 994)
- github.com/shap/shap (TensorFlow, ★ 25,171)
- github.com/slundberg/shap (TensorFlow, ★ 25,160)
- github.com/pytorch/captum (PyTorch, ★ 5,583)
- github.com/shaoshanglqy/shap-shapley (TensorFlow, ★ 10)
- github.com/miaolan-xie/shap (TensorFlow, ★ 0)
- github.com/austinbrown34/shap (TensorFlow, ★ 0)
- github.com/saivarunr/xshap (TensorFlow, ★ 0)
- github.com/idiap/fullgrad-saliency (PyTorch, ★ 0)
- github.com/sicara/tf-explain (TensorFlow, ★ 0)
Abstract
Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.