
Towards Best Practice in Explaining Neural Network Decisions with LRP

2019-10-22 · Code Available

Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin



Abstract

Within the last decade, neural-network-based predictors have demonstrated impressive, at times super-human, capabilities. This performance often comes at the price of an opaque prediction process, and has therefore sparked numerous contributions in the novel field of explainable artificial intelligence (XAI). In this paper, we focus on a popular and widely used XAI method, Layer-wise Relevance Propagation (LRP). Since its initial proposition, LRP has evolved as a method, and a best practice for applying it has tacitly emerged, based, however, on human observation alone. In this paper we investigate, and for the first time quantify, the effect of this current best practice on feedforward neural networks in a visual object detection setting. The results verify that the layer-dependent approach to LRP applied in recent literature better represents the model's reasoning, while also increasing the object localization and class discriminativity of LRP.
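The layer-dependent best practice the abstract refers to combines different LRP decomposition rules depending on layer depth. As an illustrative sketch only (not the authors' implementation; a single NumPy dense layer, with function names and the stabilizer constants chosen here for exposition), the two rule families involved can be written as:

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-9):
    # LRP-eps for a dense layer z = a @ W (bias omitted for brevity):
    # relevance flows back in proportion to each input's contribution,
    # with a small eps stabilizing near-zero pre-activations.
    z = a @ W
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
    return a * (W @ s)

def lrp_alphabeta(a, W, R_out, alpha=2.0, beta=1.0):
    # LRP-alpha-beta (with alpha - beta = 1): positive and negative
    # contributions are redistributed separately. Setting alpha=1,
    # beta=0 recovers the z+ rule.
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    zp = a @ Wp + 1e-12   # positive part of the pre-activation
    zn = a @ Wn - 1e-12   # negative part
    return a * (alpha * (Wp @ (R_out / zp)) - beta * (Wn @ (R_out / zn)))
```

In the composite (LRP_CMP) setting described in the paper, an eps-type rule is typically assigned to the upper fully-connected layers and an alpha-beta/z+-type rule to the lower convolutional layers; both rules conserve relevance, so the total relevance entering a layer equals the total leaving it (up to the stabilizer).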

Benchmark Results

| Dataset         | Model      | Metric       | Claimed | Verified | Status     |
|-----------------|------------|--------------|---------|----------|------------|
| PASCAL VOC 2012 | LRPCMP:a2+ | MAP          | 42.1    | —        | Unverified |
| PASCAL VOC 2012 | LRPCMP:a1+ | MAP          | 34.66   | —        | Unverified |
| SIXray          | LRPz       | 1 in 10 R@5  | 0.01    | —        | Unverified |

Reproductions