
Demystifying Deep Neural Networks Through Interpretation: A Survey

2020-12-13

Giang Dao, Minwoo Lee


Abstract

Modern deep learning algorithms learn by optimizing an objective metric, such as minimizing a cross-entropy loss on a training dataset. The problem is that a single metric is an incomplete description of real-world tasks: it cannot explain why the algorithm learns what it does. When an error occurs, this lack of interpretability makes it hard to understand and fix the problem. Recently, a body of work has tackled the interpretability problem, providing insights into the behavior and decision process of neural networks. Such work is important for identifying potential bias and for ensuring algorithmic fairness as well as expected performance.
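To make the abstract's point concrete, here is a minimal sketch (not from the paper) of the kind of single scalar objective a classifier minimizes: the mean cross-entropy between true labels and predicted probabilities. The function name and inputs are illustrative assumptions.

```python
import math

def cross_entropy(labels, probs, eps=1e-12):
    """Mean cross-entropy loss (illustrative sketch).

    labels: list of true class indices, e.g. [0, 1]
    probs:  list of predicted probability vectors (each sums to 1)
    """
    total = 0.0
    for y, p in zip(labels, probs):
        # For one-hot targets only the true-class term survives.
        total += -math.log(max(p[y], eps))
    return total / len(labels)

# A fairly confident, correct model yields a small loss value,
# yet this single scalar says nothing about *why* the model
# made its predictions -- the gap interpretability work addresses.
loss = cross_entropy([0, 1], [[0.9, 0.1], [0.2, 0.8]])
```

Here `loss` is about 0.164; driving this number down is the entire training signal, which is exactly why a separate line of interpretability research is needed to explain model behavior.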
