SOTAVerified

Adversarial attacks hidden in plain sight

2019-02-25 · Code Available

Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer


Abstract

Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. Several defensive approaches increase robustness against adversarial attacks, demanding attacks of greater magnitude, which lead to visible artifacts. By considering human visual perception, we compose a technique that allows us to hide such adversarial attacks in regions of high complexity, such that they are imperceptible even to an astute observer. We carry out a user study on classifying adversarially modified images to validate the perceptual quality of our approach and find significant evidence for its concealment with regard to human visual perception.
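The core idea of concealing a perturbation in high-complexity regions can be sketched as follows. This is a minimal illustration and not the authors' actual method: it assumes local standard deviation as a crude proxy for visual complexity, and the function names (`local_complexity`, `hide_perturbation`) are hypothetical.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_complexity(image, window=5):
    # Local standard deviation as a simple stand-in for visual complexity
    # (assumption: textured regions mask perturbations better than flat ones).
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    windows = sliding_window_view(padded, (window, window))
    return windows.std(axis=(-2, -1))

def hide_perturbation(image, perturbation, window=5):
    # Scale the adversarial perturbation by a normalized complexity map,
    # concentrating changes where they are least visible to a human observer.
    c = local_complexity(image, window)
    mask = c / (c.max() + 1e-8)
    return np.clip(image + mask * perturbation, 0.0, 1.0)
```

In practice, the masked perturbation would be folded into the attack's optimization loop rather than applied after the fact, so the attack remains effective under the complexity constraint.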
