
An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder

2020-09-17

Liang Liang, Linhai Ma, Linchen Qian, Jiasong Chen

Abstract

Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm because of dimensionality reduction in the DNN models. We also show that Glow likelihood-based OOD detection is breakable as well.
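
The abstract's core argument is that dimensionality reduction makes the encoder's input-to-feature mapping many-to-one, so an OOD image can in principle be optimized until its latent code matches that of an in-distribution sample, defeating any detector that only looks at the encoder output. A minimal PyTorch sketch of that idea follows; the `ood_attack` helper, the Adam/MSE setup, the step count, and the [0, 1] pixel range are illustrative assumptions, not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def ood_attack(encoder, x_ood, x_in, steps=500, lr=0.01):
    """Sketch: optimize an OOD image so its latent code matches an
    in-distribution sample's code (hypothetical setup, not the paper's
    exact procedure)."""
    encoder.eval()
    with torch.no_grad():
        z_target = encoder(x_in)          # latent code of the clean, in-distribution sample
    x_adv = x_ood.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(encoder(x_adv), z_target)  # pull the OOD latent code toward the target
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)        # keep the image in a valid pixel range
    return x_adv.detach()
```

If such an optimized OOD image reaches (near) zero loss, the encoder output is indistinguishable from that of the in-distribution sample, which is why a detector built on top of the encoder's features cannot flag it.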
