DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks
Patrik Velčický, Jakub Breier, Mladen Kovačević, Xiaolu Hou
Abstract
Fault injection attacks are a potent threat against embedded implementations of neural network models. Several attack vectors have been proposed, such as misclassification, model extraction, and trojan/backdoor planting. Most of these attacks work by flipping bits in the memory where quantized model parameters are stored. In this paper, we introduce an encoding-based protection method against bit-flip attacks on neural networks, called DeepNcode. We experimentally evaluate our proposal on several publicly available models and datasets, using state-of-the-art bit-flip attacks: BFA, T-BFA, and TA-LBF. Our results show an increase in protection margin of up to 7.6× for 4-bit and 12.4× for 8-bit quantized networks. Memory overheads start at 50% of the original network size, while time overheads are negligible. Moreover, DeepNcode does not require retraining and does not change the original accuracy of the model.
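To make the encoding idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual code construction): each 4-bit quantized value is mapped to an 8-bit codeword from a codebook with pairwise Hamming distance at least 3, so any one- or two-bit flip produces a word outside the codebook and can be detected at inference time. The codebook, widths, and greedy construction here are illustrative assumptions only.

```python
from itertools import count

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

def build_codebook(n_values: int = 16, width: int = 8, min_dist: int = 3) -> list[int]:
    """Greedily pick codewords of the given bit width whose pairwise
    Hamming distance is >= min_dist (a simple lexicode construction).
    With distance >= 3, flipping 1 or 2 bits of any codeword can never
    land on another codeword, so such faults are detectable."""
    codebook: list[int] = []
    for cand in range(1 << width):
        if all(hamming(cand, cw) >= min_dist for cw in codebook):
            codebook.append(cand)
            if len(codebook) == n_values:
                return codebook
    raise ValueError("parameters too tight: not enough codewords")

# Map each of the 16 possible 4-bit weight values to a codeword.
codebook = build_codebook()
encode = {value: cw for value, cw in enumerate(codebook)}
valid = set(codebook)

# Encode a quantized weight, simulate an attacker's bit flip, detect it.
weight = 5
stored = encode[weight]
faulty = stored ^ (1 << 3)          # bit-flip fault on bit 3
flip_detected = faulty not in valid  # invalid codeword => fault detected
```

Note the trade-off this sketch illustrates: mapping 4-bit values to 8-bit codewords doubles storage, while a check against the valid-codeword set adds only a table lookup per weight, which matches the paper's framing of memory overhead with negligible time overhead.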