SOTAVerified

Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions

2023-05-09

Georg Siedel, Weijia Shao, Silvia Vock, Andrey Morozov


Abstract

Robustness is a fundamental property of machine learning classifiers, required to achieve safety and reliability. In the field of adversarial robustness of image classifiers, robustness is commonly defined as the stability of a model to all input changes within a p-norm distance. In the field of random corruption robustness, however, corruptions modeled after real-world variations are used, while p-norm corruptions are rarely considered. This study investigates the use of random p-norm corruptions to augment the training and test data of image classifiers. We evaluate model robustness against imperceptible random p-norm corruptions and propose a novel robustness metric. We empirically investigate whether robustness transfers across different p-norms and derive conclusions about which p-norm corruptions a model should be trained and evaluated on. We find that augmenting training data with a combination of p-norm corruptions significantly improves corruption robustness, even on top of state-of-the-art data augmentation schemes.
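To make the idea of a random p-norm corruption concrete, here is a minimal sketch of one way to generate such a perturbation: draw a random Gaussian direction and rescale it to a target Lp norm budget. This is an illustrative assumption, not necessarily the sampling scheme used in the paper, and the function name and parameters are hypothetical.

```python
import numpy as np

def random_lp_corruption(image, epsilon, p):
    """Add a random perturbation with Lp norm `epsilon` to an image in [0, 1].

    Sketch only: samples a Gaussian direction and rescales it to the
    target Lp norm, then clips the result back to the valid pixel range.
    """
    delta = np.random.randn(*image.shape)          # random direction
    norm = np.linalg.norm(delta.ravel(), ord=p)    # its current Lp norm
    delta = delta * (epsilon / norm)               # rescale to budget epsilon
    return np.clip(image + delta, 0.0, 1.0)

# Usage: corrupt a toy 8x8 grayscale "image" with an L2 budget of 0.5.
img = np.random.rand(8, 8)
corrupted = random_lp_corruption(img, epsilon=0.5, p=2)
```

For training-data augmentation as studied in the paper, such a corruption would be applied on the fly to each batch, with `p` drawn from a set of norms (e.g. 1, 2, and infinity) to combine several p-norm corruptions.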
