
Are models trained on temporally-continuous data streams more adversarially robust?

2021-10-12 · NeurIPS Workshop SVRHM 2021

Nathan Kong, Anthony Norcia


Abstract

Task-optimized convolutional neural networks are the most quantitatively accurate models of the primate visual system. Unlike humans, however, these models can easily be fooled by modifying their inputs with human-imperceptible image perturbations, resulting in poor adversarial robustness. Prior work showed that modifying a model's training objective or its architecture can improve its adversarial robustness. Another ingredient in building computational models of sensory cortex is the training dataset and, to our knowledge, its effect on a model's adversarial robustness has not been investigated. Motivated by observations that chicks develop more invariant visual representations with more temporally-continuous visual experience, here we evaluate a model's adversarial robustness when it is trained on a more naturalistic dataset: a longitudinal video dataset collected from the perspective of infants (SAYCam; Sullivan et al., 2020). By evaluating the adversarial robustness of models on 26-way classification of a set of annotated video frames from this dataset, we find that models that have been pre-trained on SAYCam video frames are more robust than those that have been pre-trained on ImageNet. Our results suggest that, to build adversarially robust models, additional effort should be made in curating datasets that more closely resemble natural image sequences and the visual experience infants receive.
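The "human-imperceptible image perturbations" mentioned above are typically generated with gradient-based attacks. The paper does not specify its attack here, so as a generic illustration only, the sketch below implements one-step FGSM (fast gradient sign method) against a toy linear softmax classifier in NumPy: the input is nudged in the direction of the sign of the loss gradient, which can flip the prediction even though the change to the input is small. All names and the toy weights are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a linear softmax classifier.

    For cross-entropy loss, the gradient with respect to the input x
    is W^T (p - onehot(y)); the attack adds eps * sign of that gradient.
    """
    p = softmax(w @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = w.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

# Toy 2-class example: x is weakly classified as class 0, and a small
# eps-bounded perturbation flips the prediction to class 1.
w = np.array([[1.0, -1.0], [-1.0, 1.0]])
b = np.zeros(2)
x = np.array([0.1, 0.0])
y = 0
x_adv = fgsm_perturb(x, w, b, y, eps=0.2)
pred_clean = int(np.argmax(w @ x + b))   # 0
pred_adv = int(np.argmax(w @ x_adv + b)) # 1
```

A model is considered more adversarially robust when larger values of `eps` are needed before its predictions flip; this is the kind of evaluation the abstract applies to SAYCam- versus ImageNet-pre-trained models.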
