
Neural Representations Reveal Distinct Modes of Class Fitting in Residual Convolutional Networks

2022-12-01

Michał Jamroż, Marcin Kurdziel

Abstract

We leverage probabilistic models of neural representations to investigate how residual networks fit classes. To this end, we estimate class-conditional density models for representations learned by deep ResNets. We then use these models to characterize distributions of representations across learned classes. Surprisingly, we find that classes in the investigated models are not fitted in a uniform way. On the contrary: we uncover two groups of classes that are fitted with markedly different distributions of representations. These distinct modes of class-fitting are evident only in the deeper layers of the investigated models, indicating that they are not related to low-level image features. We show that the uncovered structure in neural representations correlates with memorization of training examples and adversarial robustness. Finally, we compare class-conditional distributions of neural representations between memorized and typical examples. This allows us to uncover where in the network structure class labels arise for memorized and standard inputs.
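To make the core idea concrete, here is a minimal sketch of fitting class-conditional density models to feature vectors and comparing per-class likelihoods. This is an illustration only: the features below are synthetic, and the diagonal-Gaussian density is a stand-in assumption, not necessarily the model family used in the paper.

```python
# Hedged sketch: one class-conditional density model per class, fitted to
# feature vectors that stand in for ResNet representations. Synthetic data;
# the diagonal-Gaussian family is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def fit_class_densities(features, labels):
    """Fit one diagonal-covariance Gaussian per class (mean, variance)."""
    models = {}
    for c in np.unique(labels):
        x = features[labels == c]
        models[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return models

def log_likelihood(x, model):
    """Log-density of each row of x under a diagonal Gaussian model."""
    mu, var = model
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var).sum(axis=1)

# Two synthetic "classes" with different spreads, mimicking the paper's
# observation that classes can be fitted with markedly different
# distributions of representations.
tight = rng.normal(0.0, 0.5, size=(200, 8))
broad = rng.normal(3.0, 2.0, size=(200, 8))
features = np.vstack([tight, broad])
labels = np.array([0] * 200 + [1] * 200)

models = fit_class_densities(features, labels)
ll_tight = log_likelihood(tight, models[0]).mean()
ll_broad = log_likelihood(broad, models[1]).mean()
# The tightly clustered class attains a higher average log-density,
# the kind of per-class statistic one can compare across classes.
```

In the paper's setting, `features` would instead be activations extracted from a given ResNet layer, and the comparison would be run layer by layer to see where the two modes of class fitting emerge.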
