H-NeXt: The next step towards roto-translation invariant networks
Tomas Karella, Filip Sroubek, Jan Flusser, Jan Blazek, Vasek Kosik
Code: github.com/karellat/h-next (official PyTorch implementation)
Abstract
The widespread popularity of equivariant networks underscores the significance of parameter-efficient models and the effective use of training data. At a time when robustness to unseen deformations is becoming increasingly important, we present H-NeXt, which bridges the gap between equivariance and invariance. H-NeXt is a parameter-efficient roto-translation invariant network that is trained without a single augmented image in the training set. Our network comprises three components: an equivariant backbone for learning roto-translation independent features, an invariant pooling layer for discarding roto-translation information, and a classification layer. H-NeXt outperforms the state of the art in classification on unaugmented training sets and augmented test sets of MNIST and CIFAR-10.
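To give an intuition for the backbone-plus-pooling design described above, the following is a minimal NumPy sketch, not the H-NeXt architecture itself: a toy feature extractor (with hypothetical fixed filters standing in for learned ones) followed by a pooling step that discards orientation by taking the maximum response over the discrete C4 rotation group. The names `backbone` and `invariant_pool` are illustrative assumptions, not identifiers from the paper's code.

```python
import numpy as np

# Hypothetical "learned" filters; in a real network these come from training.
rng = np.random.default_rng(0)
filters = rng.standard_normal((3, 4, 4))

def backbone(img):
    # Toy feature map: correlation of the image with each fixed filter.
    # This on its own is NOT rotation invariant.
    return np.array([(img * f).sum() for f in filters])

def invariant_pool(img):
    # Orientation pooling: evaluate the backbone on all four 90-degree
    # rotations of the input and keep the max response per feature.
    # Rotating the input permutes the rotated copies, so the max is unchanged.
    feats = np.stack([backbone(np.rot90(img, k)) for k in range(4)])
    return feats.max(axis=0)

img = np.arange(16.0).reshape(4, 4)
# The pooled descriptor is identical for the image and its rotation.
print(np.allclose(invariant_pool(img), invariant_pool(np.rot90(img))))
```

H-NeXt achieves invariance to continuous rotations via its equivariant backbone and invariant pooling layer; this sketch only illustrates the general principle of pooling away group information for a discrete group.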