SOTAVerified

H-NeXt: The next step towards roto-translation invariant networks

2023-11-02 · Code Available

Tomas Karella, Filip Sroubek, Jan Flusser, Jan Blazek, Vasek Kosik


Abstract

The widespread popularity of equivariant networks underscores the significance of parameter-efficient models and effective use of training data. At a time when robustness to unseen deformations is becoming increasingly important, we present H-NeXt, which bridges the gap between equivariance and invariance. H-NeXt is a parameter-efficient roto-translation invariant network that is trained without a single augmented image in the training set. Our network comprises three components: an equivariant backbone for learning roto-translation independent features, an invariant pooling layer for discarding roto-translation information, and a classification layer. H-NeXt outperforms the state of the art in classification on unaugmented training sets and augmented test sets of MNIST and CIFAR-10.
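The three-component pipeline can be illustrated with a minimal sketch. Note the hedging: H-NeXt obtains equivariance by construction in its backbone (the paper's harmonic approach), whereas the toy code below substitutes explicit pooling over the rotation orbit and a global spatial max to discard roto-translation information; the function names (`backbone`, `invariant_pool`) are hypothetical, not the authors' API.

```python
import numpy as np

def backbone(img):
    # Hypothetical stand-in for the equivariant backbone: any feature
    # map applied identically at every pixel (here, a pointwise square).
    return img.astype(float) ** 2

def invariant_pool(img):
    # Toy invariant pooling: evaluate features on the four 90-degree
    # rotations of the input (the rotation orbit) and take a global
    # spatial max, discarding both rotation and translation information.
    feats = [backbone(np.rot90(img, k)) for k in range(4)]
    return max(f.max() for f in feats)

# A rotated input yields the same pooled descriptor, so a classifier
# on top of it is roto-translation invariant without any augmentation.
img = np.arange(16.0).reshape(4, 4)
assert invariant_pool(img) == invariant_pool(np.rot90(img, 1))
```

The pooled scalar would feed the final classification layer; in the real network the backbone outputs rich equivariant feature maps rather than a single value, but the invariance-via-pooling principle is the same.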
