Rotation equivariant vector field networks

2016-12-29 · ICCV 2017 · Code Available

Diego Marcos, Michele Volpi, Nikos Komodakis, Devis Tuia

Abstract

In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, rather than treated as any other source of variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing the magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs' rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line with those of networks orders of magnitude larger.
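The oriented-convolution idea from the abstract can be sketched in a few lines: rotate a single filter to several orientations, convolve with each copy, and keep per pixel the magnitude and angle of the strongest response, encoded as a 2D vector field. This is a minimal NumPy/SciPy illustration under assumed choices (number of orientations, nearest-neighbor-free `rotate`, zero-padded `convolve`), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def oriented_conv(image, filt, n_orientations=8):
    """Apply one filter at several orientations; return the response as a
    vector field (u, v): per pixel, the magnitude and angle of the
    highest-scoring orientation, encoded as 2D components."""
    angles = np.linspace(0.0, 360.0, n_orientations, endpoint=False)
    # Response map for each rotated copy of the filter.
    responses = np.stack([
        convolve(image, rotate(filt, a, reshape=False), mode="constant")
        for a in angles
    ])
    best = responses.argmax(axis=0)        # index of the winning orientation
    magnitude = responses.max(axis=0)      # its response value
    theta = np.deg2rad(angles[best])       # its angle, in radians
    return magnitude * np.cos(theta), magnitude * np.sin(theta)
```

Keeping only the maximum over orientations is what makes the representation compact: one filter covers all rotated variants, instead of learning each orientation separately.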

Benchmark Results

| Dataset | Model        | Metric | Claimed | Verified | Status     |
|---------|--------------|--------|---------|----------|------------|
| PCam    | VF-CNN (C12) | AUC    | 0.90    | —        | Unverified |
| PCam    | VF-CNN (C8)  | AUC    | 0.88    | —        | Unverified |
| PCam    | VF-CNN (C4)  | AUC    | 0.87    | —        | Unverified |

Reproductions