Learning Canonical Transformations

2020-11-17

Zachary Dulberg, Jonathan Cohen


Abstract

Humans understand a set of canonical geometric transformations (such as translation and rotation) that support generalization by being untethered to any specific object. We explore inductive biases that help a neural network model learn these transformations in pixel space in a way that can generalize out-of-domain. Specifically, we find that high training set diversity is sufficient for the extrapolation of translation to unseen shapes and scales, and that an iterative training scheme achieves significant extrapolation of rotation in time.
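The transformations studied here act directly on pixel grids rather than on object representations. As a minimal illustration of what translation and rotation look like in pixel space (plain NumPy array operations, not the authors' model), consider:

```python
import numpy as np

# A toy 4x4 "image" with a single bright pixel at row 1, column 1.
img = np.zeros((4, 4))
img[1, 1] = 1.0

# Translation in pixel space: shift every pixel one step down and one right.
translated = np.roll(img, shift=(1, 1), axis=(0, 1))

# Rotation in pixel space: a 90-degree counter-clockwise rotation of the grid.
rotated = np.rot90(img)

print(np.argwhere(translated == 1.0))  # the bright pixel moved to (2, 2)
print(np.argwhere(rotated == 1.0))
```

Because such operations are defined over the grid itself, the same shift or rotation applies identically to any shape drawn on it, which is the sense in which these transformations are "untethered to any specific object".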
