
The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction

2021-01-18

Anh Thai, Stefan Stojanov, Zixuan Huang, Isaac Rehg, James M. Rehg


Abstract

Continual learning has been extensively studied for classification tasks, with methods developed primarily to avoid catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recently seen samples. In this work, we present a set of continual 3D object shape reconstruction tasks, including complete 3D shape reconstruction from different input modalities as well as visible surface (2.5D) reconstruction, which, surprisingly, demonstrate positive knowledge transfer (both backward and forward) when trained with standard SGD alone and without additional heuristics. We provide evidence that continually updated representation learning for single-view 3D shape reconstruction improves performance on both learned and novel categories over time. We present a novel analysis of knowledge transfer ability by examining the output distribution shift across sequential learning tasks. Finally, we show that the robustness of these tasks makes them a potential proxy representation learning task for continual classification. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/rehg-lab/CLRec
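The training setup the abstract describes — tasks arriving sequentially, optimized with plain SGD and no replay buffer, regularization penalty, or other continual-learning heuristics — can be sketched as follows. This is a minimal illustrative sketch, not the authors' released code: the model, data, and task split below are placeholder assumptions.

```python
# Sketch of sequential-task training with plain SGD and no
# continual-learning heuristics (no replay, no regularization term,
# no task-specific heads). Model and data are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder network standing in for a shape-reconstruction model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# Each "task" is simply a new batch of data (e.g. new object categories).
tasks = [(torch.randn(32, 8), torch.randn(32, 1)) for _ in range(3)]

for x, y in tasks:            # tasks arrive one at a time
    for _ in range(20):       # standard SGD on the current task only
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

The point of interest in the paper is that, for the reconstruction tasks studied, this bare sequential loop exhibits positive backward and forward transfer rather than catastrophic forgetting.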
