
T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor

2019-04-04 · CVPR 2019

Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos, Maja Pantic



Abstract

Recent findings indicate that over-parametrization, while crucial for successfully training deep neural networks, also introduces large amounts of redundancy. Tensor methods have the potential to efficiently parametrize over-complete representations by leveraging this redundancy. In this paper, we propose to fully parametrize Convolutional Neural Networks (CNNs) with a single high-order, low-rank tensor. Previous works on network tensorization have focused only on parametrizing individual layers (convolutional or fully connected), performing the tensorization layer by layer, separately. In contrast, we propose to jointly capture the full structure of a neural network by parametrizing it with a single high-order tensor, the modes of which represent each of the architectural design parameters of the network (e.g. number of convolutional blocks, depth, number of stacks, input features, etc.). This parametrization allows us to regularize the whole network and to drastically reduce the number of parameters. Our model is end-to-end trainable, and the low-rank structure imposed on the weight tensor acts as an implicit regularization. We study the case of networks with rich structure, namely Fully Convolutional Networks (FCNs), which we propose to parametrize with a single 8th-order tensor. We show that our approach can achieve superior performance with small compression rates, and attain high compression rates with a negligible drop in accuracy for the challenging task of human pose estimation.
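The core idea described in the abstract (and reflected in the "Tucker T-Net" model name below) is to represent the network's full weight tensor in Tucker form: a small core tensor contracted with one factor matrix per mode. The following is a minimal NumPy sketch of that parametrization and the resulting parameter savings; all shapes and ranks here are made-up illustrative values, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical 8th-order weight tensor: each mode stands for one
# architectural parameter (e.g. stacks, blocks, depth, input/output
# features, spatial kernel dims). Sizes below are illustrative only.
full_shape = (2, 2, 3, 4, 64, 64, 3, 3)
ranks      = (2, 2, 3, 3, 16, 16, 3, 3)   # Tucker rank per mode

rng = np.random.default_rng(0)
core = rng.standard_normal(ranks)
factors = [rng.standard_normal((d, r)) for d, r in zip(full_shape, ranks)]

def mode_n_product(t, m, n):
    """Contract mode n of tensor t with factor matrix m of shape (d, r)."""
    t = np.moveaxis(t, n, -1)   # bring mode n to the last axis
    t = t @ m.T                 # (..., r) @ (r, d) -> (..., d)
    return np.moveaxis(t, -1, n)

# Reconstruct the full weight tensor from the Tucker factorization.
w = core
for n, f in enumerate(factors):
    w = mode_n_product(w, f, n)

full_params = int(np.prod(full_shape))
tucker_params = int(np.prod(ranks)) + sum(d * r for d, r in zip(full_shape, ranks))
print(w.shape)                        # matches full_shape
print(full_params, tucker_params)     # Tucker form needs far fewer parameters
```

In training, only the core and factor matrices would be learned, so the chosen ranks directly control the compression rate and act as the implicit regularizer the abstract describes.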

Tasks

Human Pose Estimation

Benchmark Results

Dataset          Model         Metric     Claimed   Verified   Status
MPII Human Pose  Tucker T-Net  PCKh-0.5   87.5      —          Unverified
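The metric in the table, PCKh at threshold 0.5, counts a predicted joint as correct when its distance to the ground truth is at most half the head segment length. A minimal sketch (function name and example values are illustrative, not from the paper):

```python
import numpy as np

def pckh(pred, gt, head_sizes, alpha=0.5):
    """PCKh score in percent.

    pred, gt:     (N, J, 2) predicted / ground-truth joint coordinates
    head_sizes:   (N,) head segment length per image
    alpha:        fraction of head size used as the distance threshold
    """
    dists = np.linalg.norm(pred - gt, axis=-1)        # (N, J) joint errors
    correct = dists <= alpha * head_sizes[:, None]    # threshold per image
    return 100.0 * correct.mean()

# One image, two joints: errors of 1.0 and 6.0 px vs. a 2.0 px threshold.
pred = np.array([[[0.0, 0.0], [10.0, 0.0]]])
gt   = np.array([[[1.0, 0.0], [10.0, 6.0]]])
head = np.array([4.0])
print(pckh(pred, gt, head))   # -> 50.0
```

The claimed 87.5 PCKh-0.5 above would thus mean 87.5% of joints fall within half a head length of their ground-truth positions on MPII.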

Reproductions