SOTAVerified

Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training

2020-11-30

Shuai Zhao, Liguang Zhou, Wenxiao Wang, Deng Cai, Tin Lun Lam, Yangsheng Xu


Abstract

The width of a neural network matters since increasing the width will necessarily increase the model capacity. However, the performance of a network does not improve linearly with the width and soon gets saturated. In this case, we argue that increasing the number of networks (ensemble) can achieve better accuracy-efficiency trade-offs than purely increasing the width. To demonstrate this, one large network is divided into several small ones with respect to its parameters and regularization components. Each of these small networks has a fraction of the original one's parameters. We then train these small networks together and make them see various views of the same data to increase their diversity. During this co-training process, the networks can also learn from each other. As a result, the small networks can achieve better ensemble performance than the large one with few or no extra parameters or FLOPs, i.e., achieving better accuracy-efficiency trade-offs. The small networks can also achieve faster inference speed than the large one by running concurrently. All of the above shows that the number of networks is a new dimension of model scaling. We validate our argument with 8 different neural architectures on common benchmarks through extensive experiments. The code is available at https://github.com/FreeformRobotics/Divide-and-Co-training.
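The splitting idea in the abstract rests on a simple parameter-count argument: a convolutional layer's parameters grow roughly quadratically with its width, so scaling each of S small networks' width by 1/sqrt(S) keeps the total parameter budget about the same while yielding S ensemble members. The sketch below is illustrative only (not the authors' code; `conv_params` and `split_widths` are hypothetical helpers) and ignores depth-wise details of the actual architectures:

```python
# Illustrative sketch (not the authors' implementation): dividing one wide
# network into S smaller ones at a constant parameter budget.
import math

def conv_params(c_in, c_out, k=3):
    """Parameter count of a k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def split_widths(width, s):
    """Per-network width when splitting into s nets; params scale ~width^2,
    so width / sqrt(s) keeps s * params roughly constant."""
    return max(1, round(width / math.sqrt(s)))

width = 256
full = conv_params(width, width)           # one wide layer
s = 4
w_small = split_widths(width, s)           # width of each small network
ensemble = s * conv_params(w_small, w_small)

print(f"full-width params:           {full}")
print(f"{s} small nets of width {w_small}: {ensemble}")
# For s = 4 the two totals coincide exactly, so the extra ensemble
# members come at no parameter cost.
```

In the paper's notation, S=4 in the CIFAR results and S=2 on ImageNet corresponds to this split factor; the co-training step (different views of the same data, mutual learning) then supplies the diversity that makes the ensemble outperform the single wide network.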

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10 | PyramidNet-272, S=4 | Percentage correct | 98.71 | | Unverified |
| CIFAR-10 | WRN-40-10, S=4 | Percentage correct | 98.38 | | Unverified |
| CIFAR-10 | WRN-28-10, S=4 | Percentage correct | 98.32 | | Unverified |
| CIFAR-10 | Shake-Shake 26 2x96d, S=4 | Percentage correct | 98.31 | | Unverified |
| CIFAR-100 | PyramidNet-272, S=4 | Percentage correct | 89.46 | | Unverified |
| CIFAR-100 | DenseNet-BC-190, S=4 | Percentage correct | 87.44 | | Unverified |
| CIFAR-100 | WRN-40-10, S=4 | Percentage correct | 86.9 | | Unverified |
| CIFAR-100 | WRN-28-10, S=4 | Percentage correct | 85.74 | | Unverified |
| ImageNet | SE-ResNeXt-101, 64x4d, S=2 (320px) | Top 1 Accuracy | 83.6 | | Unverified |
| ImageNet | SE-ResNeXt-101, 64x4d, S=2 (416px) | Top 1 Accuracy | 83.34 | | Unverified |
| ImageNet | ResNeXt-101, 64x4d, S=2 (224px) | Top 1 Accuracy | 82.13 | | Unverified |

Reproductions