SOTAVerified

FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search

2019-07-03 · ICCV 2021 · Code Available

Xiangxiang Chu, Bo Zhang, Ruijun Xu


Abstract

One of the most critical problems in weight-sharing neural architecture search is the evaluation of candidate models within a predefined search space. In practice, a one-shot supernet is trained to serve as an evaluator. A faithful ranking leads to more accurate search results, yet current methods are prone to misjudging candidates. In this paper, we show that their biased evaluation stems from inherent unfairness in supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. In particular, strict fairness ensures equal optimization opportunities for all choice blocks throughout training, so that no block's capacity is overestimated or underestimated. We demonstrate that this is crucial for improving the confidence of model rankings. Combining a one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models; e.g., FairNAS-A attains 77.5% top-1 validation accuracy on ImageNet. The models and their evaluation code are publicly available at http://github.com/fairnas/FairNAS .
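The strict-fairness constraint described in the abstract can be sketched as a sampling scheme: at each supernet training step, draw an independent random permutation of the candidate choice blocks for every layer, then form one single-path model per permutation column, so that every choice block in every layer is selected exactly once per step. The following is a minimal sketch of that sampling logic only (the training loop and any `train_one_model` routine are assumed, not part of the source); it assumes every layer has the same number of choice blocks.

```python
import random

def strict_fair_paths(num_layers: int, num_choices: int) -> list[list[int]]:
    """Sample single-path models for one strict-fairness training step.

    For each layer, draw an independent random permutation of its
    choice-block indices. The k-th sampled model takes the k-th entry
    of every layer's permutation, so across the returned num_choices
    models, each choice block of each layer is used exactly once.
    """
    perms = [random.sample(range(num_choices), num_choices)
             for _ in range(num_layers)]
    return [[perms[layer][k] for layer in range(num_layers)]
            for k in range(num_choices)]

paths = strict_fair_paths(num_layers=4, num_choices=3)
# Each of the 3 paths picks one choice per layer; per layer,
# the 3 paths together cover choices {0, 1, 2} exactly once.
```

In the paper's setting, the gradients from these single-path models are accumulated within the step before a single supernet weight update, which is what equalizes each block's optimization opportunities.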


Benchmark Results

| Dataset  | Model     | Metric         | Claimed | Verified | Status     |
|----------|-----------|----------------|---------|----------|------------|
| ImageNet | FairNAS-C | Top-1 Accuracy | 74.69   | —        | Unverified |
| ImageNet | FairNAS-B | Top-1 Accuracy | 75.1    | —        | Unverified |
| ImageNet | FairNAS-A | Top-1 Accuracy | 75.34   | —        | Unverified |

Reproductions