
Webly Supervised Image Classification with Self-Contained Confidence

2020-08-27 · ECCV 2020 · Code Available

Jingkang Yang, Litong Feng, Weirong Chen, Xiaopeng Yan, Huabin Zheng, Ping Luo, Wayne Zhang


Abstract

This paper focuses on webly supervised learning (WSL), where datasets are built by crawling samples from the Internet and directly using the search queries as web labels. Although WSL benefits from fast and low-cost data collection, noise in the web labels hinders the performance of image classification models. To alleviate this problem, recent works use a self-label supervised loss L_s together with the webly supervised loss L_w. L_s relies on pseudo labels predicted by the model itself. Since the correctness of the web label or pseudo label varies from one web sample to another, it is desirable to adjust the balance between L_s and L_w at the sample level. Inspired by the ability of deep neural networks (DNNs) to predict their own confidence, we introduce Self-Contained Confidence (SCC), which adapts model uncertainty to the WSL setting, and use it to balance L_s and L_w sample-wise, yielding a simple yet effective WSL framework. We investigate a series of SCC-friendly regularization approaches, among which the proposed graph-enhanced mixup provides the highest-quality confidence and the largest gains for our framework. The proposed WSL framework achieves state-of-the-art results on two large-scale WSL datasets, WebVision-1000 and Food101-N. Code is available at https://github.com/bigvideoresearch/SCC.
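To make the sample-wise balancing concrete, below is a minimal PyTorch sketch. It assumes the per-sample loss is the convex combination c * L_w + (1 - c) * L_s, and that the self-contained confidence c is the model's own softmax probability for the web label; the function name `scc_weighted_loss` and this particular confidence estimate are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F


def scc_weighted_loss(logits: torch.Tensor, web_labels: torch.Tensor) -> torch.Tensor:
    """Blend webly supervised and self-label supervised losses per sample.

    Hypothetical helper: the blending rule c * L_w + (1 - c) * L_s and the
    choice of c below are assumptions for illustration, not the paper's code.
    """
    # Self-contained confidence c: the model's own softmax probability for
    # the web label, detached so it acts as a fixed per-sample weight.
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        confidence = probs.gather(1, web_labels.unsqueeze(1)).squeeze(1)

    # Webly supervised loss L_w: cross-entropy against the (noisy) web label.
    l_w = F.cross_entropy(logits, web_labels, reduction="none")

    # Self-label supervised loss L_s: cross-entropy against the model's own
    # pseudo label (its current argmax prediction, detached from the graph).
    pseudo_labels = logits.detach().argmax(dim=1)
    l_s = F.cross_entropy(logits, pseudo_labels, reduction="none")

    # Sample-wise balance: trust the web label in proportion to c and the
    # pseudo label in proportion to (1 - c).
    return (confidence * l_w + (1.0 - confidence) * l_s).mean()
```

A training step would then compute loss = scc_weighted_loss(model(images), web_labels) and backpropagate as usual: samples whose web labels the model trusts are driven by L_w, while low-confidence samples fall back on the model's pseudo labels.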

Tasks

Image Classification

Benchmark Results

Dataset        | Model            | Metric             | Claimed | Verified | Status
WebVision-1000 | SCC (ResNet50-D) | Top-1 Accuracy (%) | 75.78   | -        | Unverified

Reproductions

No reproductions yet. Be the first to reproduce this paper.