SOTAVerified

Complementary-Label Learning for Arbitrary Losses and Models

2018-10-10 · Proceedings of the 36th International Conference on Machine Learning, 2019 · Code Available

Takashi Ishida, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama


Abstract

In contrast to the standard classification paradigm, where the true class is given for each training pattern, complementary-label learning uses training patterns equipped only with a complementary label, which specifies one of the classes that the pattern does not belong to. The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models; all existing methods have failed to achieve this goal. Not only is this beneficial for the learning stage, but it also makes model and hyper-parameter selection (through cross-validation) possible without any ordinarily labeled validation data, while using any linear/non-linear model or convex/non-convex loss function. We further improve the risk estimator by a non-negative correction and a gradient-ascent trick, and demonstrate its superiority through experiments.
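The identity behind the abstract is compact: assuming each complementary label ȳ is drawn uniformly from the K-1 classes other than the true one, the rewritten loss \bar{ℓ}(f(x), ȳ) = -(K-1)·ℓ(f(x), ȳ) + Σ_{k=1}^{K} ℓ(f(x), k) has, in expectation over complementarily labeled data, the same value as the ordinary classification risk, for any per-class loss ℓ. Below is a minimal PyTorch sketch of that unbiased estimator; the function name unbiased_complementary_risk is ours, and softmax cross-entropy stands in for ℓ purely for illustration (the estimator itself is loss- and model-agnostic, which is the paper's point).

```python
import torch
import torch.nn.functional as F

def unbiased_complementary_risk(logits, comp_labels, num_classes):
    """Unbiased classification-risk estimator from uniform complementary labels.

    Implements bar_l(f(x), ybar) = -(K-1) * l(f(x), ybar) + sum_k l(f(x), k),
    whose expectation equals the ordinary risk for any per-class loss l.
    Softmax cross-entropy is used as l here only as an illustrative choice.
    """
    per_class_loss = -F.log_softmax(logits, dim=1)                 # l(f(x), k), shape (B, K)
    loss_on_comp = per_class_loss.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    sum_over_classes = per_class_loss.sum(dim=1)
    return (-(num_classes - 1) * loss_on_comp + sum_over_classes).mean()

# Usage sketch: train any model by minimizing this risk on complementary labels.
# logits = model(x)                                   # (batch, K)
# risk = unbiased_complementary_risk(logits, ybar, K)
# risk.backward()
```

The non-negative correction mentioned in the abstract additionally clips the estimator's partial risks at zero and switches to gradient ascent when a mini-batch drives them negative; the sketch above omits that refinement.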

Tasks

Benchmark Results

Dataset          Model                          Metric    Claimed  Verified  Status
Kuzushiji-MNIST  Complementary-Label Learning   Accuracy  67.1     -         Unverified

Reproductions

No reproductions yet.