Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

2017-04-13

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Shin Ishii

Abstract

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
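As a rough illustration of the procedure described above, here is a minimal sketch of the virtual adversarial loss in PyTorch (an assumption; the page and abstract specify no framework). The hyperparameters `xi`, `eps`, and `n_power` mirror the paper's ξ, ε, and number of power iterations K; the function names themselves are hypothetical. With `n_power=1`, computing the loss gradient costs the two pairs of forward and backward passes mentioned in the abstract.

```python
import torch
import torch.nn.functional as F


def _l2_normalize(d):
    # Scale each example's perturbation to unit L2 norm.
    norms = d.flatten(1).norm(dim=1).clamp_min(1e-8)
    return d / norms.view(-1, *([1] * (d.dim() - 1)))


def vat_loss(model, x, xi=1e-6, eps=8.0, n_power=1):
    # Current predictive distribution p(y|x); no labels are used anywhere.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)

    # Power iteration: approximate the direction that most changes p(y|x).
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p,
                      reduction="batchmean")
        d = _l2_normalize(torch.autograd.grad(kl, d)[0].detach())

    # Local distributional smoothness at the virtual adversarial point.
    r_adv = eps * d
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p,
                    reduction="batchmean")


def entropy_loss(logits):
    # Conditional entropy term added by the "VAT + EntMin" variant.
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

In a semi-supervised setup, `vat_loss` would be added to the usual cross-entropy on labeled batches and evaluated on unlabeled batches as well; the VAT+EntMin rows in the benchmark table below correspond to additionally adding `entropy_loss`.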

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| CIFAR-10, 250 Labels | VAT | Percentage correct | 63.97 | | Unverified |
| CIFAR-10, 250 Labels | VAT | Percentage error | 36.03 | | Unverified |
| CIFAR-10, 4000 Labels | VAT+EntMin | Percentage error | 10.55 | | Unverified |
| CIFAR-10, 4000 Labels | VAT | Percentage error | 11.36 | | Unverified |
| SVHN, 1000 Labels | VAT | Accuracy | 94.58 | | Unverified |
| SVHN, 250 Labels | VAT | Accuracy | 91.59 | | Unverified |

Reproductions

No reproductions have been submitted yet.