SOTAVerified

Preventing Manifold Intrusion with Locality: Local Mixup

2022-01-12 · Code Available

Raphael Baena, Lucas Drumetz, Vincent Gripon


Abstract

Mixup is a data-dependent regularization technique that consists of linearly interpolating input samples and their associated outputs. It has been shown to improve accuracy when used to train on standard machine learning datasets. However, prior work has pointed out that Mixup can produce out-of-distribution virtual samples and even contradictions in the augmented training set, potentially resulting in adversarial effects. In this paper, we introduce Local Mixup, in which distant input samples are weighted down when computing the loss. In constrained settings we demonstrate that Local Mixup can create a trade-off between bias and variance, with the extreme cases reducing to vanilla training and classical Mixup. Using standardized computer vision benchmarks, we also show that Local Mixup can improve test accuracy.
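The idea in the abstract can be sketched in a few lines: mix random sample pairs exactly as in standard Mixup, but attach a per-pair weight that decreases with the distance between the two inputs, and use it to scale each pair's loss contribution. The exponential kernel, the bandwidth `tau`, and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_mixup_batch(x, y, alpha=1.0, tau=1.0, rng=None):
    """Sketch of Local Mixup on one batch (assumed form).

    Returns mixed inputs, mixed targets, and a per-pair weight in (0, 1]
    meant to multiply each sample's loss term.
    """
    rng = rng or np.random.default_rng()
    n = x.shape[0]
    perm = rng.permutation(n)          # random pairing, as in Mixup
    lam = rng.beta(alpha, alpha)       # mixing coefficient

    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]

    # Locality: down-weight pairs of distant samples (kernel choice is
    # an assumption; the paper only states that distant pairs are
    # weighted down in the loss).
    d = np.linalg.norm(x.reshape(n, -1) - x[perm].reshape(n, -1), axis=1)
    w = np.exp(-d / tau)
    return x_mix, y_mix, w
```

With `tau` large, every weight approaches 1 and the scheme reduces to classical Mixup; with `tau` small, only near-identical pairs contribute, approaching vanilla training. This matches the bias-variance trade-off described in the abstract.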

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10 | Local Mixup ResNet-18 | Percentage correct | 95.97 | | Unverified |
| Fashion-MNIST | Local Mixup DenseNet | Percentage error | 5.97 | | Unverified |
| SVHN | Local Mixup LeNet | Percentage error | 8.2 | | Unverified |

Reproductions