SOTAVerified

Data Augmentation

Data augmentation is a set of techniques that expand a dataset by applying modifications to its existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps prevent overfitting.

Data augmentation has proven useful in domains such as natural language processing (NLP) and computer vision. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include word swapping, random deletion, and random insertion, among others.
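As a minimal sketch of two of the NLP techniques above (random deletion and word swapping), using only the standard library; the function names and parameters are illustrative, not taken from any particular augmentation library:

```python
import random

def random_deletion(words, p=0.1, seed=None):
    """Drop each word independently with probability p; always keep at least one word."""
    rng = random.Random(seed)
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]

def random_swap(words, n=1, seed=None):
    """Swap the words at two randomly chosen positions, n times."""
    rng = random.Random(seed)
    words = list(words)
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

sentence = "data augmentation increases dataset diversity".split()
print(random_deletion(sentence, p=0.3, seed=0))
print(random_swap(sentence, n=2, seed=0))
```

Each call produces a perturbed copy of the sentence while leaving the original intact, so one source example can yield many augmented variants; a fixed `seed` makes the augmentation reproducible for debugging.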

(Image credit: Albumentations)

Papers

Showing 25 of 8378 papers

Title | Status | Hype
Identity-Disentangled Adversarial Augmentation for Self-supervised Learning | | 0
Where is the bottleneck in long-tailed classification? | | 0
Improving Discriminative Visual Representation Learning via Automatic Mixup | | 0
Stochastic Training is Not Necessary for Generalization | Code | 1
Segmentation of Roads in Satellite Images using specially modified U-Net CNNs | | 0
Adaptive Multi-layer Contrastive Graph Neural Networks | | 0
Learning Background Invariance Improves Generalization and Robustness in Self-Supervised Learning on ImageNet and Beyond | | 0
Prediction of the Facial Growth Direction is Challenging | | 0
Layer-Parallel Training of Residual Networks with Auxiliary Variables | | 0
Optimized Automated Cardiac MR Scar Quantification with GAN-Based Data Augmentation | Code | 0
A novel network training approach for open set image recognition | | 0
DAMix: A Density-Aware Mixup Augmentation for Single Image Dehazing under Domain Shift | | 0
Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation | Code | 1
A real-time and high-precision method for small traffic-signs recognition | Code | 1
Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble | | 0
SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction | Code | 1
GeomGCL: Geometric Graph Contrastive Learning for Molecular Property Prediction | Code | 1
A Diversity-Enhanced and Constraints-Relaxed Augmentation for Low-Resource Classification | | 0
Faithful Target Attribute Prediction in Neural Machine Translation | Code | 0
Dense Contrastive Visual-Linguistic Pretraining | | 0
Multi-view Contrastive Self-Supervised Learning of Accounting Data Representations for Downstream Audit Tasks | | 0
Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap | Code | 0
Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing | | 0
ChannelAugment: Improving generalization of multi-channel ASR by training with input channel randomization | | 0
Benchmarking Augmentation Methods for Learning Robust Navigation Agents: the Winning Entry of the 2021 iGibson Challenge | | 0
Page 216 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified
4 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified
5 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified
3 | hNNE | Classification Accuracy | 77.4 | | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | | Unverified