SOTAVerified

Data Augmentation

Data augmentation is a family of techniques that expands a training dataset by generating modified versions of its existing examples. It increases both the size and the diversity of the data, and during training it acts as a regularizer that helps prevent overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include word swapping, word deletion, and random insertion, among others.
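The transformations above can be sketched in a few lines of plain Python: a horizontal flip for an image represented as a nested list, and random word deletion and swapping for text. This is a minimal illustration, not taken from any particular library; the function names and the deletion probability `p` are assumptions chosen for clarity.

```python
import random

def horizontal_flip(image):
    """Mirror an image (a nested list of pixel values) left-to-right."""
    return [row[::-1] for row in image]

def random_deletion(words, p=0.1, rng=random):
    """Drop each word independently with probability p; keep at least one word."""
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]

def random_swap(words, n=1, rng=random):
    """Swap two randomly chosen positions, repeated n times."""
    words = list(words)
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

# Example: augment a sentence by deleting then swapping words.
sentence = "data augmentation increases dataset diversity".split()
augmented = random_swap(random_deletion(sentence, p=0.2), n=1)
```

Because each call samples randomly, repeated application of these functions to the same example yields many distinct training variants, which is where the diversity and regularization effects come from.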

( Image credit: Albumentations )

Papers

Showing 476–500 of 8378 papers

Title | Status | Hype
Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment | Code | 1
Disentangled Representations for Domain-generalized Cardiac Segmentation | Code | 1
Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders | Code | 1
3D Common Corruptions and Data Augmentation | Code | 1
Alternate Diverse Teaching for Semi-supervised Medical Image Segmentation | Code | 1
DiffAug: Enhance Unsupervised Contrastive Learning with Domain-Knowledge-Free Diffusion-based Data Augmentation | Code | 1
AltFreezing for More General Video Face Forgery Detection | Code | 1
Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation | Code | 1
DLME: Deep Local-flatness Manifold Embedding | Code | 1
Do Generated Data Always Help Contrastive Learning? | Code | 1
Domain Adaptation based Object Detection for Autonomous Driving in Foggy and Rainy Weather | Code | 1
Domain-Specific Text Generation for Machine Translation | Code | 1
Don't Touch What Matters: Task-Aware Lipschitz Data Augmentation for Visual Reinforcement Learning | Code | 1
Doubly Contrastive Deep Clustering | Code | 1
DreamDA: Generative Data Augmentation with Diffusion Models | Code | 1
Breaking the Representation Bottleneck of Chinese Characters: Neural Machine Translation with Stroke Sequence Modeling | Code | 1
NCAGC: A Neighborhood Contrast Framework for Attributed Graph Clustering | Code | 1
CAM Back Again: Large Kernel CNNs from a Weakly Supervised Object Localization Perspective | Code | 1
Dynamic Data Augmentation with Gating Networks for Time Series Recognition | Code | 1
Closing the Gap between TD Learning and Supervised Learning -- A Generalisation Point of View | Code | 1
Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages | Code | 1
Controllable Data Augmentation Through Deep Relighting | Code | 1
ECNU-SenseMaker at SemEval-2020 Task 4: Leveraging Heterogeneous Knowledge Resources for Commonsense Validation and Explanation | Code | 1
EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals | Code | 1
Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning | Code | 1
Page 20 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified
4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified
5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified
3 | hNNE | Classification Accuracy | 77.4 | | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | | Unverified