SOTAVerified

Data Augmentation

Data augmentation is a family of techniques that expands a dataset by applying modifications to its existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps to avoid overfitting.
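As a minimal illustration of how augmentation expands a training set, the sketch below generates several randomly perturbed copies of each example. It uses additive Gaussian noise purely as a generic, modality-agnostic perturbation; the function names and parameters are illustrative, not from any particular library:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(example, n_copies=3, noise_scale=0.1):
    """Return the original example plus n_copies noisy variants."""
    variants = [example]
    for _ in range(n_copies):
        # Additive Gaussian noise is one simple, generic perturbation;
        # real pipelines would use modality-specific transforms instead.
        variants.append(example + rng.normal(0.0, noise_scale, size=example.shape))
    return variants

dataset = [np.ones(4), np.zeros(4)]
augmented = [v for x in dataset for v in augment(x)]
print(len(dataset), "->", len(augmented))  # 2 -> 8
```

Each original example yields itself plus three variants, so the effective dataset size quadruples while every variant stays close to its source.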

Data augmentation techniques have been found useful in domains like NLP and computer vision. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, common techniques include word swapping, deletion, and random insertion, among others.
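The NLP operations mentioned above can be sketched in a few lines of pure Python. This is a simplified, illustrative version of word-level augmentation (the function names are my own, not from a specific library):

```python
import random

random.seed(0)

def random_swap(words):
    # Swap the words at two randomly chosen distinct positions.
    i, j = random.sample(range(len(words)), 2)
    out = words[:]
    out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.2):
    # Drop each word with probability p, keeping at least one word.
    kept = [w for w in words if random.random() > p]
    return kept or [random.choice(words)]

def random_insertion(words):
    # Re-insert a randomly chosen word at a random position.
    out = words[:]
    out.insert(random.randrange(len(out) + 1), random.choice(words))
    return out

sentence = "data augmentation increases dataset diversity".split()
print(random_swap(sentence))
print(random_deletion(sentence))
print(random_insertion(sentence))
```

Each operation yields a slightly different sentence with (mostly) preserved meaning, which is what makes these transforms useful for text classification datasets.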


Papers

Showing 4651–4675 of 8378 papers

Title | Status | Hype
Redefining Self-Normalization Property | | 0
Reduced Jeffries-Matusita distance: A Novel Loss Function to Improve Generalization Performance of Deep Classification Models | | 0
Reducing and Exploiting Data Augmentation Noise through Meta Reweighting Contrastive Learning for Text Classification | | 0
Reducing Distraction in Long-Context Language Models by Focused Learning | | 0
Reducing false positives in strong lens detection through effective augmentation and ensemble learning | | 0
Reducing Gender Bias in Abusive Language Detection | | 0
Reducing Overfitting in Deep Networks by Decorrelating Representations | | 0
REFINE on Scarce Data: Retrieval Enhancement through Fine-Tuning via Model Fusion of Embedding Models | | 0
Refining Corpora from a Model Calibration Perspective for Chinese Spelling Correction | | 0
ReFormer: Generating Radio Fakes for Data Augmentation | | 0
ReF -- Rotation Equivariant Features for Local Feature Matching | | 0
Region-based Convolution Neural Network Approach for Accurate Segmentation of Pelvic Radiograph | | 0
Region Mixup | | 0
Deep Learning Methods and Applications for Region of Interest Detection in Dermoscopic Images | | 0
Regularising Deep Networks with Deep Generative Models | | 0
Regularising for invariance to data augmentation improves supervised learning | | 0
Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling | | 0
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning | | 0
Regularizing Contrastive Predictive Coding for Speech Applications | | 0
Regularizing Neural Networks with Meta-Learning Generative Models | | 0
Reinforcement Learning from Diffusion Feedback: Q* for Image Search | | 0
Reinforcement Learning with Imbalanced Dataset for Data-to-Text Medical Report Generation | | 0
Relate auditory speech to EEG by shallow-deep attention-based network | | 0
Relational Data Selection for Data Augmentation of Speaker-dependent Multi-band MelGAN Vocoder | | 0
Relation-Aware Graph Foundation Model | | 0
Page 187 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified
4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified
5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified
3 | hNNE | Classification Accuracy | 77.4 | | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | | Unverified