SOTAVerified

Data Augmentation

Data augmentation is a family of techniques that expands a dataset by applying modifications to its existing examples. Beyond growing the dataset, augmentation also increases its diversity; when training machine learning models, it acts as a regularizer and helps to avoid overfitting.
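As a minimal sketch of how augmentation multiplies a dataset, the helpers below generate one flipped and one rotated copy of each image, tripling the number of training examples. Images are plain nested lists and all function names are illustrative, not from any specific library:

```python
def hflip(img):
    """Horizontally flip an image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment_dataset(images):
    """Return each original image plus a flipped and a rotated copy."""
    out = []
    for img in images:
        out.extend([img, hflip(img), rot90(img)])
    return out

# One 2x2 grayscale "image" grows into three training examples.
dataset = [[[1, 2], [3, 4]]]
augmented = augment_dataset(dataset)
```

Because the label of an image is usually invariant under these transforms, each augmented copy can reuse the original label.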

Data augmentation techniques have been found useful in domains like NLP and computer vision. In computer vision, transformations like cropping, flipping, and rotation are common. In NLP, data augmentation techniques include word swapping, deletion, and random insertion, among others.
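The word-level NLP operations mentioned above can be sketched with the standard library alone; the function names and default parameters here are assumptions for illustration, not the API of any particular augmentation library:

```python
import random

def random_swap(words, n=1, rng=random):
    """Swap two randomly chosen word positions, n times."""
    words = list(words)
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1, rng=random):
    """Drop each word independently with probability p (keep at least one)."""
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]

def random_insertion(words, n=1, rng=random):
    """Insert n copies of randomly chosen existing words at random positions."""
    words = list(words)
    for _ in range(n):
        w = rng.choice(words)
        words.insert(rng.randrange(len(words) + 1), w)
    return words

sentence = "data augmentation expands the training set".split()
variant = random_swap(sentence, n=2)  # same words, shuffled order
```

Each call produces a slightly perturbed sentence that keeps (most of) the original meaning, so the original label can usually be reused.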


(Image credit: Albumentations)

Papers

Showing papers 5351–5375 of 8378

| Title | Status | Hype |
|---|---|---|
| AAVAE: Augmentation-Augmented Variational Autoencoders | — | 0 |
| Theoretical Analysis of Consistency Regularization with Limited Augmented Data | — | 0 |
| Best Practices in Pool-based Active Learning for Image Classification | — | 0 |
| Autoregressive Latent Video Prediction with High-Fidelity Image Generator | — | 0 |
| SketchODE: Learning neural sketch representation in continuous time | — | 0 |
| What Makes Better Augmentation Strategies? Augment Difficult but Not too Different | — | 0 |
| Neuro-Symbolic Ontology-Mediated Query Answering | — | 0 |
| Mistake-driven Image Classification with FastGAN and SpinalNet | — | 0 |
| Noisy Adversarial Training | — | 0 |
| Self-Supervised Learning of Motion-Informed Latents | — | 0 |
| CausalDyna: Improving Generalization of Dyna-style Reinforcement Learning via Counterfactual-Based Data Augmentation | — | 0 |
| Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning | — | 0 |
| Contrastive Learning is Just Meta-Learning | — | 0 |
| Learnability and Expressiveness in Self-Supervised Learning | — | 0 |
| Vicinal Counting Networks | — | 0 |
| Latent Feature Disentanglement For Visual Domain Generalization | — | 0 |
| Adaptive Unbiased Teacher for Cross-Domain Object Detection | — | 0 |
| Understanding the Success of Knowledge Distillation -- A Data Augmentation Perspective | — | 0 |
| Multi-Task Distribution Learning | — | 0 |
| Approximate Bijective Correspondence for isolating factors of variation | Code | 0 |
| NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training | Code | 1 |
| DM-CT: Consistency Training with Data and Model Perturbation | — | 0 |
| SynCLR: A Synthesis Framework for Contrastive Learning of out-of-domain Speech Representations | — | 0 |
| AutoCoG: A Unified Data-Modal Co-Search Framework for Graph Neural Networks | — | 0 |
| CrossMatch: Cross-Classifier Consistency Regularization for Open-Set Single Domain Generalization | — | 0 |
Page 215 of 336

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | — | Unverified |
| 2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | — | Unverified |
| 3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | — | Unverified |
| 4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | — | Unverified |
| 5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | — | Unverified |
| 6 | ResNet-200 (AA) | Accuracy (%) | 80 | — | Unverified |
| 7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | — | Unverified |
| 8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | — | Unverified |
| 9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | — | Unverified |
| 10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | — | Unverified |
| 2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | — | Unverified |
| 3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | — | Unverified |
| 4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | — | Unverified |
| 5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DiffAug | Classification Accuracy | 92.7 | — | Unverified |
| 2 | PaCMAP | Classification Accuracy | 85.3 | — | Unverified |
| 3 | hNNE | Classification Accuracy | 77.4 | — | Unverified |
| 4 | TopoAE | Classification Accuracy | 74.6 | — | Unverified |