SOTAVerified

Data Augmentation

Data augmentation is a set of techniques for expanding a dataset by creating modified copies of its existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps prevent overfitting.

Data augmentation techniques have proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include random word swapping, deletion, and insertion, among others.
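The NLP operations above can be sketched in a few lines of plain Python. This is a minimal illustration, not a reference implementation; the function name and probability parameter are assumptions for the example.

```python
import random

def augment_sentence(tokens, p=0.1, rng=None):
    """Toy text augmentation: random deletion, random swap, and
    random insertion (duplicating an existing token). Illustrative only."""
    rng = rng or random.Random(0)
    # Random deletion: drop each token with probability p
    # (fall back to the original if everything was deleted).
    out = [t for t in tokens if rng.random() > p] or list(tokens)
    # Random swap: exchange two token positions.
    if len(out) > 1:
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    # Random insertion: duplicate a random token at a random position.
    out.insert(rng.randrange(len(out) + 1), rng.choice(out))
    return out
```

Each call produces a slightly perturbed copy of the input sentence, so repeated calls on the same example yield multiple distinct training instances.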


( Image credit: Albumentations )

Papers

Showing 25 of 8378 papers

All papers below are listed with a hype score of 0:

- HIPODE: Enhancing Offline Reinforcement Learning with High-Quality Synthetic Data from a Policy-Decoupled Approach
- Advancing Seq2seq with Joint Paraphrase Learning
- Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation
- D4: Text-guided diffusion model-based domain adaptive data augmentation for vineyard shoot detection
- Augmented Data as an Auxiliary Plug-in Towards Categorization of Crowdsourced Heritage Data
- DAAS: Differentiable Architecture and Augmentation Policy Search
- HMM-based data augmentation for E2E systems for building conversational speech synthesis systems
- Dictionary-based Data Augmentation for Cross-Domain Neural Machine Translation
- DACB-Net: Dual Attention Guided Compact Bilinear Convolution Neural Network for Skin Disease Classification
- Ani-GIFs: A benchmark dataset for domain generalization of action recognition from GIFs
- BloomVQA: Assessing Hierarchical Multi-modal Comprehension
- DiCOVA-Net: Diagnosing COVID-19 using Acoustics based on Deep Residual Network for the DiCOVA Challenge 2021
- How Does Data Diversity Shape the Weight Landscape of Neural Networks?
- How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
- How Does Mixup Help With Robustness and Generalization?
- Advancing Sentiment Analysis in Tamil-English Code-Mixed Texts: Challenges and Transformer-Based Solutions
- Block-SCL: Blocking Matters for Supervised Contrastive Learning in Product Matching
- DialoGPS: Dialogue Path Sampling in Continuous Semantic Space for Data Augmentation in Multi-Turn Conversations
- How Effective is Task-Agnostic Data Augmentation for Pretrained Transformers?
- DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks
- Dialect Adaptation and Data Augmentation for Low-Resource ASR: TalTech Systems for the MADASR 2023 Challenge
- Blocks2World: Controlling Realistic Scenes with Editable Primitives
- How many labeled license plates are needed?
- How Robust are Randomized Smoothing based Defenses to Data Poisoning?
- Improving Conditioning in Context-Aware Sequence to Sequence Models

Benchmark Results

#  | Model                  | Metric       | Claimed | Verified | Status
1  | DeiT-B (+MixPro)       | Accuracy (%) | 82.9    | —        | Unverified
2  | ResNet-200 (DeepAA)    | Accuracy (%) | 81.32   | —        | Unverified
3  | DeiT-S (+MixPro)       | Accuracy (%) | 81.3    | —        | Unverified
4  | ResNet-200 (Fast AA)   | Accuracy (%) | 80.6    | —        | Unverified
5  | ResNet-200 (UA)        | Accuracy (%) | 80.4    | —        | Unverified
6  | ResNet-200 (AA)        | Accuracy (%) | 80      | —        | Unverified
7  | ResNet-50 (DeepAA)     | Accuracy (%) | 78.3    | —        | Unverified
8  | ResNet-50 (TA wide)    | Accuracy (%) | 78.07   | —        | Unverified
9  | ResNet-50 (LoRot-E)    | Accuracy (%) | 77.72   | —        | Unverified
10 | ResNet-50 (LoRot-I)    | Accuracy (%) | 77.71   | —        | Unverified
# | Model                               | Metric           | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA)         | Percentage error | 3.7     | —        | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA)  | Percentage error | 2.7     | —        | Unverified
3 | WideResNet-28-10 (Faster AA)        | Percentage error | 2.6     | —        | Unverified
4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2       | —        | Unverified
5 | Shake-Shake (26 2×96d) (Faster AA)  | Percentage error | 2       | —        | Unverified
# | Model   | Metric                  | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7    | —        | Unverified
2 | PaCMAP  | Classification Accuracy | 85.3    | —        | Unverified
3 | hNNE    | Classification Accuracy | 77.4    | —        | Unverified
4 | TopoAE  | Classification Accuracy | 74.6    | —        | Unverified