SOTAVerified

Data Augmentation

Data augmentation comprises techniques that expand a dataset by generating modified versions of its existing examples. It increases not only the size of the dataset but also its diversity, and during training it acts as a regularizer that helps prevent overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include token swapping, deletion, and random insertion, among others.
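The NLP operations mentioned above (swapping, deletion, random insertion) can be sketched in a few lines of plain Python. This is a minimal illustration, not a reference implementation; full EDA-style insertion would draw synonyms from a lexical resource, whereas this sketch reuses tokens from the sentence itself to stay dependency-free.

```python
import random

def random_swap(tokens, n=1):
    """Swap two randomly chosen token positions, n times."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token independently with probability p (keep at least one)."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def random_insertion(tokens, n=1):
    """Insert n randomly chosen existing tokens at random positions.
    (A synonym lookup would go here in a full implementation.)"""
    tokens = tokens[:]
    for _ in range(n):
        tokens.insert(random.randrange(len(tokens) + 1), random.choice(tokens))
    return tokens

sentence = "data augmentation expands the training set".split()
print(random_swap(sentence))
print(random_deletion(sentence, p=0.3))
print(random_insertion(sentence, n=2))
```

Applying several such operations to each training sentence yields multiple noisy variants per original example, which is where the size and diversity gains come from.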


(Image credit: Albumentations)
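The cropping, flipping, and rotation transforms mentioned above can likewise be sketched on a tiny image represented as a nested list (in practice a library such as Albumentations or torchvision would operate on arrays or tensors; this is only a dependency-free illustration):

```python
def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise: transpose, then reverse each row."""
    return [list(row)[::-1] for row in zip(*img)]

def crop(img, top, left, h, w):
    """Take an h x w window starting at (top, left)."""
    return [row[left:left + w] for row in img[top:top + h]]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(hflip(img))            # mirror left-right
print(rotate90(img))         # quarter turn clockwise
print(crop(img, 0, 1, 2, 2)) # 2x2 patch from the top-right
```

Because these transforms preserve the image's label, each can be applied randomly at training time to multiply the effective dataset size.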

Papers

Showing 5251–5275 of 8378 papers

| Title | Status | Hype |
| --- | --- | --- |
| Network Augmentation for Tiny Deep Learning | – | 0 |
| Can Synthetic Translations Improve Bitext Quality? | – | 0 |
| Curriculum Data Augmentation for Low-Resource Slides Summarization | – | 0 |
| Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference | – | 0 |
| Towards Robust Waveform-Based Acoustic Models | – | 0 |
| Virtual Augmentation Supported Contrastive Learning of Sentence Representations | Code | 1 |
| An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models | Code | 1 |
| Unifying Cross-lingual Summarization and Machine Translation with Compression Rate | Code | 0 |
| Reappraising Domain Generalization in Neural Networks | – | 0 |
| MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining | Code | 1 |
| Towards Identity Preserving Normal to Dysarthric Voice Conversion | – | 0 |
| From Start to Finish: Latency Reduction Strategies for Incremental Speech Synthesis in Simultaneous Speech-to-Speech Translation | – | 0 |
| Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation | – | 0 |
| Interpreting the Robustness of Neural NLP Models to Textual Perturbations | – | 0 |
| Augmenting Imitation Experience via Equivariant Representations | – | 0 |
| Semantically Distributed Robust Optimization for Vision-and-Language Inference | Code | 0 |
| Retrieval-guided Counterfactual Generation for QA | – | 0 |
| RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking | – | 0 |
| Deep learning models for predicting RNA degradation via dual crowdsourcing | Code | 0 |
| DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models | – | 0 |
| Domain generalization in deep learning for contrast-enhanced imaging | – | 0 |
| IB-GAN: A Unified Approach for Multivariate Time Series Classification under Class Imbalance | – | 0 |
| Nuisance-Label Supervision: Robustness Improvement by Free Labels | – | 0 |
| Context-gloss Augmentation for Improving Word Sense Disambiguation | – | 0 |
| Style-based quantum generative adversarial networks for Monte Carlo events | Code | 1 |
Page 211 of 336

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | – | Unverified |
| 2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | – | Unverified |
| 3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | – | Unverified |
| 4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | – | Unverified |
| 5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | – | Unverified |
| 6 | ResNet-200 (AA) | Accuracy (%) | 80 | – | Unverified |
| 7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | – | Unverified |
| 8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | – | Unverified |
| 9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | – | Unverified |
| 10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | – | Unverified |
| 2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | – | Unverified |
| 3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | – | Unverified |
| 4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | – | Unverified |
| 5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DiffAug | Classification Accuracy | 92.7 | – | Unverified |
| 2 | PaCMAP | Classification Accuracy | 85.3 | – | Unverified |
| 3 | hNNE | Classification Accuracy | 77.4 | – | Unverified |
| 4 | TopoAE | Classification Accuracy | 74.6 | – | Unverified |