SOTAVerified

Data Augmentation

Data augmentation is a family of techniques that expands a dataset by applying modifications to its existing examples. Beyond simply growing the dataset, augmentation also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps avoid overfitting.

Data augmentation techniques have proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include word swapping, deletion, and random insertion, among others.
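The transformations above can be sketched in a few lines. This is a minimal illustration, not any particular library's API; `augment_image` and `augment_text` are hypothetical helper names, and the flip/crop and deletion/swap choices loosely follow common practice (e.g. EDA-style text augmentation):

```python
import random

import numpy as np

def augment_image(img, rng):
    """Random horizontal flip plus a random crop after zero padding (a common CV recipe)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]  # horizontal flip
    pad = 2
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode="constant")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    h, w = img.shape
    return padded[top:top + h, left:left + w]  # same shape as the input

def augment_text(words, p_delete=0.1, n_swaps=1, rng=None):
    """Random word deletion followed by random swaps (EDA-style text augmentation)."""
    rng = rng or random.Random()
    # Delete each word with probability p_delete; keep at least one word.
    kept = [w for w in words if rng.random() > p_delete] or words[:1]
    for _ in range(n_swaps):
        i, j = rng.randrange(len(kept)), rng.randrange(len(kept))
        kept[i], kept[j] = kept[j], kept[i]
    return kept

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
aug = augment_image(img, rng)  # same shape, randomly shifted/flipped content
new_sentence = augment_text("the quick brown fox jumps".split(),
                            rng=random.Random(0))
```

Applying such transforms independently at every epoch means the model rarely sees the exact same example twice, which is what gives augmentation its regularizing effect.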


(Image credit: Albumentations)

Papers

Showing 7951–7975 of 8378 papers

| Title | Status | Hype |
| --- | --- | --- |
| How many perturbations break this model? Evaluating robustness beyond adversarial accuracy | Code | 0 |
| Towards Multimodal Video Paragraph Captioning Models Robust to Missing Modality | Code | 0 |
| Variational Bayesian Bow tie Neural Networks with Shrinkage | Code | 0 |
| Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax | Code | 0 |
| Not Just Pretty Pictures: Toward Interventional Data Augmentation Using Text-to-Image Generators | Code | 0 |
| Variational Bayes In Private Settings (VIPS) | Code | 0 |
| 15,500 Seconds: Lean UAV Classification Leveraging PEFT and Pre-Trained Networks | Code | 0 |
| Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation | Code | 0 |
| Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation | Code | 0 |
| Lisbon Computational Linguists at SemEval-2024 Task 2: Using A Mistral 7B Model and Data Augmentation | Code | 0 |
| Synthetic Magnetic Resonance Images with Generative Adversarial Networks | Code | 0 |
| Exemplar Masking for Multimodal Incremental Learning | Code | 0 |
| NTIRE 2023 Image Shadow Removal Challenge Technical Report: Team IIM_TTI | Code | 0 |
| Synthetic Occlusion Augmentation with Volumetric Heatmaps for the 2018 ECCV PoseTrack Challenge on 3D Human Pose Estimation | Code | 0 |
| Selective Attention Merging for low resource tasks: A case study of Child ASR | Code | 0 |
| Synthetic Oversampling: Theory and A Practical Approach Using LLMs to Address Data Imbalance | Code | 0 |
| Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation | Code | 0 |
| Numeric Encoding Options with Automunge | Code | 0 |
| Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations | Code | 0 |
| Selective Style Transfer for Text | Code | 0 |
| Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study | Code | 0 |
| Selective Text Augmentation with Word Roles for Low-Resource Text Classification | Code | 0 |
| Exact Fusion via Feature Distribution Matching for Few-shot Image Generation | Code | 0 |
| Exact Bayesian Gaussian Cox Processes Using Random Integral | Code | 0 |
| Select-Mosaic: Data Augmentation Method for Dense Small Object Scenes | Code | 0 |
Page 319 of 336

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified |
| 2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified |
| 3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified |
| 4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified |
| 5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified |
| 6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified |
| 7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified |
| 8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified |
| 9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified |
| 10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified |
| 2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified |
| 3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified |
| 4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified |
| 5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DiffAug | Classification Accuracy | 92.7 | | Unverified |
| 2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified |
| 3 | hNNE | Classification Accuracy | 77.4 | | Unverified |
| 4 | TopoAE | Classification Accuracy | 74.6 | | Unverified |