SOTAVerified

Data Augmentation

Data augmentation comprises techniques that expand a dataset by creating modified copies of its existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps avoid overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include random swapping, deletion, and insertion of words, among others.
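As a minimal sketch of the transformations named above (the function names and parameters here are illustrative, not from any particular library), the image operations can be expressed as list manipulations and the text operations as token-level edits:

```python
import random

def horizontal_flip(image):
    """Mirror an image (a list of pixel rows) left-to-right."""
    return [row[::-1] for row in image]

def random_crop(image, height, width):
    """Take a random height x width crop from a 2-D image."""
    top = random.randrange(len(image) - height + 1)
    left = random.randrange(len(image[0]) - width + 1)
    return [row[left:left + width] for row in image[top:top + height]]

def random_swap(tokens, n=1):
    """Swap two randomly chosen token positions n times."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

def random_insertion(tokens, vocab, n=1):
    """Insert n tokens drawn from a vocabulary at random positions."""
    tokens = tokens[:]
    for _ in range(n):
        tokens.insert(random.randrange(len(tokens) + 1), random.choice(vocab))
    return tokens
```

In practice, libraries such as Albumentations (images) apply pipelines of such transforms on the fly during training, so each epoch sees a different perturbed view of the same underlying examples.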

(Image credit: Albumentations)

Papers

Showing 6826–6850 of 8378 papers

Titles (each paper listed has a hype score of 0):

Disentangling style and content for low resource video domain adaptation: a case study on keystroke inference attacks
Disentangling the Effects of Data Augmentation and Format Transform in Self-Supervised Learning of Image Representations
Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect
Disfluency Detection with Unlabeled Data and Small BERT Models
Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data
Distillation of Diffusion Features for Semantic Correspondence
Distillation Using Oracle Queries for Transformer-Based Human-Object Interaction Detection
Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing
Distilling Calibrated Student from an Uncalibrated Teacher
Augmenting Offline Reinforcement Learning with State-only Interactions
Distilling Large Language Models into Tiny and Effective Students using pQRNN
Distilling Transformers for Neural Cross-Domain Search
Distortion-Adaptive Grape Bunch Counting for Omnidirectional Images
DistractFlow: Improving Optical Flow Estimation via Realistic Distractions and Pseudo-Labeling
Distributionally Robust Cross Subject EEG Decoding
Distribution augmentation for low-resource expressive text-to-speech
Diverse Ensembles Improve Calibration
Diverse Parallel Data Synthesis for Cross-Database Adaptation of Text-to-SQL Parsers
Diversified Augmentation with Domain Adaptation for Debiased Video Temporal Grounding
Diversity-Oriented Data Augmentation with Large Language Models
Dizygotic Conditional Variational AutoEncoder for Multi-Modal and Partial Modality Absent Few-Shot Learning
DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness
DKE-Research at SemEval-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness
DMCNN: A Deep Multiscale Convolutional Neural Network Model for Medical Image Segmentation
DM-CT: Consistency Training with Data and Model Perturbation
Page 274 of 336

Benchmark Results

#   Model                  Metric        Claimed  Verified  Status
1   DeiT-B (+MixPro)       Accuracy (%)  82.9     -         Unverified
2   ResNet-200 (DeepAA)    Accuracy (%)  81.32    -         Unverified
3   DeiT-S (+MixPro)       Accuracy (%)  81.3     -         Unverified
4   ResNet-200 (Fast AA)   Accuracy (%)  80.6     -         Unverified
5   ResNet-200 (UA)        Accuracy (%)  80.4     -         Unverified
6   ResNet-200 (AA)        Accuracy (%)  80       -         Unverified
7   ResNet-50 (DeepAA)     Accuracy (%)  78.3     -         Unverified
8   ResNet-50 (TA wide)    Accuracy (%)  78.07    -         Unverified
9   ResNet-50 (LoRot-E)    Accuracy (%)  77.72    -         Unverified
10  ResNet-50 (LoRot-I)    Accuracy (%)  77.71    -         Unverified
#  Model                               Metric            Claimed  Verified  Status
1  WideResNet-40-2 (Faster AA)         Percentage error  3.7      -         Unverified
2  Shake-Shake (26 2×32d) (Faster AA)  Percentage error  2.7      -         Unverified
3  WideResNet-28-10 (Faster AA)        Percentage error  2.6      -         Unverified
4  Shake-Shake (26 2×112d) (Faster AA) Percentage error  2        -         Unverified
5  Shake-Shake (26 2×96d) (Faster AA)  Percentage error  2        -         Unverified
#  Model    Metric                   Claimed  Verified  Status
1  DiffAug  Classification Accuracy  92.7     -         Unverified
2  PaCMAP   Classification Accuracy  85.3     -         Unverified
3  hNNE     Classification Accuracy  77.4     -         Unverified
4  TopoAE   Classification Accuracy  74.6     -         Unverified