SOTAVerified

Data Augmentation

Data augmentation comprises techniques that increase the number of training examples by applying modifications to existing data. It not only grows the dataset but also increases its diversity, and when training machine learning models it acts as a regularizer that helps prevent overfitting.

Data augmentation techniques have proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include token swapping, deletion, and random insertion, among others.
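As a minimal sketch of the transformations mentioned above, the snippet below applies a random flip/rotation to an image array and random swap/deletion to a token list. The function names and parameters here are illustrative, not from any particular library:

```python
import random
import numpy as np

def augment_image(img, rng):
    """Randomly flip an image horizontally and rotate it by a multiple of 90 degrees."""
    if rng.random() < 0.5:
        img = np.fliplr(img)       # horizontal flip
    k = rng.integers(0, 4)         # rotate by 0, 90, 180, or 270 degrees
    return np.rot90(img, k)

def augment_text(tokens, rng, p_delete=0.1):
    """Swap two random tokens, then drop each token with probability p_delete."""
    tokens = tokens[:]
    if len(tokens) > 1:
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    kept = [t for t in tokens if rng.random() > p_delete]
    return kept or tokens          # never return an empty example

# Example usage
img_out = augment_image(np.arange(6).reshape(2, 3), np.random.default_rng(0))
txt_out = augment_text("the quick brown fox".split(), random.Random(0))
```

In practice these operations are applied on the fly during training, so each epoch sees a slightly different version of every example.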


(Image credit: Albumentations)

Papers

Showing 8151–8175 of 8378 papers

| Title | Status | Hype |
| --- | --- | --- |
| Learning to Drive by Imitating Surrounding Vehicles | | 0 |
| Learning to fail: Predicting fracture evolution in brittle material models using recurrent graph convolutional neural networks | | 0 |
| Learning to Find Missing Video Frames with Synthetic Data Augmentation: A General Framework and Application in Generating Thermal Images Using RGB Cameras | | 0 |
| Learning to Generate Examples for Semantic Processing Tasks | | 0 |
| Learning to Generate Novel Classes for Deep Metric Learning | | 0 |
| Learning to Generate Synthetic Data via Compositing | | 0 |
| Learning to Hear Broken Motors: Signature-Guided Data Augmentation for Induction-Motor Diagnostics | | 0 |
| Learning to Identify Drilling Defects in Turbine Blades with Single Stage Detectors | | 0 |
| Learning to Ignore Adversarial Attacks | | 0 |
| Learning to mask: Towards generalized face forgery detection | | 0 |
| Learning to Rank Question Answer Pairs with Bilateral Contrastive Data Augmentation | | 0 |
| Learning to Refine Human Pose Estimation | | 0 |
| Learning towards Selective Data Augmentation for Dialogue Generation | | 0 |
| Learning Transferable Object-Centric Diffeomorphic Transformations for Data Augmentation in Medical Image Segmentation | | 0 |
| Learning under selective labels in the presence of expert consistency | | 0 |
| Teaching What You Should Teach: A Data-Based Distillation Method | | 0 |
| Learning with Exact Invariances in Polynomial Time | | 0 |
| Toward Robust Graph Semi-Supervised Learning against Extreme Data Scarcity | | 0 |
| Learning with invariances in random features and kernel models | | 0 |
| Learning with Limited Text Data | | 0 |
| Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling | | 0 |
| LEAVES: Learning Views for Time-Series Data in Contrastive Learning | | 0 |
| LeCun at SemEval-2021 Task 6: Detecting Persuasion Techniques in Text Using Ensembled Pretrained Transformers and Data Augmentation | | 0 |
| Led3D: A Lightweight and Efficient Deep Approach to Recognizing Low-Quality 3D Faces | | 0 |
Page 327 of 336

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified |
| 2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified |
| 3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified |
| 4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified |
| 5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified |
| 6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified |
| 7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified |
| 8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified |
| 9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified |
| 10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified |
| 2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified |
| 3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified |
| 4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified |
| 5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DiffAug | Classification Accuracy | 92.7 | | Unverified |
| 2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified |
| 3 | hNNE | Classification Accuracy | 77.4 | | Unverified |
| 4 | TopoAE | Classification Accuracy | 74.6 | | Unverified |