SOTAVerified

Data Augmentation

Data augmentation comprises techniques that expand a dataset by applying modifications to its existing examples. It not only grows the dataset but also increases its diversity, and when training machine learning models it acts as a regularizer that helps prevent overfitting.
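As a minimal sketch of how augmentation expands a dataset, the snippet below doubles a toy image dataset with horizontal flips. The array shapes and names are illustrative, not from any particular library:

```python
import numpy as np

# Hypothetical toy dataset: 4 "images" of shape 8x8 with binary labels.
rng = np.random.default_rng(0)
images = rng.random((4, 8, 8))
labels = np.array([0, 1, 0, 1])

# Horizontal flipping is label-preserving for many vision tasks,
# so each flipped copy is a valid new training example.
flipped = images[:, :, ::-1]

augmented_images = np.concatenate([images, flipped], axis=0)
augmented_labels = np.concatenate([labels, labels], axis=0)

print(augmented_images.shape)  # dataset size doubled: (8, 8, 8)
```

The flipped copies share their originals' labels, so the effective training set doubles without collecting new data.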

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include word swapping, random deletion, and random insertion, among others.
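The NLP operations mentioned above can be sketched in a few lines of plain Python. The function names and parameters here are illustrative, assuming word-level augmentation in the style of EDA-type methods:

```python
import random

def random_swap(words, n=1, seed=None):
    """Swap n random pairs of words in a token list."""
    rng = random.Random(seed)
    words = words.copy()
    for _ in range(n):
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.2, seed=None):
    """Delete each word independently with probability p."""
    rng = random.Random(seed)
    kept = [w for w in words if rng.random() > p]
    # Guard against deleting everything: keep at least one word.
    return kept if kept else [rng.choice(words)]

sentence = "data augmentation increases dataset diversity".split()
print(random_swap(sentence, n=1, seed=0))
print(random_deletion(sentence, p=0.3, seed=0))
```

Each call produces a perturbed variant of the input sentence; running the functions with different seeds yields multiple augmented examples from one original.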


(Image credit: Albumentations)

Papers

Showing 2501–2525 of 8378 papers

Efficient Semi-supervised Consistency Training for Natural Language Understanding
Read and Think: An Efficient Step-wise Multimodal Language Model for Document Understanding and Reasoning
Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness
Are Deep Learning Models Robust to Partial Object Occlusion in Visual Recognition Tasks?
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models
Efficient Scheduling of Data Augmentation for Deep Reinforcement Learning
Efficient sign language recognition system and dataset creation method based on deep learning and image processing
Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget
Embedding Space Augmentation for Weakly Supervised Learning in Whole-Slide Images
Encoding Power Traces as Images for Efficient Side-Channel Analysis
Center-wise Local Image Mixture For Contrastive Representation Learning
Center-aware Adversarial Augmentation for Single Domain Generalization
A Recurrent YOLOv8-based framework for Event-Based Object Detection
Are Current Task-oriented Dialogue Systems Able to Satisfy Impolite Users?
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions
Efficient Neural Network Training via Subset Pretraining
CEKD: Cross Ensemble Knowledge Distillation for Augmented Fine-grained Data
Are conditional GANs explicitly conditional?
Adversarial Training for Patient-Independent Feature Learning with IVOCT Data for Plaque Classification
CDUPatch: Color-Driven Universal Adversarial Patch Attack for Dual-Modal Visible-Infrared Detectors
CDS: Data Synthesis Method Guided by Cognitive Diagnosis Theory
NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty Quantification
Efficient Out-of-Distribution Detection via CVAE data Generation
CCFace: Classification Consistency for Low-Resolution Face Recognition
Page 101 of 336

Benchmark Results

#   Model                   Metric        Claimed  Verified  Status
1   DeiT-B (+MixPro)        Accuracy (%)  82.9     —         Unverified
2   ResNet-200 (DeepAA)     Accuracy (%)  81.32    —         Unverified
3   DeiT-S (+MixPro)        Accuracy (%)  81.3     —         Unverified
4   ResNet-200 (Fast AA)    Accuracy (%)  80.6     —         Unverified
5   ResNet-200 (UA)         Accuracy (%)  80.4     —         Unverified
6   ResNet-200 (AA)         Accuracy (%)  80       —         Unverified
7   ResNet-50 (DeepAA)      Accuracy (%)  78.3     —         Unverified
8   ResNet-50 (TA wide)     Accuracy (%)  78.07    —         Unverified
9   ResNet-50 (LoRot-E)     Accuracy (%)  77.72    —         Unverified
10  ResNet-50 (LoRot-I)     Accuracy (%)  77.71    —         Unverified
#  Model                               Metric            Claimed  Verified  Status
1  WideResNet-40-2 (Faster AA)         Percentage error  3.7      —         Unverified
2  Shake-Shake (26 2×32d) (Faster AA)  Percentage error  2.7      —         Unverified
3  WideResNet-28-10 (Faster AA)        Percentage error  2.6      —         Unverified
4  Shake-Shake (26 2×112d) (Faster AA) Percentage error  2        —         Unverified
5  Shake-Shake (26 2×96d) (Faster AA)  Percentage error  2        —         Unverified
#  Model    Metric                   Claimed  Verified  Status
1  DiffAug  Classification Accuracy  92.7     —         Unverified
2  PaCMAP   Classification Accuracy  85.3     —         Unverified
3  hNNE     Classification Accuracy  77.4     —         Unverified
4  TopoAE   Classification Accuracy  74.6     —         Unverified