SOTAVerified

Data Augmentation

Data augmentation encompasses techniques that increase the number of training examples by applying modifications to the original dataset. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps prevent overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include random swapping, deletion, and insertion, among others.
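As a minimal sketch of the NLP augmentations mentioned above (random deletion and random swapping), using only the Python standard library; the function names and probability values are illustrative, not from any particular library:

```python
import random

def random_deletion(tokens, p=0.1, seed=None):
    """Drop each token with probability p; always keep at least one token."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

def random_swap(tokens, n_swaps=1, seed=None):
    """Swap two randomly chosen positions n_swaps times."""
    rng = random.Random(seed)
    out = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(out)), rng.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

sentence = "data augmentation grows the training set".split()
print(random_deletion(sentence, p=0.3, seed=0))
print(random_swap(sentence, n_swaps=2, seed=0))
```

Each call produces a slightly perturbed copy of the input, so one labeled sentence can yield several distinct training examples while keeping its label.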


(Image credit: Albumentations)

Papers

Showing 926–950 of 8,378 papers

| Title | Status | Hype |
|---|---|---|
| On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles | Code | 1 |
| Motion-Focused Contrastive Learning of Video Representations | Code | 1 |
| Learning Fair Node Representations with Graph Counterfactual Fairness | Code | 1 |
| Uncertainty-Aware Cascaded Dilation Filtering for High-Efficiency Deraining | Code | 1 |
| EM-driven unsupervised learning for efficient motion segmentation | Code | 1 |
| A 1D CNN for high accuracy classification and transfer learning in motor imagery EEG-based brain-computer interface | Code | 1 |
| AutoBalance: Optimized Loss Functions for Imbalanced Data | Code | 1 |
| On the Cross-dataset Generalization in License Plate Recognition | Code | 1 |
| CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings | Code | 1 |
| MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection | Code | 1 |
| Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond | Code | 1 |
| Role of Data Augmentation Strategies in Knowledge Distillation for Wearable Sensor Data | Code | 1 |
| PRIME: A few primitives can boost robustness to common corruptions | Code | 1 |
| MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding | Code | 1 |
| Watermarking Images in Self-Supervised Latent Spaces | Code | 1 |
| High Fidelity Visualization of What Your Self-Supervised Representation Knows About | Code | 1 |
| Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images | Code | 1 |
| Deep Hash Distillation for Image Retrieval | Code | 1 |
| Imagine by Reasoning: A Reasoning-Based Implicit Semantic Data Augmentation for Long-Tailed Classification | Code | 1 |
| On the use of Cortical Magnification and Saccades as Biological Proxies for Data Augmentation | Code | 1 |
| Improving Compositional Generalization with Latent Structure and Data Augmentation | Code | 1 |
| CT4Rec: Simple yet Effective Consistency Training for Sequential Recommendation | Code | 1 |
| Stereoscopic Universal Perturbations across Different Architectures and Datasets | Code | 1 |
| PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures | Code | 1 |
| Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection | Code | 1 |
Page 38 of 336

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified |
| 2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified |
| 3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified |
| 4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified |
| 5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified |
| 6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified |
| 7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified |
| 8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified |
| 9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified |
| 10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified |
| 2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified |
| 3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified |
| 4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified |
| 5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DiffAug | Classification Accuracy | 92.7 | | Unverified |
| 2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified |
| 3 | hNNE | Classification Accuracy | 77.4 | | Unverified |
| 4 | TopoAE | Classification Accuracy | 74.6 | | Unverified |