SOTAVerified

Data Augmentation

Data augmentation encompasses techniques that expand a dataset by applying modifications to its existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps prevent overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include token swapping, deletion, and random insertion, among others.
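The NLP techniques above can be sketched with plain token-level operations. The function names and parameters below are illustrative, not taken from any particular library; this is a minimal sketch of random deletion and random swap over a tokenized sentence.

```python
import random

def random_deletion(tokens, p=0.1, rng=None):
    """Drop each token independently with probability p, keeping at least one token."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

def random_swap(tokens, n_swaps=1, rng=None):
    """Swap the tokens at two random positions, n_swaps times."""
    rng = rng or random.Random(0)
    tokens = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

sentence = "data augmentation increases dataset diversity".split()
print(random_deletion(sentence, p=0.3))
print(random_swap(sentence, n_swaps=2))
```

Each call produces a perturbed copy of the input, so one labeled sentence can yield several training examples; random insertion works analogously by inserting synonyms of existing tokens at random positions.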


(Image credit: Albumentations)

Papers

Showing 7326–7350 of 8378 papers

Title | Status | Hype
Removing Geometric Bias in One-Class Anomaly Detection with Adaptive Feature Perturbation | Code | 0
Long Term Stock Prediction based on Financial Statements | Code | 0
Removing Undesirable Feature Contributions Using Out-of-Distribution Data | Code | 0
Hard Regularization to Prevent Deep Online Clustering Collapse without Data Augmentation | Code | 0
Look Beyond Bias with Entropic Adversarial Data Augmentation | Code | 0
Tight PAC-Bayesian Risk Certificates for Contrastive Learning | Code | 0
Handling Syntactic Divergence in Low-resource Machine Translation | Code | 0
Data-Efficient Augmentation for Training Neural Networks | Code | 0
A Quality-based Syntactic Template Retriever for Syntactically-controlled Paraphrase Generation | Code | 0
Data-Driven Self-Supervised Graph Representation Learning | Code | 0
Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers | Code | 0
Low-Complexity Acoustic Scene Classification Using Parallel Attention-Convolution Network | Code | 0
Beyond Domain Randomization: Event-Inspired Perception for Visually Robust Adversarial Imitation from Videos | Code | 0
SpliceMix: A Cross-scale and Semantic Blending Augmentation Strategy for Multi-label Image Classification | Code | 0
Representation Learning for Clustering via Building Consensus | Code | 0
SplitMixer: Fat Trimmed From MLP-like Models | Code | 0
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs | Code | 0
Data Augmentation for Generating Synthetic Electrogastrogram Time Series | Code | 0
Low-Resource Court Judgment Summarization for Common Law Systems | Code | 0
Subspace-Configurable Networks | Code | 0
Data Distribution Bottlenecks in Grounding Language Models to Knowledge Bases | Code | 0
Reprint: a randomized extrapolation based on principal components for data augmentation | Code | 0
Approximate Bijective Correspondence for isolating factors of variation | Code | 0
Use of speaker recognition approaches for learning and evaluating embedding representations of musical instrument sounds | Code | 0
Habaek: High-performance water segmentation through dataset expansion and inductive bias optimization | Code | 0
Page 294 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | - | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | - | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | - | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | - | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | - | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | - | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | - | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | - | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | - | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | - | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | - | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | - | Unverified
4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | - | Unverified
5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | - | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | - | Unverified
3 | hNNE | Classification Accuracy | 77.4 | - | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | - | Unverified