SOTAVerified

Data Augmentation

Data augmentation refers to techniques that expand a training set by applying modifications to existing examples. Beyond increasing the number of examples, augmentation also increases the diversity of the dataset. When training machine learning models, data augmentation acts as a regularizer and helps to avoid overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include word swapping, random deletion, and random insertion, among others.
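As a concrete illustration of both kinds of transform, here is a minimal, dependency-free Python sketch. The helper names are illustrative only, not taken from Albumentations or any other library:

```python
import random

# --- Computer vision: geometric transforms on a 2D pixel grid ---

def hflip(img):
    """Horizontal flip: reverse each row of the grid."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise: reverse the rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

# --- NLP: token-level perturbations ---

def random_swap(tokens, n=1, rng=random):
    """Swap two randomly chosen token positions, n times."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1, rng=random):
    """Drop each token with probability p, keeping at least one."""
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

def random_insertion(tokens, vocab, n=1, rng=random):
    """Insert n words drawn from vocab at random positions."""
    tokens = tokens[:]
    for _ in range(n):
        tokens.insert(rng.randrange(len(tokens) + 1), rng.choice(vocab))
    return tokens

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(hflip(img))     # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
print(rotate90(img))  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]

sent = "data augmentation improves model robustness".split()
print(random_swap(sent, rng=random.Random(0)))
```

Each transform produces a label-preserving variant of the input, which is what lets the augmented examples be added to the training set without relabeling.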

(Image credit: Albumentations)

Papers

Showing 3051–3075 of 8378 papers

Title | Status | Hype
Few-shot learning via tensor hallucination | Code | 0
Few-shot learning for COVID-19 Chest X-Ray Classification with Imbalanced Data: An Inter vs. Intra Domain Study | Code | 0
Few-shot learning through contextual data augmentation | Code | 0
Few-Shot Specific Emitter Identification via Hybrid Data Augmentation and Deep Metric Learning | Code | 0
Few-Shot Class Incremental Learning via Robust Transformer Approach | Code | 0
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0
Analysis and Optimization of Convolutional Neural Network Architectures | Code | 0
Fetal-BET: Brain Extraction Tool for Fetal MRI | Code | 0
Few-Shot Continual Learning via Flat-to-Wide Approaches | Code | 0
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation | Code | 0
Data Augmentation in a Hybrid Approach for Aspect-Based Sentiment Analysis | Code | 0
Classification Beats Regression: Counting of Cells from Greyscale Microscopic Images based on Annotation-free Training Samples | Code | 0
Data Augmentation Generative Adversarial Networks | Code | 0
Aesthetic Discrimination of Graph Layouts | Code | 0
Data Augmentation for Object Detection via Progressive and Selective Instance-Switching | Code | 0
Foresee What You Will Learn: Data Augmentation for Domain Generalization in Non-stationary Environment | Code | 0
Analysing the Robustness of Dual Encoders for Dense Retrieval Against Misspellings | Code | 0
Feature transforms for image data augmentation | Code | 0
Classification of Bark Beetle-Induced Forest Tree Mortality using Deep Learning | Code | 0
Revisiting Cross-Modal Knowledge Distillation: A Disentanglement Approach for RGBD Semantic Segmentation | Code | 0
Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks | Code | 0
Fast Mixing of Data Augmentation Algorithms: Bayesian Probit, Logit, and Lasso Regression | Code | 0
FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging | Code | 0
FaultFormer: Pretraining Transformers for Adaptable Bearing Fault Classification | Code | 0
Feature Expansion and enhanced Compression for Class Incremental Learning | Code | 0
Page 123 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | - | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | - | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | - | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | - | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | - | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | - | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | - | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | - | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | - | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | - | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | - | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | - | Unverified
4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | - | Unverified
5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | - | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | - | Unverified
3 | hNNE | Classification Accuracy | 77.4 | - | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | - | Unverified