SOTAVerified

Data Augmentation

Data augmentation encompasses techniques that expand a dataset by creating modified copies of its existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps prevent overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include random word swapping, deletion, and insertion, among others.
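The transformations named above can be sketched in a few lines of plain Python and NumPy. This is a minimal illustration, not a production augmentation pipeline (libraries such as Albumentations cover the vision side far more thoroughly); the function names and the choice of crop size are assumptions made for the example.

```python
import random

import numpy as np


def augment_image(img: np.ndarray, rng: random.Random) -> np.ndarray:
    """Apply one randomly chosen geometric transform to an H x W x C image."""
    choice = rng.choice(["flip", "rotate", "crop"])
    if choice == "flip":
        return np.fliplr(img)                      # horizontal flip
    if choice == "rotate":
        return np.rot90(img, k=rng.randint(1, 3))  # rotate 90/180/270 degrees
    # random crop to 3/4 of each spatial dimension
    h, w = img.shape[:2]
    ch, cw = 3 * h // 4, 3 * w // 4
    top, left = rng.randint(0, h - ch), rng.randint(0, w - cw)
    return img[top:top + ch, left:left + cw]


def augment_text(words: list[str], rng: random.Random) -> list[str]:
    """Apply one of random swap / deletion / insertion to a token list."""
    words = list(words)
    op = rng.choice(["swap", "delete", "insert"])
    if op == "swap" and len(words) >= 2:
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    elif op == "delete" and len(words) >= 2:
        del words[rng.randrange(len(words))]
    elif op == "insert":
        # duplicate a random word at a random position
        words.insert(rng.randrange(len(words) + 1), rng.choice(words))
    return words
```

In practice such transforms are applied on the fly during training, so each epoch sees a slightly different version of every example, which is what produces the regularizing effect described above.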

(Image credit: Albumentations)

Papers

Showing 3701–3725 of 8378 papers

Title | Hype
Gibbs Max-margin Topic Models with Data Augmentation | 0
GIMM: InfoMin-Max for Automated Graph Contrastive Learning | 0
Boosting Few-Shot Segmentation via Instance-Aware Data Augmentation and Local Consensus Guided Cross Attention | 0
HyperTime: Implicit Neural Representation for Time Series | 0
Global Context Is All You Need for Parallel Efficient Tractography Parcellation | 0
DiFiC: Your Diffusion Model Holds the Secret to Fine-Grained Clustering | 0
Global Intervention and Distillation for Federated Out-of-Distribution Generalization | 0
Creation of Novel Soft Robot Designs using Generative AI | 0
Global Mixup: Eliminating Ambiguity with Clustering Relationships | 0
Global Mixup: Eliminating Ambiguity with Clustering | 0
Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study | 0
Boosting Event Extraction with Denoised Structure-to-Text Augmentation | 0
A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation | 0
Glyph Features Matter: A Multimodal Solution for EvaHan in LT4HALA2022 | 0
DiffusionRIR: Room Impulse Response Interpolation using Diffusion Models | 0
I2C at SemEval-2022 Task 4: Patronizing and Condescending Language Detection using Deep Learning Techniques | 0
Goal-Conditioned Data Augmentation for Offline Reinforcement Learning | 0
Goal-Embedded Dual Hierarchical Model for Task-Oriented Dialogue Generation | 0
Diffusion Prism: Enhancing Diversity and Morphology Consistency in Mask-to-Image Diffusion | 0
Diffusion Model with Clustering-based Conditioning for Food Image Generation | 0
Good Data Is All Imitation Learning Needs | 0
A Competitive Method to VIPriors Object Detection Challenge | 0
Auditory-Based Data Augmentation for End-to-End Automatic Speech Recognition | 0
Good-Enough Example Extrapolation | 0
Diffusion Models for Robotic Manipulation: A Survey | 0
Page 149 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | — | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | — | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | — | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | — | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | — | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | — | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | — | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | — | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | — | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | — | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | — | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | — | Unverified
4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | — | Unverified
5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | — | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | — | Unverified
3 | hNNE | Classification Accuracy | 77.4 | — | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | — | Unverified