SOTAVerified

Data Augmentation

Data augmentation refers to techniques that expand a dataset by creating modified copies of its existing examples. It increases not only the size of the dataset but also its diversity, and when training machine learning models it acts as a regularizer that helps prevent overfitting.

Data augmentation has proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include random swapping, deletion, and insertion of words, among others.
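The NLP operations above, and a simple image flip, can be sketched in plain Python. This is a minimal illustration with illustrative function names, not the API of any particular augmentation library (libraries such as Albumentations offer far richer, optimized versions):

```python
import random

def random_swap(tokens, n_swaps=1, rng=random):
    """Swap two randomly chosen token positions, n_swaps times."""
    out = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(tokens, p=0.1, rng=random):
    """Drop each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if rng.random() >= p]
    return kept if kept else [rng.choice(tokens)]

def random_insertion(tokens, n_inserts=1, rng=random):
    """Insert copies of randomly chosen existing tokens at random positions."""
    out = list(tokens)
    for _ in range(n_inserts):
        out.insert(rng.randrange(len(out) + 1), rng.choice(tokens))
    return out

def horizontal_flip(image):
    """Flip an image (a nested list of pixel rows) left-to-right."""
    return [row[::-1] for row in image]
```

Applying several such operations to each training example yields multiple augmented variants per original, which is how augmentation grows both the size and the diversity of a dataset.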

(Image credit: Albumentations)

Papers

Showing 1801–1825 of 8378 papers

Title | Status | Hype
Towards Channel-Resilient CSI-Based RF Fingerprinting using Deep Learning | — | 0
Enhancing Effectiveness and Robustness in a Low-Resource Regime via Decision-Boundary-aware Data Augmentation | — | 0
Addressing Concept Shift in Online Time Series Forecasting: Detect-then-Adapt | Code | 2
Vehicle Detection Performance in Nordic Region | — | 0
Your Image is My Video: Reshaping the Receptive Field via Image-To-Video Differentiable AutoAugmentation and Fusion | — | 0
LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement | Code | 2
IUST at ClimateActivism 2024: Towards Optimal Stance Detection: A Systematic Study of Architectural Choices and Data Cleaning Techniques | Code | 0
NaNa and MiGu: Semantic Data Augmentation Techniques to Enhance Protein Classification in Graph Neural Networks | Code | 0
Estimating Physical Information Consistency of Channel Data Augmentation for Remote Sensing Images | — | 0
MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation | Code | 1
What Matters for Active Texture Recognition With Vision-Based Tactile Sensors | — | 0
DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception | — | 0
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content | Code | 1
DreamDA: Generative Data Augmentation with Diffusion Models | Code | 1
TexTile: A Differentiable Metric for Texture Tileability | Code | 1
XPose: eXplainable Human Pose Estimation | — | 0
Do Generated Data Always Help Contrastive Learning? | Code | 1
TransformMix: Learning Transformation and Mixing Strategies from Data | — | 0
Federated Semi-supervised Learning for Medical Image Segmentation with intra-client and inter-client Consistency | — | 0
Automated Contrastive Learning Strategy Search for Time Series | — | 0
Sim2Real in Reconstructive Spectroscopy: Deep Learning with Augmented Device-Informed Data Simulation | Code | 0
IPCL: Iterative Pseudo-Supervised Contrastive Learning to Improve Self-Supervised Feature Representation | Code | 0
EffiPerception: an Efficient Framework for Various Perception Tasks | — | 0
Posterior Uncertainty Quantification in Neural Networks using Data Augmentation | Code | 0
SETA: Semantic-Aware Token Augmentation for Domain Generalization | Code | 1
Page 73 of 336

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | — | Unverified
2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | — | Unverified
3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | — | Unverified
4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | — | Unverified
5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | — | Unverified
6 | ResNet-200 (AA) | Accuracy (%) | 80 | — | Unverified
7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | — | Unverified
8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | — | Unverified
9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | — | Unverified
10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | — | Unverified
2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | — | Unverified
3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | — | Unverified
4 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | — | Unverified
5 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DiffAug | Classification Accuracy | 92.7 | — | Unverified
2 | PaCMAP | Classification Accuracy | 85.3 | — | Unverified
3 | hNNE | Classification Accuracy | 77.4 | — | Unverified
4 | TopoAE | Classification Accuracy | 74.6 | — | Unverified