SOTAVerified

Data Augmentation

Data augmentation is a family of techniques that expand a training set by applying modifications to existing examples. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps prevent overfitting.

Data augmentation techniques have proven useful in domains such as computer vision and NLP. In computer vision, common transformations include cropping, flipping, and rotation. In NLP, techniques include random swapping, deletion, and insertion of words, among others.
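The NLP operations mentioned above can be sketched in a few lines of plain Python. This is a minimal illustration, loosely following the "easy data augmentation" recipe; the function names and parameters here are illustrative, not from any particular library.

```python
import random

def random_swap(tokens, n=1):
    """Swap two randomly chosen tokens, n times."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token independently with probability p (keep at least one)."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

def random_insertion(tokens, n=1):
    """Insert a copy of a random token at a random position, n times."""
    tokens = tokens[:]
    for _ in range(n):
        tokens.insert(random.randrange(len(tokens) + 1), random.choice(tokens))
    return tokens

sentence = "data augmentation increases dataset diversity".split()
print(random_swap(sentence))
print(random_deletion(sentence))
print(random_insertion(sentence))
```

Each call produces a slightly perturbed copy of the input sentence; applying these operations to every training example several times multiplies the size of the dataset while preserving most of its meaning.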


(Image credit: Albumentations)

Papers

Showing 1251–1275 of 8378 papers

| Title | Status | Hype |
| --- | --- | --- |
| Generalizable Visual Reinforcement Learning with Segment Anything Model | Code | 1 |
| Generalization in Reinforcement Learning by Soft Data Augmentation | Code | 1 |
| Controllable Dialogue Simulation with In-Context Learning | Code | 1 |
| Controllable Data Augmentation for Few-Shot Text Mining with Chain-of-Thought Attribute Manipulation | Code | 1 |
| A Competitive Method for Dog Nose-print Re-identification | Code | 1 |
| Generation of microbial colonies dataset with deep learning style transfer | Code | 1 |
| Learning from Counterfactual Links for Link Prediction | Code | 1 |
| Generative Adversarial Networks | Code | 1 |
| ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models | Code | 1 |
| Generative Data Augmentation for Aspect Sentiment Quad Prediction | Code | 1 |
| Generative Dataset Distillation Based on Diffusion Model | Code | 1 |
| GeNet: A Graph Neural Network-based Anti-noise Task-Oriented Semantic Communication Paradigm | Code | 1 |
| 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation | Code | 1 |
| GenMapping: Unleashing the Potential of Inverse Perspective Mapping for Robust Online HD Map Construction | Code | 1 |
| Boosted Neural Decoders: Achieving Extreme Reliability of LDPC Codes for 6G Networks | Code | 1 |
| GeomGCL: Geometric Graph Contrastive Learning for Molecular Property Prediction | Code | 1 |
| Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes | Code | 1 |
| Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs | Code | 1 |
| GMML is All you Need | Code | 1 |
| GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation | Code | 1 |
| Contrast and Classify: Training Robust VQA Models | Code | 1 |
| Contrastive Code Representation Learning | Code | 1 |
| Graph-level Representation Learning with Joint-Embedding Predictive Architectures | Code | 1 |
| Graph Masked Autoencoder for Sequential Recommendation | Code | 1 |
| AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models | Code | 1 |
Page 51 of 336

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-B (+MixPro) | Accuracy (%) | 82.9 | | Unverified |
| 2 | ResNet-200 (DeepAA) | Accuracy (%) | 81.32 | | Unverified |
| 3 | DeiT-S (+MixPro) | Accuracy (%) | 81.3 | | Unverified |
| 4 | ResNet-200 (Fast AA) | Accuracy (%) | 80.6 | | Unverified |
| 5 | ResNet-200 (UA) | Accuracy (%) | 80.4 | | Unverified |
| 6 | ResNet-200 (AA) | Accuracy (%) | 80 | | Unverified |
| 7 | ResNet-50 (DeepAA) | Accuracy (%) | 78.3 | | Unverified |
| 8 | ResNet-50 (TA wide) | Accuracy (%) | 78.07 | | Unverified |
| 9 | ResNet-50 (LoRot-E) | Accuracy (%) | 77.72 | | Unverified |
| 10 | ResNet-50 (LoRot-I) | Accuracy (%) | 77.71 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | WideResNet-40-2 (Faster AA) | Percentage error | 3.7 | | Unverified |
| 2 | Shake-Shake (26 2×32d) (Faster AA) | Percentage error | 2.7 | | Unverified |
| 3 | WideResNet-28-10 (Faster AA) | Percentage error | 2.6 | | Unverified |
| 4 | Shake-Shake (26 2×112d) (Faster AA) | Percentage error | 2 | | Unverified |
| 5 | Shake-Shake (26 2×96d) (Faster AA) | Percentage error | 2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DiffAug | Classification Accuracy | 92.7 | | Unverified |
| 2 | PaCMAP | Classification Accuracy | 85.3 | | Unverified |
| 3 | hNNE | Classification Accuracy | 77.4 | | Unverified |
| 4 | TopoAE | Classification Accuracy | 74.6 | | Unverified |