
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
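
As a concrete illustration of the definition above, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015) in PyTorch. This is not the method of the source paper cited above; the model, labels, epsilon value, and the [0, 1] image range are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Fast Gradient Sign Method: one signed gradient step of size epsilon.

    `model` is any classifier returning logits; `x` holds images in [0, 1]
    and `y` the true labels (illustrative assumptions, not from the source).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss; for small epsilon the
    # perturbation is typically imperceptible yet can flip the prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Stronger attacks such as PGD iterate this signed step under a projection back into the epsilon-ball, which is what the PGD20 metric in the benchmark tables below refers to.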

Papers

Showing 1151–1175 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning | — | 0 |
| Object-Attentional Untargeted Adversarial Attack | — | 0 |
| Object-fabrication Targeted Attack for Object Detection | — | 0 |
| Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline | — | 0 |
| Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs | — | 0 |
| On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks | — | 0 |
| On Data Augmentation and Adversarial Risk: An Empirical Analysis | — | 0 |
| Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack | — | 0 |
| Adversarial Patch Attacks on Monocular Depth Estimation Networks | — | 0 |
| One for Many: an Instagram inspired black-box adversarial attack | — | 0 |
| One-Index Vector Quantization Based Adversarial Attack on Image Classification | — | 0 |
| Adversarial optimization leads to over-optimistic security-constrained dispatch, but sampling can help | — | 0 |
| One-Shot Adversarial Attacks on Visual Tracking With Dual Attention | — | 0 |
| A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers | — | 0 |
| Adversarial Neon Beam: A Light-based Physical Attack to DNNs | — | 0 |
| Adversarial Music: Real World Audio Adversary Against Wake-word Detection System | — | 0 |
| Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective | — | 0 |
| Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness | — | 0 |
| Only My Model On My Data: A Privacy Preserving Approach Protecting one Model and Deceiving Unauthorized Black-Box Models | — | 0 |
| On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration | — | 0 |
| On-Manifold Projected Gradient Descent | — | 0 |
| On Neural Network approximation of ideal adversarial attack and convergence of adversarial training | — | 0 |
| Towards more transferable adversarial attack in black-box manner | — | 0 |
| Adversarial Attacks and Defenses: An Interpretation Perspective | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified |
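
The PGD20 and AutoAttack entries above report robust accuracy: the share of test inputs the model still classifies correctly after an attack. Below is a minimal sketch of how a PGD-20 robust-accuracy evaluation is commonly computed, assuming a PyTorch classifier and inputs in [0, 1]; the epsilon and step-size defaults are illustrative, not the settings used by the papers in the tables.

```python
import torch
import torch.nn.functional as F

@torch.enable_grad()
def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Robust accuracy under a 20-step L-infinity PGD attack
    (the 'PGD20' metric, in the spirit of Madry et al., 2018)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        # Random start inside the epsilon-ball around the clean input.
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Signed ascent step, then projection back into the epsilon-ball
            # and the valid image range.
            x_adv = (x_adv.detach() + alpha * grad.sign())
            x_adv = x_adv.clamp(x - epsilon, x + epsilon).clamp(0, 1)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return 100.0 * correct / total
```

AutoAttack (Croce & Hein, 2020) plays the same role but ensembles several parameter-free attacks, so it is generally the stricter of the two metrics.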