
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of the input that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
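To make the definition concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based adversarial attacks, written in PyTorch. It is an illustrative example, not the method of any paper listed on this page; the toy classifier, the epsilon of 8/255, and the random data are assumptions chosen only for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge each input pixel in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb along the sign of the input gradient, then clamp to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and random data (assumptions), used only to show the attack's interface.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)           # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (4,))         # ground-truth labels
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

Iterating this step several times with a projection back into an epsilon-ball yields PGD, the attack behind the PGD20 metric in the benchmark results below.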

Papers

Showing 1451–1460 of 1808 papers

Title | Status | Hype
Minority Reports Defense: Defending Against Adversarial Patches | — | 0
Transferable Perturbations of Deep Feature Distributions | — | 0
Towards Feature Space Adversarial Attack | Code | 1
Enabling Fast and Universal Audio Adversarial Attack Using Generative Model | — | 0
On the Optimal Interaction Range for Multi-Agent Systems Under Adversarial Attack | — | 0
Improved Adversarial Training via Learned Optimizer | — | 0
A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers | — | 0
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty | — | 0
Adversarial Attacks and Defenses: An Interpretation Perspective | — | 0
BERT-ATTACK: Adversarial Attack Against BERT Using BERT | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified