
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye; a minimal sketch of one such attack appears below the source note.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
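
As a concrete illustration, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks. It is not taken from this page: the PyTorch setting, the `fgsm_attack` name, and the epsilon budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return a perturbed copy of x that the model is more likely to misclassify.

    model (a classifier), x (inputs scaled to [0, 1]), and y (true labels)
    are assumed to be supplied by the caller; epsilon bounds the perturbation.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step raises the loss while keeping each pixel
    # within epsilon of its original value.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

With a small budget such as epsilon = 8/255, the perturbed image typically looks unchanged to a person, yet can flip the model's prediction.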

Papers

Showing 1121–1130 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| Mutual-modality Adversarial Attack with Semantic Perturbation | | 0 |
| NATTACK: A STRONG AND UNIVERSAL GAUSSIAN BLACK-BOX ADVERSARIAL ATTACK | | 0 |
| Towards Certified Defense for Unrestricted Adversarial Attacks | | 0 |
| Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons | | 0 |
| Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data | | 0 |
| Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty | | 0 |
| Near Optimal Adversarial Attacks on Stochastic Bandits and Defenses with Smoothed Responses | | 0 |
| NeRFTAP: Enhancing Transferability of Adversarial Patches on Face Recognition using Neural Radiance Fields | | 0 |
| Adversarial Robustness for Deep Learning-based Wildfire Prediction Models | | 0 |
| ADMM based Distributed State Observer Design under Sparse Sensor Attacks | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | | Unverified |
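
In the table above, "Attack: PGD20" denotes accuracy under a 20-step Projected Gradient Descent attack, the iterative attack popularized by [madry2018]. Below is a minimal L-infinity sketch; the step size alpha and budget epsilon are illustrative assumptions, not values used by the benchmarks here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """20-step L-infinity PGD: repeated signed gradient steps, each one
    projected back into the epsilon-ball around the clean input x."""
    # Random start inside the epsilon-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()
```

A model's PGD20 score is then the share of test inputs it still classifies correctly after this attack.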
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
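
"Attack: AutoAttack" refers to the parameter-free attack ensemble of Croce and Hein, a common standard for reporting robust accuracy. A sketch of the usual evaluation follows; it assumes the `autoattack` package is installed, that `model`, `x_test`, and `y_test` are defined elsewhere, and that the epsilon budget is an illustrative choice rather than one used by these benchmarks.

```python
import torch
from autoattack import AutoAttack  # pip install autoattack

# model, x_test (inputs in [0, 1]), and y_test are assumed to be defined.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)

# Robust accuracy: the fraction of adversarial inputs still classified correctly.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(dim=1) == y_test).float().mean()
print(f"Robust accuracy under AutoAttack: {robust_acc.item():.2%}")
```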