
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
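The idea above can be sketched with the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks: perturb the input in the direction of the sign of the loss gradient. This is a minimal toy illustration (a hand-rolled logistic-regression "model", not any model from the tables below):

```python
import numpy as np

def predict(w, b, x):
    """Toy logistic-regression score: class 1 if sigmoid(w.x + b) > 0.5."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(w, b, x, y, eps):
    """One FGSM step for logistic loss.
    For this model the gradient of the loss w.r.t. x is (p - y) * w,
    so we move each coordinate by eps in the sign of that gradient."""
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical weights and input, chosen so the clean prediction is class 1.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.2, -0.2])  # clean input, true label 1
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x) > 0.5)      # clean prediction: True (class 1)
print(predict(w, b, x_adv) > 0.5)  # after perturbation: False (flipped)
```

With `eps=0.5` the perturbation here is visible; real attacks such as PGD (iterated FGSM) or AutoAttack, which appear as metrics in the benchmark tables below, use much smaller budgets so the change stays imperceptible.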

Papers

Showing 721–730 of 1808 papers

| Title | Status | Hype |
|---|---|---|
| ErasableMask: A Robust and Erasable Privacy Protection Scheme against Black-box Face Recognition Models | | 0 |
| ASVspoof 5: Design, Collection and Validation of Resources for Spoofing, Deepfake, and Adversarial Attack Detection Using Crowdsourced Speech | | 0 |
| Analyzing Robustness of the Deep Reinforcement Learning Algorithm in Ramp Metering Applications Considering False Data Injection Attack and Defense | | 0 |
| EVALOOP: Assessing LLM Robustness in Programming from a Self-consistency Perspective | | 0 |
| AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial Examples | | 0 |
| Evaluating Adversarial Robustness on Document Image Classification | | 0 |
| Defense-guided Transferable Adversarial Attacks | | 0 |
| Analytically Tractable Hidden-States Inference in Bayesian Neural Networks | | 0 |
| Evaluating Neural Model Robustness for Machine Comprehension | | 0 |
| Adversarial Attack with Pattern Replacement | | 0 |
Page 73 of 181

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet20 | Test Accuracy | 89.95 | 89.95 | (1) Community Verified |
| 2 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 7 | XU-Net | Robust Accuracy | 1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |