SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of a model's input that changes the prediction of a machine learning model. The perturbation can be very small, even imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
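As a minimal sketch of the idea, the one-step Fast Gradient Sign Method (FGSM) perturbs each input coordinate by a small amount `eps` in the direction that increases the loss. The toy logistic "model" below (weights `w`, input `x`) is a hypothetical stand-in chosen so the gradient can be written analytically; it is not from any paper listed here.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One-step FGSM: shift each coordinate by eps in the direction
    that increases the loss (the sign of the loss gradient)."""
    return x + eps * np.sign(grad)

# Toy logistic model: p = sigmoid(w . x); loss = -log p for true label 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.8])

p = 1.0 / (1.0 + np.exp(-(w @ x)))
# Analytic gradient of -log(sigmoid(w . x)) with respect to x:
grad = -(1.0 - p) * w

x_adv = fgsm_perturb(x, grad, eps=0.05)
# x_adv differs from x by at most eps per coordinate, yet the
# model's confidence in the true label drops.
```

The perturbation stays inside an L-infinity ball of radius `eps` around the clean input, which is what makes it hard to notice.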

Papers

Showing 1691–1700 of 1808 papers

Title | Status | Hype
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0
Robustness-aware Automatic Prompt Optimization | Code | 0
A White-Box False Positive Adversarial Attack Method on Contrastive Loss Based Offline Handwritten Signature Verification Models | Code | 0
Targeted Adversarial Attacks against Neural Machine Translation | Code | 0
Model-Agnostic Defense for Lane Detection against Adversarial Attack | Code | 0
Adversarial Privacy-preserving Filter | Code | 0
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making | Code | 0
2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems | Code | 0
Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack | Code | 0
Detecting Adversarial Examples in Batches -- a geometrical approach | Code | 0
Page 170 of 181

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
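The "PGD20" metric above is commonly read as 20-step Projected Gradient Descent: repeated signed gradient steps, each followed by projection back into an L-infinity ball around the clean input. Below is a hedged sketch of that loop; the quadratic `grad_fn` is a toy stand-in for a real model's loss gradient, and the step sizes are illustrative, not from any benchmarked paper.

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps, alpha, steps=20):
    """Iterative gradient ascent on the loss, projected after each
    step into the L-inf ball of radius eps around the clean input x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))      # signed gradient step
        x = np.clip(x, x0 - eps, x0 + eps)       # project into the ball
    return x

# Toy loss 0.5 * ||x - t||^2 with gradient (x - t); ascending it
# drives x away from t while the projection caps the perturbation.
t = np.array([1.0, -1.0])
grad_fn = lambda x: x - t

x0 = np.zeros(2)
x_adv = pgd_linf(x0, grad_fn, eps=0.1, alpha=0.02, steps=20)
# x_adv never leaves the eps-ball around x0, regardless of step count.
```

Multi-step PGD is a stronger attack than one-step methods because the projection lets it take many small steps while still honoring the same perturbation budget.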