Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
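As a minimal sketch of the idea, the one-step FGSM attack perturbs each input pixel by a small amount in the direction that increases the model's loss. The snippet below assumes a PyTorch image classifier `model`, inputs `x` scaled to [0, 1], integer labels `y`, and an illustrative `epsilon`; none of these names or values come from the papers listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: move each pixel by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image inside the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```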

Papers

Showing 1601–1610 of 1808 papers

Title | Status | Hype
Residue-Based Natural Language Adversarial Attack Detection | Code | 0
Resilience of Named Entity Recognition Models under Adversarial Attack | Code | 0
KGPA: Robustness Evaluation for Large Language Models via Cross-Domain Knowledge Graphs | Code | 0
KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations | Code | 0
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary | Code | 0
Adversarial and Clean Data Are Not Twins | Code | 0
Adversarial Training for Physics-Informed Neural Networks | Code | 0
Accelerated Stochastic Gradient-free and Projection-free Methods | Code | 0
Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization | Code | 0
XSS Adversarial Attacks Based on Deep Reinforcement Learning: A Replication and Extension Study | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
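The "Attack: PGD20" and "Attack: AutoAttack" metrics report robust accuracy, i.e. the share of test examples still classified correctly under a 20-step projected gradient descent attack or under the AutoAttack ensemble, respectively. As a rough, hedged sketch of how such a PGD-20 number could be computed (assuming a PyTorch classifier `model`, a test `loader`, and illustrative `epsilon`/`alpha` step sizes; the actual evaluation setups of the listed models may differ):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD with a random start; steps=20 corresponds to 'PGD20'."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the epsilon-ball around x and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Percentage of test examples still classified correctly under the attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return 100.0 * correct / total
```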