
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be so small that it is imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
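
As a concrete illustration of the definition above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks. The PyTorch model, perturbation budget, and [0, 1] input range are illustrative assumptions, not taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=8 / 255):
    """Perturb `x` within an L-inf ball of radius `epsilon` so the
    model's prediction is pushed away from `label` (untargeted attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to keep
    # the input in the valid [0, 1] image range (assumed normalization).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

With a small budget such as epsilon = 8/255, the resulting perturbation is typically invisible to the human eye yet often flips the model's prediction.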

Papers

Showing 511-520 of 1808 papers

Title | Status | Hype
Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary | - | 0
A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation | Code | 0
On-Manifold Projected Gradient Descent | - | 0
PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification | Code | 1
Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection | Code | 0
Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs | Code | 0
On the Adversarial Robustness of Multi-Modal Foundation Models | Code | 1
Enhancing Adversarial Attacks: The Similar Target Method | Code | 0
Hiding Backdoors within Event Sequence Data via Poisoning Attacks | - | 0
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 | Community Verified (1)
2 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
4 | TRADES-ANCRA / ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
7 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
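
The "Attack: PGD20" and "Attack: AutoAttack" metrics above are conventionally read as robust accuracy: the percentage of test examples still classified correctly after a 20-step Projected Gradient Descent attack or the AutoAttack ensemble, respectively. A minimal sketch of such a PGD-20 evaluation loop follows, assuming a PyTorch classifier and illustrative attack parameters (epsilon = 8/255, step size 2/255); the function name and budgets are assumptions, not taken from any entry above.

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Percentage of examples classified correctly after a 20-step
    L-inf PGD attack. `model` is assumed to be in eval mode."""
    correct = total = 0
    for x, y in loader:
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back into the epsilon-ball
            # around the clean input and the valid [0, 1] range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```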