
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of a model's input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
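One classic way to find such a perturbation is the Fast Gradient Sign Method (FGSM), which takes a single gradient step to increase the model's loss. Below is a minimal PyTorch sketch; the classifier `model`, inputs `x` (assumed to lie in [0, 1]), labels `y`, and the budget `epsilon` are illustrative placeholders, not details from the source paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x to increase the loss, with the
    perturbation bounded by epsilon in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid input (assumes inputs are scaled to [0, 1]).
    return x_adv.clamp(0, 1).detach()
```

With a small budget (e.g. 8/255 for images), the perturbed input is often visually indistinguishable from the original yet flips the model's prediction.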

Papers

Showing 1151–1160 of 1808 papers

| Title | Status | Hype |
|---|---|---|
| Input-specific Attention Subnetworks for Adversarial Detection | — | 0 |
| Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning | — | 0 |
| Defense Against Explanation Manipulation | — | 0 |
| Adversarial Attack against Cross-lingual Knowledge Graph Alignment | — | 0 |
| An Actor-Critic Method for Simulation-Based Optimization | — | 0 |
| AdvCodeMix: Adversarial Attack on Code-Mixed Data | — | 0 |
| Disrupting Deep Uncertainty Estimation Without Harming Accuracy | Code | 0 |
| Generating Watermarked Adversarial Texts | — | 0 |
| Covariate Balancing Methods for Randomized Controlled Trials Are Not Adversarially Robust | — | 0 |
| Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified |
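In the tables above, "Attack: PGD20" denotes robust accuracy under a 20-step Projected Gradient Descent (PGD) attack, and AutoAttack is a standard parameter-free ensemble of attacks. As a rough sketch of what a PGD20 evaluation runs, here is a minimal PyTorch implementation; the step size `alpha` and budget `epsilon` are illustrative assumptions, not the benchmarks' actual settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """PGD: repeat small gradient-sign steps, projecting back into
    the L-infinity ball of radius epsilon around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon ball, then into valid input range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

Robust accuracy is then the fraction of test inputs the model still classifies correctly after this attack.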