Adversarial Attack
An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.
Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
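The definition can be made concrete with projected gradient descent (PGD), the attack behind the "PGD20" metric in the benchmark tables below. The following is a minimal L-infinity PGD sketch in PyTorch; the classifier `model`, inputs `x`, and labels `y` are hypothetical placeholders, and the eps=8/255, alpha=2/255, 20-step settings are the common convention for this attack, not values taken from this page.

```python
# Minimal L-infinity PGD sketch (assumed setup: `model` is a classifier,
# `x` a batch of images in [0, 1], `y` integer labels).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Gradient ascent on the loss, projected into an L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    # Random start inside the eps-ball, as in Madry et al.'s formulation.
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Step along the gradient sign, then project back into the ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)  # keep a valid image range
        x_adv = x_adv.detach()
    return x_adv
```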
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified |
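The numbers above report robust accuracy: the percentage of test examples still classified correctly after the named attack. A sketch of how such a figure is typically produced, assuming the `autoattack` package (https://github.com/fra31/auto-attack) and a hypothetical pretrained `model` with a test batch `x_test`, `y_test`:

```python
# Robust-accuracy evaluation sketch with AutoAttack; `model`, `x_test`,
# and `y_test` are hypothetical placeholders, and eps=8/255 is the common
# L-inf budget, not a value taken from this page.
import torch
from autoattack import AutoAttack

def robust_accuracy(model, x_test, y_test, eps=8/255):
    """Run the standard AutoAttack suite and measure accuracy on its outputs."""
    model.eval()
    adversary = AutoAttack(model, norm='Linf', eps=eps, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == y_test).float().mean().item() * 100  # percent, as in the tables
```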