Adversarial Attack
An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.
Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
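As a concrete illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks of this kind. The `fgsm_attack` helper, the model and inputs, and the `epsilon = 8/255` budget are placeholders chosen for illustration, not details taken from this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with the fast gradient sign method (FGSM).

    Takes a single gradient step of size epsilon in the direction that
    most increases the classification loss, so the perturbed input is
    more likely to be misclassified while staying close to the original.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by +/- epsilon along the sign of the loss gradient,
    # then clip back to the valid [0, 1] image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger attacks such as PGD or AutoAttack iterate this idea, but the core mechanism of following the loss gradient within a small perturbation budget is the same.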
Benchmark Results
| # | Model | Metric | Claimed (%) | Verified (%) | Status |
|---|---|---|---|---|---|
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified |
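The AutoAttack figures above are robust accuracies, i.e. the share of test inputs still classified correctly under the full AutoAttack ensemble. As a rough sketch of how such an evaluation is typically run with the reference `autoattack` package, using a placeholder model and data and an assumed `eps = 8/255` L-infinity budget (the table does not state its threat model):

```python
import torch
import torch.nn as nn
from autoattack import AutoAttack  # reference implementation: github.com/fra31/auto-attack

# Placeholder classifier and data; in practice these would be a trained
# model and real test images in [0, 1] with integer class labels.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(16, 3, 32, 32)
y_test = torch.randint(0, 10, (16,))

# eps = 8/255 is a common L-inf budget; it is only an assumption here,
# since the entries above do not specify their perturbation budget.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')

# Runs the standard AutoAttack ensemble (APGD-CE, APGD-T, FAB-T, Square)
# and reports robust accuracy on the returned adversarial examples.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
```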