# Adversarial Robustness
Adversarial robustness evaluates how well machine learning models withstand adversarial attacks: inputs deliberately perturbed, often imperceptibly, to induce incorrect predictions.
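The canonical example of such an attack is the fast gradient sign method (FGSM), which shifts every input feature by a fixed step ε in the direction that increases the model's loss. The sketch below is illustrative only: it uses a hypothetical two-feature logistic classifier with made-up weights, not any model from the leaderboard.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic classifier: predict +1 when w @ x + b > 0 (weights are made up).
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else -1

def fgsm(x, y, eps):
    """One-step FGSM: move each feature by eps in the direction
    that increases the logistic loss for true label y."""
    z = w @ x + b
    grad_x = -y * sigmoid(-y * z) * w   # d/dx of -log(sigmoid(y * z))
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0])            # clean input, correctly classified as +1
x_adv = fgsm(x, y=1, eps=0.6)       # adversarial copy inside an L-inf ball
print(predict(x), predict(x_adv))   # the perturbation flips the prediction: 1 -1
```

Even though each feature moves by at most 0.6, the signed step exploits the gradient direction, so the prediction flips; robust accuracy is measured against exactly this kind of worst-case perturbation.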
## Papers

1,746 papers address this task.
## Benchmark Results
| # | Model | Metric | Claimed (%) | Verified (%) | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | — | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | — | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | — | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | — | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | — | Unverified |
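Several of the entries above (e.g. the Stochastic-LWTA/PGD rows) are evaluated or trained against projected gradient descent (PGD), the multi-step extension of the one-step gradient attack: each iteration takes a small signed-gradient step and then projects the result back into an ε-ball around the clean input. A minimal sketch on an assumed toy logistic model, not the leaderboard models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear model standing in for a real network.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else -1

def pgd_attack(x, y, eps=0.6, alpha=0.2, steps=5):
    """L-inf PGD: repeat a small signed-gradient step, then clip the
    iterate back into the eps-ball around the original input x."""
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        grad = -y * sigmoid(-y * z) * w           # gradient of logistic loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad)     # small FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the eps-ball
    return x_adv

x = np.array([1.0, 1.0])
x_adv = pgd_attack(x, y=1)
print(predict(x), predict(x_adv))   # prediction flips while staying in the eps-ball: 1 -1
```

Because PGD searches the ε-ball iteratively rather than in a single step, it is a stronger attack than FGSM, which is why PGD-based evaluation is the common yardstick for the accuracies reported in the table.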