Adversarial Robustness
Adversarial Robustness evaluates how well machine learning models withstand adversarial attacks: inputs with small, deliberately crafted perturbations designed to cause misclassification.
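A common attack used in such evaluations is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. A minimal sketch, using a toy logistic-regression model (the weights and inputs below are illustrative, not from any benchmarked system):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(x, y, w, b):
    # Gradient of binary cross-entropy w.r.t. the input x
    # for a logistic-regression model p = sigmoid(w.x + b): (p - y) * w
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm_perturb(x, grad, eps):
    # FGSM: step of size eps in the sign of the loss gradient,
    # which maximally increases the loss under an L-infinity budget.
    return x + eps * np.sign(grad)

# Hypothetical model and input, for illustration only
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])
y = 1.0  # true label

g = loss_grad_wrt_x(x, y, w, b)
x_adv = fgsm_perturb(x, g, eps=0.1)  # adversarial example within eps of x
```

Each coordinate of `x_adv` differs from `x` by exactly `eps`, and the model's confidence in the true label drops on the perturbed input.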
Papers
1,746 papers report results on this task.
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | — | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | — | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | — | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | — | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.50 | — | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.50 | — | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | — | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | — | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | — | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.30 | — | Unverified |
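The Accuracy column reports robust accuracy: the fraction of examples a model still classifies correctly after adversarial perturbation. A minimal sketch of the computation, with hypothetical labels and predictions (not taken from any entry above):

```python
import numpy as np

def robust_accuracy(y_true, y_pred_adv):
    """Fraction of examples still classified correctly
    when predictions are made on adversarially perturbed inputs."""
    y_true = np.asarray(y_true)
    y_pred_adv = np.asarray(y_pred_adv)
    return float((y_true == y_pred_adv).mean())

# Illustrative only: 3 of 5 predictions survive the attack
acc = robust_accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```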