SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates the vulnerability of machine learning models to various types of adversarial attacks.
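A common way to probe this vulnerability is to perturb an input in the direction that increases the model's loss. Below is a minimal sketch of one such attack, the Fast Gradient Sign Method (FGSM), on a toy linear classifier; the weights, input, and step size are invented for illustration, and real evaluations run attacks like this against trained networks.

```python
import math

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: step of size eps along the sign of
    the loss gradient with respect to the input."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear model (illustrative values): score = w . x, predict sign(score).
w = [1.0, -2.0, 0.5]
x = [0.3, -0.1, 0.2]   # clean input, classified positive
y = 1.0                # true label

dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))

# Logistic-loss gradient w.r.t. the input: -y * sigmoid(-y * w.x) * w;
# FGSM only needs its sign.
score = dot(w, x)
grad_x = [-y * (1.0 / (1.0 + math.exp(y * score))) * wi for wi in w]

x_adv = fgsm_perturb(x, grad_x, eps=0.25)
print(dot(w, x) > 0, dot(w, x_adv) > 0)  # → True False: the attack flips the prediction
```

A model's robust accuracy is then its accuracy on such perturbed inputs, which is what the benchmark tables below report.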

Papers

Showing 1501–1525 of 1746 papers

| Title | Status | Hype |
|-------|--------|------|
| A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking | | 0 |
| Developing Assurance Cases for Adversarial Robustness and Regulatory Compliance in LLMs | | 0 |
| Canonical Latent Representations in Conditional Diffusion Models | | 0 |
| Differentially Private Adversarial Robustness Through Randomized Perturbations | | 0 |
| Differentially Private Optimizers Can Learn Adversarially Robust Models | | 0 |
| A3T: Adversarially Augmented Adversarial Training | | 0 |
| Adversarial Robustness of Link Sign Prediction in Signed Graphs | | 0 |
| DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models | | 0 |
| Scoring Black-Box Models for Adversarial Robustness | | 0 |
| Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection | | 0 |
| Certified Robustness to Clean-Label Poisoning Using Diffusion Denoising | | 0 |
| DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning | | 0 |
| Discretization based Solutions for Secure Machine Learning against Adversarial Attacks | | 0 |
| Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness | | 0 |
| Second Order Optimization for Adversarial Robustness and Interpretability | | 0 |
| Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models | | 0 |
| Dissecting Local Properties of Adversarial Examples | | 0 |
| Can Language Models be Instructed to Protect Personal Information? | | 0 |
| Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation | | 0 |
| Distilled Agent DQN for Provable Adversarial Robustness | | 0 |
| Distilling Adversarial Robustness Using Heterogeneous Teachers | | 0 |
| Can Implicit Bias Imply Adversarial Robustness? | | 0 |
| Does Adversarial Robustness Really Imply Backdoor Vulnerability? | | 0 |
| SecPE: Secure Prompt Ensembling for Private and Robust Large Language Models | | 0 |
| Secure Diagnostics: Adversarial Robustness Meets Clinical Interpretability | | 0 |
Page 61 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
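For mean Corruption Error (mCE), lower is better: following the ImageNet-C protocol, a model's error rates across the severity levels of each corruption type are summed, normalized by a baseline model's summed errors (AlexNet in the original benchmark), and averaged over corruption types. A minimal sketch, with invented error rates purely for illustration:

```python
def mce(model_err, baseline_err):
    """mean Corruption Error: model_err and baseline_err map each
    corruption name to a list of error rates, one per severity level.
    Per corruption, summed model errors are normalized by the baseline's
    summed errors; mCE averages these ratios over corruptions, x100."""
    ces = [sum(model_err[c]) / sum(baseline_err[c]) for c in model_err]
    return 100.0 * sum(ces) / len(ces)

# Invented numbers for two corruption types at three severity levels.
model_err = {"gaussian_noise": [0.2, 0.3, 0.4], "fog": [0.1, 0.2, 0.3]}
base_err = {"gaussian_noise": [0.4, 0.5, 0.6], "fog": [0.3, 0.4, 0.5]}
print(mce(model_err, base_err))  # → 55.0
```

Because each corruption is normalized by the baseline's difficulty on that same corruption, an mCE below 100 means the model degrades less than the baseline under corruption.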

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |