SOTAVerified

Adversarial Robustness

Adversarial robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks, i.e., inputs deliberately perturbed to induce incorrect predictions.
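As a concrete illustration of the kind of attack these benchmarks evaluate, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression model; the weights, input, and epsilon are made-up values for demonstration, not taken from any paper listed below.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """One-step FGSM attack on a logistic-regression model.

    Loss L = log(1 + exp(-y * w.x)); its gradient w.r.t. x is
    -y * sigmoid(-y * w.x) * w. FGSM perturbs x by eps in the
    direction of the sign of that gradient.
    """
    margin = y * np.dot(w, x)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad)

# Toy model and input (illustrative values only).
w = np.array([1.0, -2.0])   # fixed classifier weights
x = np.array([0.3, 0.1])    # clean input, true label y = +1
y = 1.0

x_adv = fgsm(x, y, w, eps=0.2)
print(np.sign(w @ x))       # prints 1.0  (clean input classified correctly)
print(np.sign(w @ x_adv))   # prints -1.0 (small perturbation flips the prediction)
```

The robustness numbers reported on this page measure accuracy under attacks of this general form (typically stronger, multi-step variants such as PGD).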

Papers

Showing 251–275 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications | Code | 1 |
| CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection | Code | 1 |
| Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference | Code | 1 |
| Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning | Code | 1 |
| Composite Adversarial Attacks | Code | 1 |
| Using Feature Alignment Can Improve Clean Average Precision and Adversarial Robustness in Object Detection | Code | 1 |
| On the Trade-off between Adversarial and Backdoor Robustness | Code | 1 |
| Regularization with Latent Space Virtual Adversarial Training | Code | 1 |
| A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning | Code | 1 |
| SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher | Code | 1 |
| Adversarial Image Color Transformations in Explicit Color Filter Space | Code | 1 |
| Robust Pre-Training by Adversarial Contrastive Learning | Code | 1 |
| RobustBench: a standardized adversarial robustness benchmark | Code | 1 |
| Shape-Texture Debiased Neural Network Training | Code | 1 |
| Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | Code | 1 |
| Bag of Tricks for Adversarial Training | Code | 1 |
| Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup | Code | 1 |
| Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems | Code | 1 |
| Neural Networks with Recurrent Generative Feedback | Code | 1 |
| Certifiably Adversarially Robust Detection of Out-of-Distribution Data | Code | 1 |
| Multitask Learning Strengthens Adversarial Robustness | Code | 1 |
| Understanding Object Detection Through An Adversarial Lens | Code | 1 |
| Improving Adversarial Robustness by Enforcing Local and Global Compactness | Code | 1 |
| RobFR: Benchmarking Adversarial Robustness on Face Recognition | Code | 1 |
| Proper Network Interpretability Helps Adversarial Robustness in Classification | Code | 1 |
Page 11 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
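For reference, mean Corruption Error (mCE), as defined for the ImageNet-C benchmark, averages a model's per-corruption error rates after normalizing each by a baseline model's error on the same corruption (AlexNet in the original ImageNet-C setup); lower is better. A minimal sketch with made-up error rates, not the numbers in the table above:

```python
def mce(model_err, baseline_err):
    """mean Corruption Error: average over corruptions of the model's
    error rate divided by a baseline model's error rate, as a percentage."""
    ratios = [model_err[c] / baseline_err[c] for c in model_err]
    return 100.0 * sum(ratios) / len(ratios)

# Severity-averaged error rates in percent (illustrative values only).
model_err    = {"gaussian_noise": 40.0, "motion_blur": 50.0}
baseline_err = {"gaussian_noise": 80.0, "motion_blur": 100.0}

print(mce(model_err, baseline_err))  # prints 50.0
```

Normalizing by a baseline keeps hard corruptions (where every model errs often) from dominating the average.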
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |