SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how well machine learning models withstand adversarial attacks: inputs deliberately perturbed to induce incorrect predictions.
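As a minimal sketch of what such an attack looks like, the snippet below implements the fast gradient sign method (FGSM), one of the most common attacks, against a toy logistic-regression model. The two-feature model, its weights, and the eps value are illustrative assumptions, not taken from any paper listed here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, grad, eps):
    # Fast Gradient Sign Method: move each input coordinate a step of
    # size eps in the direction that increases the loss.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear "model": p(y=1 | x) = sigmoid(w . x); true label y = 1.
w = [2.0, -1.0]
x = [0.5, 0.5]
p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# For binary cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w.
grad_x = [(p - 1.0) * wi for wi in w]

x_adv = fgsm_perturb(x, grad_x, eps=0.3)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
print(round(p, 3), round(p_adv, 3))  # the model's confidence in the true class drops
```

A robust model would keep `p_adv` close to `p` for small `eps`; the benchmarks below report accuracy under attacks of exactly this kind (and stronger ones such as PGD).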

Papers

Showing 901-925 of 1746 papers

Title | Status | Hype
Removing Batch Normalization Boosts Adversarial Training | Code | 1
IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound | Code | 0
Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models | Code | 0
Increasing Confidence in Adversarial Robustness Evaluations | - | 0
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective | - | 0
Robustness of Explanation Methods for NLP Models | - | 0
Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum | Code | 1
(Certified!!) Adversarial Robustness for Free! | Code | 1
Towards Adversarial Attack on Vision-Language Pre-training Models | Code | 1
On the Limitations of Stochastic Pre-processing Defenses | Code | 0
Demystifying the Adversarial Robustness of Random Transformation Defenses | Code | 0
Adversarial Robustness is at Odds with Lazy Training | - | 0
Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification | - | 0
Understanding Robust Overfitting of Adversarial Training and Beyond | Code | 1
Analysis and Extensions of Adversarial Training for Video Classification | Code | 0
Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises | Code | 0
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | Code | 0
Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning | Code | 0
Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning | Code | 0
Efficiently Training Low-Curvature Neural Networks | Code | 0
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO | Code | 0
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations | Code | 0
Defending Adversarial Examples by Negative Correlation Ensemble | Code | 0
Improving the Adversarial Robustness of NLP Models by Information Bottleneck | Code | 0
Page 37 of 70

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | - | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | - | Unverified
3 | T5 (single model) | Accuracy | 0.57 | - | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | - | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | - | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | - | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | - | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | - | Unverified
9 | BERT (single model) | Accuracy | 0.34 | - | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | - | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | - | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | - | Unverified
4 | GLOT-DR | Accuracy | 84.13 | - | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | - | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | - | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | - | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | - | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | - | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | - | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | - | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | - | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | - | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | - | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | - | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | - | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | - | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | - | Unverified