SOTAVerified

Adversarial Robustness

Adversarial robustness measures how well machine learning models withstand adversarial attacks: small, deliberately crafted input perturbations designed to cause incorrect predictions. This page tracks papers and benchmark results in the area.
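Many of the attacks studied in these papers are gradient-based input perturbations. As a minimal sketch (a toy example, not taken from any paper listed here), the Fast Gradient Sign Method (FGSM) perturbs an input in the direction that increases the model's loss; on a hand-picked logistic-regression model with assumed weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability that x belongs to the positive class.
    return sigmoid(w @ x + b)

def loss_grad_x(w, b, x, y):
    # Gradient of binary cross-entropy w.r.t. the input x
    # for logistic regression: (p - y) * w
    p = predict(w, b, x)
    return (p - y) * w

def fgsm(w, b, x, y, eps):
    # FGSM: step of size eps along the sign of the input gradient,
    # i.e. the direction that increases the loss fastest in L-inf norm.
    return x + eps * np.sign(loss_grad_x(w, b, x, y))

# Toy weights and input (illustrative assumptions).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.3, 0.4])  # clean input with true label y = 1
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.3)

print(predict(w, b, x))      # confidence on the clean input
print(predict(w, b, x_adv))  # confidence drops under the attack
```

A robustness benchmark then reports accuracy on such perturbed inputs (often under stronger iterative attacks such as PGD) rather than on clean data.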

Papers

Showing 1076–1100 of 1746 papers

Title | Status | Hype
Robustness of Explanation Methods for NLP Models | – | 0
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective | – | 0
On the Limitations of Stochastic Pre-processing Defenses | Code | 0
Demystifying the Adversarial Robustness of Random Transformation Defenses | Code | 0
Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification | – | 0
Adversarial Robustness is at Odds with Lazy Training | – | 0
Analysis and Extensions of Adversarial Training for Video Classification | Code | 0
Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises | Code | 0
Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning | Code | 0
Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning | Code | 0
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | Code | 0
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO | Code | 0
Efficiently Training Low-Curvature Neural Networks | Code | 0
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations | Code | 0
Improving the Adversarial Robustness of NLP Models by Information Bottleneck | Code | 0
Defending Adversarial Examples by Negative Correlation Ensemble | Code | 0
Fundamental Limits in Formal Verification of Message-Passing Neural Networks | – | 0
Wavelet Regularization Benefits Adversarial Training | Code | 0
LADDER: Latent Boundary-guided Adversarial Training | Code | 0
Improving Adversarial Robustness by Putting More Regularizations on Less Robust Samples | Code | 0
Building Robust Ensembles via Margin Boosting | Code | 0
A Robust Backpropagation-Free Framework for Images | Code | 0
Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection | – | 0
The robust way to stack and bag: the local Lipschitz way | – | 0
Sequential Bayesian Neural Subnetwork Ensembles | – | 0
Page 44 of 70

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | – | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | – | Unverified
3 | T5 (single model) | Accuracy | 0.57 | – | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | – | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | – | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | – | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | – | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | – | Unverified
9 | BERT (single model) | Accuracy | 0.34 | – | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | – | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | – | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | – | Unverified
4 | GLOT-DR | Accuracy | 84.13 | – | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | – | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | – | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | – | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | – | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | – | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | – | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | – | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | – | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | – | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | – | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | – | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | – | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | – | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | – | Unverified