SOTAVerified

Adversarial Robustness

Adversarial robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks: inputs deliberately perturbed to induce incorrect predictions.
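Many of the attacks evaluated in this area are gradient-based perturbations such as FGSM and its iterated variant PGD. As a minimal illustration (a hypothetical toy linear model, not taken from any listed paper), FGSM moves each input coordinate by a fixed step in the direction that increases the loss:

```python
import numpy as np

def logistic_loss_grad(w, x, y):
    """Gradient of the binary cross-entropy w.r.t. the INPUT x
    for a linear model with sigmoid output."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # sigmoid(w . x)
    return (p - y) * w                        # dL/dx

def fgsm(w, x, y, epsilon):
    """Fast Gradient Sign Method: one L-infinity step of size epsilon
    in the direction that increases the loss on (x, y)."""
    return x + epsilon * np.sign(logistic_loss_grad(w, x, y))

w = np.array([1.0, -2.0, 0.5])   # toy model weights (illustrative)
x = np.array([0.2, 0.1, -0.3])   # clean input
x_adv = fgsm(w, x, y=1.0, epsilon=0.1)

# the perturbation stays inside the epsilon ball in the L-infinity norm
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
```

A robustness benchmark then reports accuracy on such perturbed inputs rather than on clean ones; PGD repeats this step several times with projection back onto the epsilon ball.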

Papers

Showing 1301–1325 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective | | 0 |
| Towards Robust Vision Transformer via Masked Adaptive Ensemble | | 0 |
| Reframing Neural Networks: Deep Structure in Overcomplete Representations | | 0 |
| A Curious Case of Remarkable Resilience to Gradient Attacks via Fully Convolutional and Differentiable Front End with a Skip Connection | | 0 |
| A Robust Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via (De)Randomized Smoothing | | 0 |
| Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness | | 0 |
| Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training | | 0 |
| Relating Adversarially Robust Generalization to Flat Minima | | 0 |
| Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence | | 0 |
| Relaxing Graph Transformers for Adversarial Attacks | | 0 |
| Releasing Inequality Phenomena in L∞-Adversarial Training via Input Gradient Distillation | | 0 |
| Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications | | 0 |
| Reliable and Efficient Evaluation of Adversarial Robustness for Deep Hashing-Based Retrieval | | 0 |
| Adversarial Robustness May Be at Odds With Simplicity | | 0 |
| Towards Stable and Robust AdderNets | | 0 |
| Removing Adversarial Noise in Class Activation Feature Space | | 0 |
| Adversarial Robustness is at Odds with Lazy Training | | 0 |
| Removing Out-of-Distribution Data Improves Adversarial Robustness | | 0 |
| Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning | | 0 |
| XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars | | 0 |
| Rerouting LLM Routers | | 0 |
| Residual Error: a New Performance Measure for Adversarial Robustness | | 0 |
| Resilience to Multiple Attacks via Adversarially Trained MIMO Ensembles | | 0 |
| Revisiting and Advancing Adversarial Training Through A Simple Baseline | | 0 |
| Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |

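The mean Corruption Error (mCE) metric reported above is typically computed by normalizing a model's per-corruption error rates against a fixed baseline model's error rates on the same corruptions (AlexNet in the original ImageNet-C formulation), then averaging; lower is better. A hedged sketch with illustrative numbers (not taken from the table):

```python
def mce(model_errors, baseline_errors):
    """Mean Corruption Error: average of per-corruption error ratios
    relative to a baseline model, scaled to a percentage."""
    ratios = [m / b for m, b in zip(model_errors, baseline_errors)]
    return 100.0 * sum(ratios) / len(ratios)

# Hypothetical error rates for three corruption types (e.g. noise,
# blur, weather), averaged over severities beforehand.
model = [0.40, 0.55, 0.30]
baseline = [0.60, 0.70, 0.50]
print(round(mce(model, baseline), 1))  # prints 68.4
```

Because of the baseline normalization, an mCE of 100 means "as fragile as the baseline", which is why the raw values above are not directly comparable to plain error rates.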
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |