SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates the vulnerability of machine learning models to various types of adversarial attacks.
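As a concrete illustration of the kind of attack these papers defend against, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model. The model, weights, and inputs are hypothetical examples, not taken from any paper in this list; the point is only the attack pattern: perturb the input in the direction of the sign of the loss gradient, within an L-infinity budget `eps`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against logistic regression.

    Returns an adversarial input within an L-inf ball of radius eps
    around x, moved in the direction that increases the BCE loss.
    """
    p = sigmoid(x @ w + b)            # model's predicted probability
    grad_x = (p - y) * w              # d(BCE loss)/dx for this model
    return x + eps * np.sign(grad_x)  # signed gradient step

# Toy demonstration: a confidently classified point (label 1) is
# pushed toward the decision boundary by the perturbation.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))
```

An adversarial-robustness benchmark then measures accuracy on such perturbed inputs rather than on the clean ones.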

Papers

Showing 1476–1500 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs | Code | 0 |
| Improving Adversarial Robustness via Unlabeled Out-of-Domain Data | — | 0 |
| On Saliency Maps and Adversarial Robustness | — | 0 |
| Achieving robustness in classification using optimal transport with hinge regularization | Code | 1 |
| Deterministic Gaussian Averaged Neural Networks | Code | 0 |
| A Self-supervised Approach for Adversarial Robustness | Code | 1 |
| Adversarial Feature Desensitization | Code | 0 |
| The Lipschitz Constant of Self-Attention | — | 0 |
| Consistency Regularization for Certified Robustness of Smoothed Classifiers | Code | 1 |
| Robust Face Verification via Disentangled Representations | Code | 0 |
| UFO-BLO: Unbiased First-Order Bilevel Optimization | — | 0 |
| Benchmarking Adversarial Robustness on Image Classification | Code | 1 |
| Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods | Code | 0 |
| Adversarial Robustness of Deep Convolutional Candlestick Learner | Code | 1 |
| Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning | — | 0 |
| Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack | — | 0 |
| Revisiting Role of Autoencoders in Adversarial Settings | — | 0 |
| Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data | Code | 1 |
| On Intrinsic Dataset Properties for Adversarial Machine Learning | Code | 1 |
| Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks | Code | 0 |
| Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective | Code | 1 |
| Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks | — | 0 |
| Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors | Code | 1 |
| Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness | — | 0 |
| Class-Aware Domain Adaptation for Improving Adversarial Robustness | — | 0 |
Page 60 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | — | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | — | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | — | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | — | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | — | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | — | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | — | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | — | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | — | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | — | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | — | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | — | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | — | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | — | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | — | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | — | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | — | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | — | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | — | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | — | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | — | Unverified |
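For readers unfamiliar with the mCE metric used above: as defined for the ImageNet-C benchmark, a model's top-1 error on each corruption type (summed over severity levels) is normalized by a baseline model's error on the same corruption (AlexNet in the original benchmark), and the normalized errors are averaged; lower is better. A minimal sketch with made-up illustrative error rates:

```python
import numpy as np

def mce(model_err, baseline_err):
    """mean Corruption Error in percent.

    model_err, baseline_err: arrays of shape (corruptions, severities)
    holding top-1 error rates for the evaluated and baseline models.
    """
    # Per-corruption CE: model errors summed over severities,
    # normalized by the baseline's summed errors.
    ce = model_err.sum(axis=1) / baseline_err.sum(axis=1)
    return 100.0 * ce.mean()

# Hypothetical numbers for two corruption types at three severities.
model_err = np.array([[0.40, 0.50, 0.60],
                      [0.30, 0.45, 0.55]])
baseline_err = np.array([[0.70, 0.80, 0.90],
                         [0.60, 0.75, 0.85]])
print(round(mce(model_err, baseline_err), 1))  # → 60.8
```

Because each corruption is normalized by the baseline, an mCE of 100 means "as fragile as the baseline"; the values in the table above (48–59.3) indicate models substantially more robust to corruptions than that reference.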
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | — | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | — | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | — | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | — | Unverified |