SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks, i.e., inputs deliberately perturbed to induce incorrect predictions.
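As a minimal illustration of the kind of attack these papers defend against, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression "model". This is a generic example, not the method of any listed paper; the weights and helper names (`w`, `b`, `fgsm_attack`) are hypothetical.

```python
import numpy as np

# Toy logistic-regression "classifier" with fixed, hypothetical weights.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # Binary cross-entropy for a single example.
    p = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_x(x, y, w, b):
    # Gradient of the loss w.r.t. the INPUT x (for logistic regression: (p - y) * w).
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm_attack(x, y, w, b, eps=0.1):
    # FGSM: take one eps-sized step in the sign of the input gradient,
    # i.e. the direction that locally increases the loss.
    return x + eps * np.sign(grad_x(x, y, w, b))

w = np.array([1.0, -2.0, 0.5])   # hypothetical trained weights
b = 0.0
x = np.array([0.2, -0.1, 0.4])   # clean input
y = 1.0                          # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.1)
print(loss(x, y, w, b), loss(x_adv, y, w, b))  # adversarial loss exceeds the clean loss
```

Robust-accuracy numbers like those in the leaderboards below are typically measured by running such attacks (FGSM, PGD, AutoAttack, etc.) over a test set and reporting the accuracy that survives.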

Papers

Showing 1101–1125 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| CalFAT: Calibrated Federated Adversarial Training with Label Skewness | Code | 0 |
| Level Up with ML Vulnerability Identification: Leveraging Domain Constraints in Feature Space for Robust Android Malware Detection | Code | 0 |
| Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models | | 0 |
| Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction | | 0 |
| Functional Network: A Novel Framework for Interpretability of Deep Neural Networks | | 0 |
| Squeeze Training for Adversarial Robustness | Code | 0 |
| Hierarchical Distribution-Aware Testing of Deep Learning | Code | 0 |
| Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection | | 0 |
| Evaluating Membership Inference Through Adversarial Robustness | Code | 0 |
| Sibylvariant Transformations for Robust Text Classification | Code | 0 |
| Can collaborative learning be private, robust and scalable? | | 0 |
| Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness | | 0 |
| Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs | Code | 0 |
| CE-based white-box adversarial attacks will not work using super-fitting | | 0 |
| Rethinking Classifier and Adversarial Attack | | 0 |
| MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer | | 0 |
| Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples | | 0 |
| Adversarial Fine-tune with Dynamically Regulated Adversary | | 0 |
| On Fragile Features and Batch Normalization in Adversarial Training | | 0 |
| Testing robustness of predictions of trained classifiers against naturally occurring perturbations | | 0 |
| Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning | | 0 |
| From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks | | 0 |
| Q-TART: Quickly Training for Adversarial Robustness and in-Transferability | | 0 |
| Planting Undetectable Backdoors in Machine Learning Models | | 0 |
| A Simple Approach to Adversarial Robustness in Few-shot Image Classification | Code | 0 |
Page 45 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |