SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks.

Papers

Showing 1101–1150 of 1746 papers

| Title | Status | Hype |
| --- | --- | --- |
| CalFAT: Calibrated Federated Adversarial Training with Label Skewness | Code | 0 |
| Level Up with ML Vulnerability Identification: Leveraging Domain Constraints in Feature Space for Robust Android Malware Detection | Code | 0 |
| Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models | | 0 |
| Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction | | 0 |
| Functional Network: A Novel Framework for Interpretability of Deep Neural Networks | | 0 |
| Squeeze Training for Adversarial Robustness | Code | 0 |
| Hierarchical Distribution-Aware Testing of Deep Learning | Code | 0 |
| Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection | | 0 |
| Evaluating Membership Inference Through Adversarial Robustness | Code | 0 |
| Sibylvariant Transformations for Robust Text Classification | Code | 0 |
| Can collaborative learning be private, robust and scalable? | | 0 |
| Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness | | 0 |
| Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs | Code | 0 |
| CE-based white-box adversarial attacks will not work using super-fitting | | 0 |
| Rethinking Classifier and Adversarial Attack | | 0 |
| MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer | | 0 |
| Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples | | 0 |
| Adversarial Fine-tune with Dynamically Regulated Adversary | | 0 |
| On Fragile Features and Batch Normalization in Adversarial Training | | 0 |
| Testing robustness of predictions of trained classifiers against naturally occurring perturbations | | 0 |
| Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning | | 0 |
| From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks | | 0 |
| Q-TART: Quickly Training for Adversarial Robustness and in-Transferability | | 0 |
| Planting Undetectable Backdoors in Machine Learning Models | | 0 |
| A Simple Approach to Adversarial Robustness in Few-shot Image Classification | Code | 0 |
| Evaluating the Adversarial Robustness for Fourier Neural Operators | | 0 |
| Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning | | 0 |
| Adversarial Robustness through the Lens of Convolutional Filters | Code | 0 |
| SkeleVision: Towards Adversarial Resiliency of Person Tracking with Multi-Task Learning | Code | 0 |
| Scalable Whitebox Attacks on Tree-based Models | | 0 |
| On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes | | 0 |
| Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training | Code | 0 |
| Provable Adversarial Robustness for Fractional Lp Threat Models | Code | 0 |
| Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness | | 0 |
| Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis | | 0 |
| On the benefits of knowledge distillation for adversarial robustness | | 0 |
| Perception Over Time: Temporal Dynamics for Robust Image Understanding | | 0 |
| Hybrid Deep Learning Model using SPCAGAN Augmentation for Insider Threat Analysis | | 0 |
| Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers | Code | 0 |
| Neuro-Symbolic Verification of Deep Neural Networks | Code | 0 |
| Global-Local Regularization Via Distributional Robustness | Code | 0 |
| Adversarial robustness of sparse local Lipschitz predictors | | 0 |
| Understanding Adversarial Robustness from Feature Maps of Convolutional Layers | Code | 0 |
| Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling | Code | 0 |
| Transferring Adversarial Robustness Through Robust Representation Matching | Code | 0 |
| Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness | | 0 |
| Exploring Adversarially Robust Training for Unsupervised Domain Adaptation | Code | 0 |
| Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition | | 0 |
| Unreasonable Effectiveness of Last Hidden Layer Activations for Adversarial Robustness | | 0 |
| StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection | | 0 |
Page 23 of 35

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |