SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks.
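As a concrete illustration of the attacks this category covers, the fast gradient sign method (FGSM) perturbs an input in the direction of the sign of the loss gradient to increase the model's loss. The sketch below applies it to a toy logistic model; the weights, input, and epsilon are invented for illustration and do not come from any system on this leaderboard.

```python
import numpy as np

# Minimal FGSM sketch on a logistic-regression "model" (illustrative only;
# weights and input are made up, not taken from any benchmarked system).
rng = np.random.default_rng(0)
w = rng.normal(size=4)          # model weights
x = rng.normal(size=4)          # clean input
y = 1.0                         # true label (class 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy loss w.r.t. the input x:
# dL/dx = (sigmoid(w.x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: add an eps-sized step in the direction of the gradient's sign,
# which maximally increases the loss under an l_inf budget of eps.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

clean_conf = sigmoid(w @ x)      # confidence in class 1 on the clean input
adv_conf = sigmoid(w @ x_adv)    # confidence after the attack (strictly lower)
print(clean_conf, adv_conf)
```

Because the true label is 1, following the loss gradient always lowers the model's confidence in class 1 while keeping every coordinate of the perturbation within the eps budget.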

Papers

Showing 201–250 of 1,746 papers

| Title | Status | Hype |
|---|---|---|
| Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | Code | 1 |
| When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? | Code | 1 |
| HypMix: Hyperbolic Interpolative Data Augmentation | Code | 1 |
| Holistic Deep Learning | Code | 1 |
| Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks | Code | 1 |
| A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification | Code | 1 |
| Improving Robustness using Generated Data | Code | 1 |
| Adversarial Attacks on ML Defense Models Competition | Code | 1 |
| Explainability-Aware One Point Attack for Point Cloud Neural Networks | Code | 1 |
| The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks | Code | 1 |
| Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks | Code | 1 |
| Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs | Code | 1 |
| How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | Code | 1 |
| RobustART: Benchmarking Robustness on Architecture Design and Training Techniques | Code | 1 |
| Generalized Real-World Super-Resolution through Adversarial Robustness | Code | 1 |
| Are socially-aware trajectory prediction models really socially-aware? | Code | 1 |
| AdvDrop: Adversarial Attack to DNNs by Dropping Information | Code | 1 |
| Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better | Code | 1 |
| AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning | Code | 1 |
| Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100 | Code | 1 |
| AdvRush: Searching for Adversarially Robust Neural Architectures | Code | 1 |
| Enhancing Adversarial Robustness via Test-time Transformation Ensembling | Code | 1 |
| WaveCNet: Wavelet Integrated CNNs to Suppress Aliasing Effect for Noise-Robust Image Classification | Code | 1 |
| Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers | Code | 1 |
| Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients | Code | 1 |
| RAILS: A Robust Adversarial Immune-inspired Learning System | Code | 1 |
| Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | Code | 1 |
| Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning | Code | 1 |
| Adversarial Visual Robustness by Causal Intervention | Code | 1 |
| Probabilistic Margins for Instance Reweighting in Adversarial Training | Code | 1 |
| CausalAdv: Adversarial Robustness through the Lens of Causality | Code | 1 |
| Reliable Adversarial Distillation with Unreliable Teachers | Code | 1 |
| Adversarial Attack and Defense in Deep Ranking | Code | 1 |
| Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness | Code | 1 |
| Adversarial Robustness against Multiple and Single l_p-Threat Models via Quick Fine-Tuning of Robust Classifiers | Code | 1 |
| Skew Orthogonal Convolutions | Code | 1 |
| An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks | Code | 1 |
| Random Noise Defense Against Query-Based Black-Box Attacks | Code | 1 |
| Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? | Code | 1 |
| Orthogonalizing Convolutional Layers with the Cayley Transform | Code | 1 |
| Adversarial Robustness under Long-Tailed Distribution | Code | 1 |
| On the Adversarial Robustness of Vision Transformers | Code | 1 |
| Drop-Bottleneck: Learning Discrete Compressed Representation for Noise-Robust Exploration | Code | 1 |
| Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond | Code | 1 |
| Generating Adversarial Computer Programs using Optimized Obfuscations | Code | 1 |
| A Unified Game-Theoretic Interpretation of Adversarial Robustness | Code | 1 |
| Improving Adversarial Robustness via Channel-wise Activation Suppressing | Code | 1 |
| Consistency Regularization for Adversarial Robustness | Code | 1 |
| Fixing Data Augmentation to Improve Adversarial Robustness | Code | 1 |
| On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | — | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | — | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | — | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | — | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | — | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | — | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | — | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | — | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | — | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | — | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | — | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | — | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | — | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | — | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | — | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | — | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | — | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | — | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | — | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | — | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | — | Unverified |
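For reference, mean Corruption Error (mCE) as defined by the ImageNet-C benchmark is a baseline-normalized average of per-corruption error rates, so lower is better. A minimal sketch of the computation follows; every error rate in it is an invented placeholder, not a value from this leaderboard.

```python
import numpy as np

# mCE sketch following the ImageNet-C definition: for each corruption c,
# CE_c = (sum of the model's errors over severities 1..5) divided by the
# same sum for a fixed baseline model (AlexNet in the original benchmark);
# mCE is the mean of CE_c over corruptions. All numbers are placeholders.
model_err = {                    # corruption -> error rate at severities 1..5
    "gaussian_noise": [0.30, 0.40, 0.55, 0.70, 0.80],
    "motion_blur":    [0.25, 0.35, 0.50, 0.60, 0.70],
}
baseline_err = {                 # baseline model's error rates
    "gaussian_noise": [0.45, 0.60, 0.75, 0.85, 0.90],
    "motion_blur":    [0.40, 0.55, 0.65, 0.75, 0.85],
}

ce = {c: sum(model_err[c]) / sum(baseline_err[c]) for c in model_err}
mce = 100 * np.mean(list(ce.values()))   # conventionally reported as a percent
print(round(mce, 1))                     # → 76.2
```

The normalization means an mCE of 100 matches the baseline's corruption robustness, which is why values well below 100 (as in the table above) indicate improvement.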
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | — | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | — | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | — | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | — | Unverified |