SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks — inputs deliberately perturbed to cause misclassification.
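As a minimal illustration of the kind of attack this category covers, here is a sketch of the Fast Gradient Sign Method (FGSM) on a toy logistic classifier. FGSM is one standard attack, not the method of any particular paper listed below; the weights and input here are random illustrative values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, epsilon):
    """Perturb x by epsilon in the direction that increases the loss.

    For binary cross-entropy with logit z = w @ x + b, the input
    gradient is dL/dx = (sigmoid(z) - y) * w; FGSM steps along its sign.
    """
    p = sigmoid(w @ x + b)        # predicted probability of class 1
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Toy setup: random linear classifier and a point with true label y = 1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_attack(x, y, w, b, epsilon=0.5)
clean_p = sigmoid(w @ x + b)
adv_p = sigmoid(w @ x_adv + b)
# The perturbation pushes the prediction away from the true label,
# so the model's confidence in the correct class drops.
print(clean_p, adv_p)
```

Attacks benchmarked on this page (PGD, black-box, gradient-free methods, etc.) generalize this idea: they search the epsilon-ball around an input for a point the model misclassifies.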

Papers

Showing 451–500 of 1746 papers

| Title | Status | Hype |
| --- | --- | --- |
| Global-Local Regularization Via Distributional Robustness | Code | 0 |
| Biologically Inspired Mechanisms for Adversarial Robustness | Code | 0 |
| Gradient-Free Adversarial Attacks for Bayesian Neural Networks | Code | 0 |
| Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach | Code | 0 |
| Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images | Code | 0 |
| Give me a hint: Can LLMs take a hint to solve math problems? | Code | 0 |
| Generating Adversarial Examples with Adversarial Networks | Code | 0 |
| Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness | Code | 0 |
| Role of Spatial Context in Adversarial Robustness for Object Detection | Code | 0 |
| Simple Post-Training Robustness Using Test Time Augmentations and Random Forest | Code | 0 |
| Enhancing Robustness in Incremental Learning with Adversarial Training | Code | 0 |
| A Deep Dive into Adversarial Robustness in Zero-Shot Learning | Code | 0 |
| Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense | Code | 0 |
| Generative Max-Mahalanobis Classifiers for Image Classification, Generation and More | Code | 0 |
| Clustering Effect of (Linearized) Adversarial Robust Models | Code | 0 |
| Squeeze Training for Adversarial Robustness | Code | 0 |
| GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models | Code | 0 |
| Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers | Code | 0 |
| Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations | Code | 0 |
| Gated Information Bottleneck for Generalization in Sequential Environments | Code | 0 |
| GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks | Code | 0 |
| Adversarial Robustness of VAEs across Intersectional Subgroups | Code | 0 |
| Adversarial Attack Generation Empowered by Min-Max Optimization | Code | 0 |
| Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness | Code | 0 |
| GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0 |
| Confidence Elicitation: A New Attack Vector for Large Language Models | Code | 0 |
| Learning Robust 3D Representation from CLIP via Dual Denoising | Code | 0 |
| Learning Robust and Privacy-Preserving Representations via Information Theory | Code | 0 |
| GridMix: Strong regularization through local context mapping | Code | 0 |
| Adversarial Robustness of Supervised Sparse Coding | Code | 0 |
| Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks | Code | 0 |
| Benchmarking Robust Self-Supervised Learning Across Diverse Downstream Tasks | Code | 0 |
| FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs | Code | 0 |
| Benchmarking Adversarial Robustness to Bias Elicitation in Large Language Models: Scalable Automated Assessment with LLM-as-a-Judge | Code | 0 |
| Adversarially Robust Decision Transformer | Code | 0 |
| Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients | Code | 0 |
| Improving Robustness with Adaptive Weight Decay | Code | 0 |
| Feature Statistics with Uncertainty Help Adversarial Robustness | Code | 0 |
| Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | Code | 0 |
| Metric Learning for Adversarial Robustness | Code | 0 |
| BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0 |
| MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness | Code | 0 |
| Bayesian Inference with Certifiable Adversarial Robustness | Code | 0 |
| DAD++: Improved Data-free Test Time Adversarial Defense | Code | 0 |
| Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective | Code | 0 |
| Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding | Code | 0 |
| Fast Adversarial Training with Smooth Convergence | Code | 0 |
| Data-free Defense of Black Box Models Against Adversarial Attacks | Code | 0 |
| Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms | Code | 0 |
| Feature Denoising for Improving Adversarial Robustness | Code | 0 |
Page 10 of 35

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |