SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates the vulnerability of machine learning models to various types of adversarial attacks.
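To make concrete what such an evaluation measures, here is a minimal sketch of one FGSM-style attack (the Fast Gradient Sign Method) against a toy linear classifier. The weights, input, and epsilon budget below are illustrative assumptions, not values taken from any paper or benchmark on this page.

```python
# Minimal FGSM sketch against a toy linear classifier (predict +1 if w.x > 0).
# All numbers are made up for illustration.

def sign(v):
    return (v > 0) - (v < 0)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def fgsm(x, grad, epsilon):
    """One Fast Gradient Sign Method step: move each coordinate of x
    by epsilon in the direction of the loss gradient's sign."""
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

w = [1.0, -2.0, 0.5]   # toy classifier weights
x = [0.3, -0.2, 0.1]   # clean input, scored positive -> predicted +1

# For a linear score w.x, pushing the score down means following gradient -w.
x_adv = fgsm(x, [-wi for wi in w], epsilon=0.3)

print(dot(w, x))      # positive: clean input classified +1
print(dot(w, x_adv))  # negative: prediction flipped within an L-inf budget of 0.3
```

A model's robust accuracy at a given epsilon is simply clean accuracy recomputed on such perturbed inputs; stronger evaluations (e.g. the parameter-free attack ensembles listed below) iterate and diversify this basic step.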

Papers

Showing 301–350 of 1,746 papers

| Title | Status | Hype |
|---|---|---|
| Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization | Code | 1 |
| Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks | Code | 1 |
| Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness | Code | 1 |
| Attacks Which Do Not Kill Training Make Adversarial Learning Stronger | Code | 1 |
| Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization | Code | 1 |
| Hold me tight! Influence of discriminative features on deep network boundaries | Code | 1 |
| Adversarial Robustness for Code | Code | 1 |
| Random Smoothing Might be Unable to Certify ℓ∞ Robustness for High-Dimensional Images | Code | 1 |
| Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness | Code | 1 |
| Towards Sharper First-Order Adversary with Quantized Gradients | Code | 1 |
| Adversarial Robustness Against the Union of Multiple Threat Models | Code | 1 |
| Explainability and Adversarial Robustness for RNNs | Code | 1 |
| Universal Adversarial Robustness of Texture and Shape-Biased Models | Code | 1 |
| Adversarial Robustness Against the Union of Multiple Perturbation Models | Code | 1 |
| MNIST-C: A Robustness Benchmark for Computer Vision | Code | 1 |
| Adversarial Robustness as a Prior for Learned Representations | Code | 1 |
| Adversarially Robust Distillation | Code | 1 |
| Wasserstein Adversarial Examples via Projected Sinkhorn Iterations | Code | 1 |
| On Evaluating Adversarial Robustness | Code | 1 |
| Certified Adversarial Robustness via Randomized Smoothing | Code | 1 |
| Improving Adversarial Robustness via Promoting Ensemble Diversity | Code | 1 |
| Theoretically Principled Trade-off between Robustness and Accuracy | Code | 1 |
| Robustness May Be at Odds with Accuracy | Code | 1 |
| Towards Deep Learning Models Resistant to Adversarial Attacks | Code | 1 |
| Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach | Code | 0 |
| Tail-aware Adversarial Attacks: A Distributional Approach to Efficient LLM Jailbreaking | | 0 |
| Evaluating the Evaluators: Trust in Adversarial Robustness Tests | | 0 |
| Rectifying Adversarial Sample with Low Entropy Prior for Test-Time Defense | | 0 |
| Is Reasoning All You Need? Probing Bias in the Age of Reasoning Language Models | | 0 |
| PRISON: Unmasking the Criminal Potential of Large Language Models | | 0 |
| NAP-Tuning: Neural Augmented Prompt Tuning for Adversarially Robust Vision-Language Models | | 0 |
| Intriguing Frequency Interpretation of Adversarial Robustness for CNNs and ViTs | | 0 |
| Canonical Latent Representations in Conditional Diffusion Models | | 0 |
| Towards Class-wise Fair Adversarial Training via Anti-Bias Soft Label Distillation | Code | 0 |
| The interplay of robustness and generalization in quantum machine learning | Code | 0 |
| ProARD: progressive adversarial robustness distillation: provide wide range of robust students | Code | 0 |
| Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability | Code | 0 |
| RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors | Code | 0 |
| Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning | | 0 |
| Dynamic Epsilon Scheduling: A Multi-Factor Adaptive Perturbation Budget for Adversarial Training | | 0 |
| SafeGenes: Evaluating the Adversarial Robustness of Genomic Foundation Models | | 0 |
| Speech Unlearning | | 0 |
| Model Unlearning via Sparse Autoencoder Subspace Guided Projections | | 0 |
| A Flat Minima Perspective on Understanding Augmentations and Model Robustness | | 0 |
| On the Scaling of Robustness and Effectiveness in Dense Retrieval | | 0 |
| The Butterfly Effect in Pathology: Exploring Security in Pathology Foundation Models | Code | 0 |
| How Do Diffusion Models Improve Adversarial Robustness? | | 0 |
| Are classical deep neural networks weakly adversarially robust? | | 0 |
| Erasing Concepts, Steering Generations: A Comprehensive Survey of Concept Suppression | | 0 |
| Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | | 0 |
Page 7 of 35

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |