SOTAVerified

Adversarial Robustness

Adversarial robustness measures how well machine learning models withstand adversarial attacks: inputs modified with small, deliberately crafted perturbations that cause incorrect predictions. The papers below cover both attacks that expose such vulnerabilities and defenses that mitigate them.
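As a concrete illustration (not drawn from any specific paper on this page), here is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), the canonical l_inf-bounded attack underlying many of the threat models listed below. The linear classifier, weights, and inputs are hypothetical and chosen only to show how a tiny perturbation can flip a prediction.

```python
import numpy as np

# Hypothetical linear classifier: predict class 1 when w @ x > 0.
w = np.array([1.0, -2.0, 0.5])

def score(x):
    return float(w @ x)

def fgsm_perturb(x, grad, epsilon):
    # FGSM: a single l_inf-bounded step in the sign of the loss gradient,
    # so every coordinate moves by exactly +/- epsilon.
    return x + epsilon * np.sign(grad)

x_clean = np.array([0.3, 0.1, 0.2])   # score > 0, classified as class 1
# For this linear model, increasing the loss on class 1 means decreasing
# the score, so the loss gradient with respect to x is -w.
x_adv = fgsm_perturb(x_clean, -w, epsilon=0.15)

print(score(x_clean))  # positive: class 1
print(score(x_adv))    # negative: prediction flipped by the attack
```

Even though each input coordinate changed by at most 0.15, the predicted class flips; robustness benchmarks quantify how often such perturbations succeed under a fixed budget epsilon.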

Papers

Showing 251-300 of 1746 papers

| Title | Status | Hype |
|-------|--------|------|
| Adversarial Robustness against Multiple and Single l_p-Threat Models via Quick Fine-Tuning of Robust Classifiers | Code | 1 |
| Adversarial Attacks on Graph Classification via Bayesian Optimisation | Code | 1 |
| Adversarial Robustness Against the Union of Multiple Perturbation Models | Code | 1 |
| Enhancing Adversarial Robustness via Score-Based Optimization | Code | 1 |
| Adversarial Robustness as a Prior for Learned Representations | Code | 1 |
| Explainability-Aware One Point Attack for Point Cloud Neural Networks | Code | 1 |
| Fast and Low-Cost Genomic Foundation Models via Outlier Removal | Code | 1 |
| Evaluating the Adversarial Robustness of Adaptive Test-time Defenses | Code | 1 |
| Adversarial Attacks on Graph Classifiers via Bayesian Optimisation | Code | 1 |
| ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification | Code | 1 |
| Exploring Adversarial Robustness of Deep State Space Models | Code | 1 |
| Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness | Code | 1 |
| Adversarial Robustness Against the Union of Multiple Threat Models | Code | 1 |
| Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness | Code | 1 |
| Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning | Code | 1 |
| A Self-supervised Approach for Adversarial Robustness | Code | 1 |
| Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs | Code | 1 |
| Fixing Data Augmentation to Improve Adversarial Robustness | Code | 1 |
| Adversarial Attacks on ML Defense Models Competition | Code | 1 |
| FlowPure: Continuous Normalizing Flows for Adversarial Purification | Code | 1 |
| Explainability and Adversarial Robustness for RNNs | Code | 1 |
| Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients | Code | 1 |
| Generalized Real-World Super-Resolution through Adversarial Robustness | Code | 1 |
| Attacks Which Do Not Kill Training Make Adversarial Learning Stronger | Code | 1 |
| A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space | Code | 1 |
| HypMix: Hyperbolic Interpolative Data Augmentation | Code | 1 |
| Benchmarking Adversarial Robustness on Image Classification | Code | 1 |
| A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow | Code | 1 |
| A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion | Code | 1 |
| HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks | Code | 1 |
| Holistic Deep Learning | Code | 1 |
| HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds | Code | 1 |
| Bag of Tricks for Adversarial Training | Code | 1 |
| BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks | Code | 1 |
| ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection | Code | 1 |
| How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | Code | 1 |
| Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning | Code | 1 |
| Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples | Code | 1 |
| Adversarial Contrastive Learning via Asymmetric InfoNCE | Code | 1 |
| ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches | Code | 1 |
| Improving Adversarial Robustness by Enforcing Local and Global Compactness | Code | 1 |
| Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting | Code | 1 |
| A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification | Code | 1 |
| Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? | Code | 1 |
| Mitigating Accuracy-Robustness Trade-off via Balanced Multi-Teacher Adversarial Distillation | Code | 1 |
| Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization | Code | 1 |
| Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach | Code | 1 |
| Are socially-aware trajectory prediction models really socially-aware? | Code | 1 |
| Pruning Adversarially Robust Neural Networks without Adversarial Examples | Code | 1 |
| Towards Physically Realizable Adversarial Attacks in Embodied Vision Navigation | Code | 1 |
Page 6 of 35

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |