SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to adversarial attacks: inputs deliberately perturbed, usually imperceptibly, to cause incorrect predictions.
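
As a minimal illustration of what an adversarial attack is, the sketch below implements the fast gradient sign method (FGSM) against a toy logistic-regression classifier. All weights and inputs are random placeholders for illustration only; none of this code comes from the papers listed below.

```python
import numpy as np

# FGSM sketch on a toy logistic-regression "model".
# Weights and data are random placeholders, purely illustrative.

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # fixed linear classifier weights
b = 0.1
x = rng.normal(size=8)   # a clean input (e.g. a flattened image)
y = 1.0                  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x_):
    # Binary cross-entropy of the classifier's prediction on input x_
    p = sigmoid(w @ x_ + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the INPUT (not the weights):
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM step: move every input dimension by epsilon in the
# direction that increases the loss (Goodfellow et al., 2015).
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(f"clean loss: {bce_loss(x):.4f}  adversarial loss: {bce_loss(x_adv):.4f}")
```

The perturbation is bounded in the L-infinity norm by `eps`, yet the loss on `x_adv` is strictly higher than on the clean input; evaluating a model's accuracy under such perturbations is what the benchmarks on this page measure.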

Papers

Showing 901–950 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| Removing Batch Normalization Boosts Adversarial Training | Code | 1 |
| IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound | Code | 0 |
| Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models | Code | 0 |
| Increasing Confidence in Adversarial Robustness Evaluations | | 0 |
| Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective | | 0 |
| Robustness of Explanation Methods for NLP Models | | 0 |
| Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum | Code | 1 |
| (Certified!!) Adversarial Robustness for Free! | Code | 1 |
| Towards Adversarial Attack on Vision-Language Pre-training Models | Code | 1 |
| On the Limitations of Stochastic Pre-processing Defenses | Code | 0 |
| Demystifying the Adversarial Robustness of Random Transformation Defenses | Code | 0 |
| Adversarial Robustness is at Odds with Lazy Training | | 0 |
| Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification | | 0 |
| Understanding Robust Overfitting of Adversarial Training and Beyond | Code | 1 |
| Analysis and Extensions of Adversarial Training for Video Classification | Code | 0 |
| Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises | Code | 0 |
| Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | Code | 0 |
| Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning | Code | 0 |
| Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning | Code | 0 |
| Efficiently Training Low-Curvature Neural Networks | Code | 0 |
| Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO | Code | 0 |
| Adversarial Vulnerability of Randomized Ensembles | Code | 1 |
| Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations | Code | 0 |
| Defending Adversarial Examples by Negative Correlation Ensemble | Code | 0 |
| Improving the Adversarial Robustness of NLP Models by Information Bottleneck | Code | 0 |
| Fundamental Limits in Formal Verification of Message-Passing Neural Networks | | 0 |
| CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1 |
| Wavelet Regularization Benefits Adversarial Training | Code | 0 |
| LADDER: Latent Boundary-guided Adversarial Training | Code | 0 |
| Building Robust Ensembles via Margin Boosting | Code | 0 |
| Improving Adversarial Robustness by Putting More Regularizations on Less Robust Samples | Code | 0 |
| A Robust Backpropagation-Free Framework for Images | Code | 0 |
| Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection | | 0 |
| The robust way to stack and bag: the local Lipschitz way | | 0 |
| Sequential Bayesian Neural Subnetwork Ensembles | | 0 |
| Level Up with ML Vulnerability Identification: Leveraging Domain Constraints in Feature Space for Robust Android Malware Detection | Code | 0 |
| CalFAT: Calibrated Federated Adversarial Training with Label Skewness | Code | 0 |
| Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models | | 0 |
| Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction | | 0 |
| Functional Network: A Novel Framework for Interpretability of Deep Neural Networks | | 0 |
| Squeeze Training for Adversarial Robustness | Code | 0 |
| Hierarchical Distribution-Aware Testing of Deep Learning | Code | 0 |
| Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection | | 0 |
| Evaluating Membership Inference Through Adversarial Robustness | Code | 0 |
| Sibylvariant Transformations for Robust Text Classification | Code | 0 |
| Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness | | 0 |
| Can collaborative learning be private, robust and scalable? | | 0 |
| CE-based white-box adversarial attacks will not work using super-fitting | | 0 |
| Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs | Code | 0 |
| FedNest: Federated Bilevel, Minimax, and Compositional Optimization | Code | 1 |
Page 19 of 35

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |