SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to adversarial attacks: small, deliberately crafted input perturbations designed to cause incorrect predictions.
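To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks evaluated in this literature. The linear classifier, its weights, and the example input below are all hypothetical illustrations, not taken from any paper on this page: the attack nudges each input feature by ε in the direction that increases the loss, and even a small ε can flip the prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, x, y, eps):
    """One-step FGSM: shift each feature of x by eps in the direction
    that increases the logistic loss -log(sigmoid(y * w.x))."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    # gradient of the loss with respect to the input x
    grad = [-y * (1.0 - sigmoid(y * score)) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical linear classifier and a clean input with true label y = +1.
w = [2.0, -3.0, 1.0]
x = [0.5, 0.2, 0.1]
y = 1

predict = lambda v: 1 if sum(wi * vi for wi, vi in zip(w, v)) > 0 else -1
x_adv = fgsm(w, x, y, eps=0.1)
print(predict(x), predict(x_adv))  # prints "1 -1": the 0.1-bounded perturbation flips the label
```

An adversarial-robustness benchmark then reports accuracy on such perturbed inputs (under a fixed perturbation budget ε) rather than on clean inputs.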

Papers

Showing 1301–1325 of 1746 papers

Title | Status | Hype
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | Code | 0
Scaling Trends in Language Model Robustness | Code | 0
Exploring Adversarial Robustness of Deep Metric Learning | Code | 0
Rethinking Robust Contrastive Learning from the Adversarial Perspective | Code | 0
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations | Code | 0
Scaleable input gradient regularization for adversarial robustness | Code | 0
Analysis and Extensions of Adversarial Training for Video Classification | Code | 0
Feature Denoising for Improving Adversarial Robustness | Code | 0
Scaling Compute Is Not All You Need for Adversarial Robustness | Code | 0
CAMP in the Odyssey: Provably Robust Reinforcement Learning with Certified Radius Maximization | Code | 0
ScAR: Scaling Adversarial Robustness for LiDAR Object Detection | Code | 0
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation | Code | 0
Feature Statistics with Uncertainty Help Adversarial Robustness | Code | 0
Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information | Code | 0
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO | Code | 0
An Adversarial Robustness Perspective on the Topology of Neural Networks | Code | 0
A Closer Look at Memorization in Deep Networks | Code | 0
Certified Adversarial Robustness with Additive Noise | Code | 0
Adversarial robustness of amortized Bayesian inference | Code | 0
Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis | Code | 0
Adversarial Robustness vs Model Compression, or Both? | Code | 0
CalFAT: Calibrated Federated Adversarial Training with Label Skewness | Code | 0
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks | Code | 0
Understanding Adversarial Robustness Against On-manifold Adversarial Examples | Code | 0
Adversarial Robustness Analysis of Vision-Language Models in Medical Image Segmentation | Code | 0
Page 53 of 70

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified
3 | T5 (single model) | Accuracy | 0.57 | | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified
9 | BERT (single model) | Accuracy | 0.34 | | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified
4 | GLOT-DR | Accuracy | 84.13 | | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified