SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks.
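Many of the attacks these papers evaluate against build on gradient-based input perturbations. As a minimal, self-contained sketch of the idea (the toy logistic model and every parameter value below are illustrative assumptions, not taken from any listed paper or benchmark), the classic Fast Gradient Sign Method perturbs each input feature by a budget eps in the direction that increases the model's loss:

```python
import math

# Illustrative sketch of the Fast Gradient Sign Method (FGSM) on a toy
# binary logistic classifier. All weights and inputs here are made up
# for demonstration; real evaluations use trained deep networks.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift each feature of x by eps in the sign of the gradient of the
    binary cross-entropy loss of the logistic model (w, b) w.r.t. x."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    # For binary cross-entropy, d(loss)/d(x_i) = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# A clean point the model classifies correctly (p > 0.5 for class 1) ...
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1

# ... becomes misclassified after an eps-bounded FGSM step.
x_adv = fgsm_perturb(x, w, b, y, eps=0.9)
z_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(sigmoid(z_adv))  # drops below 0.5: the perturbed point flips class
```

Robust-accuracy numbers like those in the tables below are typically reported as accuracy on such eps-bounded perturbations rather than on clean inputs.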

Papers

Showing 301–350 of 1746 papers

Title | Status | Hype
Cauchy-Schwarz Divergence Information Bottleneck for Regression | Code | 1
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data | Code | 1
(Certified!!) Adversarial Robustness for Free! | Code | 1
CFA: Class-wise Calibrated Fair Adversarial Training | Code | 1
Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | Code | 1
Neural Networks with Recurrent Generative Feedback | Code | 1
A Self-supervised Approach for Adversarial Robustness | Code | 1
Certified Adversarial Robustness via Randomized Smoothing | Code | 1
Human-in-the-Loop Generation of Adversarial Texts: A Case Study on Tibetan Script | Code | 1
Certified Training: Small Boxes are All You Need | Code | 1
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks | Code | 1
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection | Code | 1
Part-Based Models Improve Adversarial Robustness | Code | 1
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP | Code | 1
Adversarial Robustness of Bottleneck Injected Deep Neural Networks for Task-Oriented Communication | Code | 1
On the Adversarial Robustness of Camera-based 3D Object Detection | Code | 1
On the Duality Between Sharpness-Aware Minimization and Adversarial Training | Code | 1
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving | Code | 1
Adversarial Robustness of Deep Convolutional Candlestick Learner | Code | 1
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data | Code | 1
On the Adversarial Robustness of Vision Transformers | Code | 1
TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases | Code | 1
Consistency Regularization for Adversarial Robustness | Code | 1
PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies | Code | 1
An Adversarial Robustness Perspective on the Topology of Neural Networks | Code | 0
Feature Statistics with Uncertainty Help Adversarial Robustness | Code | 0
Analysis and Extensions of Adversarial Training for Video Classification | Code | 0
An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense | Code | 0
Feature Denoising for Improving Adversarial Robustness | Code | 0
Adversarial Attacks on Data Attribution | Code | 0
Fast Adversarial Training with Smooth Convergence | Code | 0
Adversarial Robust Memory-Based Continual Learner | Code | 0
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models | Code | 0
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | Code | 0
Adversarially Robust Decision Transformer | Code | 0
Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors | Code | 0
FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods | Code | 0
FaiR-N: Fair and Robust Neural Networks for Structured Data | Code | 0
Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms | Code | 0
A Hierarchical Assessment of Adversarial Severity | Code | 0
Scaling Trends in Language Model Robustness | Code | 0
Exploring Adversarial Robustness of Deep Metric Learning | Code | 0
A Closer Look at Memorization in Deep Networks | Code | 0
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation | Code | 0
Expressive Losses for Verified Robustness via Convex Combinations | Code | 0
Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective | Code | 0
Role of Spatial Context in Adversarial Robustness for Object Detection | Code | 0
Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis | Code | 0
Adversarial Neural Pruning with Latent Vulnerability Suppression | Code | 0
Understanding the Robustness of Graph Neural Networks against Adversarial Attacks | Code | 0
Page 7 of 35

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified
3 | T5 (single model) | Accuracy | 0.57 | | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified
9 | BERT (single model) | Accuracy | 0.34 | | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified
4 | GLOT-DR | Accuracy | 84.13 | | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified