SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks, i.e., inputs deliberately perturbed to induce incorrect predictions.

Papers

Showing 51–100 of 1746 papers

Title | Status | Hype
A Self-supervised Approach for Adversarial Robustness | Code | 1
Certified Training: Small Boxes are All You Need | Code | 1
AdvDrop: Adversarial Attack to DNNs by Dropping Information | Code | 1
Achieving robustness in classification using optimal transport with hinge regularization | Code | 1
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks | Code | 1
Adversarial Attack and Defense in Deep Ranking | Code | 1
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems | Code | 1
Adversarial Attack on Deep Learning-Based Splice Localization | Code | 1
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data | Code | 1
Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal | Code | 1
Adversarial Attacks on Graph Classification via Bayesian Optimisation | Code | 1
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning | Code | 1
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning | Code | 1
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization | Code | 1
Adversarial vulnerability of powerful near out-of-distribution detection | Code | 1
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks | Code | 1
CausalAdv: Adversarial Robustness through the Lens of Causality | Code | 1
Adversarial Contrastive Learning via Asymmetric InfoNCE | Code | 1
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks | Code | 1
A Light Recipe to Train Robust Vision Transformers | Code | 1
Adversarial Robustness of Deep Convolutional Candlestick Learner | Code | 1
Adversarial Robustness under Long-Tailed Distribution | Code | 1
Adversarial Robustness via Random Projection Filters | Code | 1
Adversarial Robustness of Bottleneck Injected Deep Neural Networks for Task-Oriented Communication | Code | 1
Adversarial Visual Robustness by Causal Intervention | Code | 1
Adversarial Vulnerability of Randomized Ensembles | Code | 1
AdvRush: Searching for Adversarially Robust Neural Architectures | Code | 1
On the Adversarial Robustness of Vision Transformers | Code | 1
Adversarial Image Color Transformations in Explicit Color Filter Space | Code | 1
DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness | Code | 1
A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow | Code | 1
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach | Code | 1
A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification | Code | 1
Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness | Code | 1
Adversarial Prompt Tuning for Vision-Language Models | Code | 1
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies | Code | 1
Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics | Code | 1
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Adversarially-Aware Robust Object Detector | Code | 1
Adversarial Robustification via Text-to-Image Diffusion Models | Code | 1
Adversarial Attacks on ML Defense Models Competition | Code | 1
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks | Code | 1
Adversarially Robust Distillation | Code | 1
Adversarial Robustness Against the Union of Multiple Perturbation Models | Code | 1
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs | Code | 1
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness | Code | 1
CARBEN: Composite Adversarial Robustness Benchmark | Code | 1
Adversarial Robustness against Multiple and Single l_p-Threat Models via Quick Fine-Tuning of Robust Classifiers | Code | 1
Page 2 of 35

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | n/a | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | n/a | Unverified
3 | T5 (single model) | Accuracy | 0.57 | n/a | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | n/a | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | n/a | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | n/a | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | n/a | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | n/a | Unverified
9 | BERT (single model) | Accuracy | 0.34 | n/a | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | n/a | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | n/a | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | n/a | Unverified
4 | GLOT-DR | Accuracy | 84.13 | n/a | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | n/a | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | n/a | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | n/a | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | n/a | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | n/a | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | n/a | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | n/a | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | n/a | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | n/a | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | n/a | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | n/a | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | n/a | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | n/a | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | n/a | Unverified