SOTAVerified

Adversarial Defense

Competitions with currently unpublished results: none listed.

Papers

Showing 26–50 of 403 papers

Title | Status | Hype
Threat Model-Agnostic Adversarial Defense using Diffusion Models | Code | 1
Perturbation Inactivation Based Adversarial Defense for Face Recognition | Code | 1
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks | Code | 1
GUARD: Graph Universal Adversarial Defense | Code | 1
CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval | Code | 1
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis | Code | 1
Open-set Adversarial Defense with Clean-Adversarial Mutual Learning | Code | 1
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios | Code | 1
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework | Code | 1
Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization | Code | 1
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection | Code | 1
Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness | Code | 1
Person Re-identification Method Based on Color Attack and Joint Defence | Code | 1
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks | Code | 1
RAILS: A Robust Adversarial Immune-inspired Learning System | Code | 1
Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | Code | 1
The art of defense: letting networks fool the attacker | Code | 1
Fast Certified Robust Training with Short Warmup | Code | 1
LiBRe: A Practical Bayesian Approach to Adversarial Detection | Code | 1
Sandwich Batch Normalization: A Drop-In Replacement for Feature Distribution Heterogeneity | Code | 1
Eliminate Deviation with Deviation for Data Augmentation and a General Multi-modal Data Learning Method | Code | 1
A Person Re-identification Data Augmentation Method with Adversarial Defense Effect | Code | 1
Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference | Code | 1
Geometric Adversarial Attacks and Defenses on 3D Point Clouds | Code | 1
Page 2 of 17

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | WRN-28-10 | Accuracy | 90.03 | — | Unverified
2 | Diffusion Classifier | Accuracy | 89.85 | — | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 84.3 | — | Unverified
4 | Ours (Stochastic-LWTA/PGD/WideResNet-34-5) | Accuracy | 83.4 | — | Unverified
5 | Ours (Stochastic-LWTA/PGD/WideResNet-34-1) | Accuracy | 81.87 | — | Unverified
6 | ResNet18 (TRADES-ANCRA/PGD-40) | Accuracy | 81.7 | — | Unverified
7 | Stochastic-LWTA/PGD/WideResNet-34-5 | Attack: AutoAttack | 81.22 | — | Unverified
8 | PCL (against PGD, white box) | Accuracy | 46.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SAT-EfficientNet-L1 | Accuracy | 58.6 | — | Unverified
2 | LLR-ResNet-152 | Accuracy | 47 | — | Unverified
3 | ResNet-152 free-m=4 | Accuracy | 36 | — | Unverified
4 | ResNet-101 free-m=4 | Accuracy | 34.3 | — | Unverified
5 | ResNet-50 free-m=4 | Accuracy | 31.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet101 | Accuracy | 99.8 | — | Unverified
2 | InceptionV3 | Accuracy | 98.6 | — | Unverified
3 | Feature Denoising | Accuracy | 49.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-152 Denoise | Accuracy | 42.8 | — | Unverified
2 | ResNeXt-101 DenoiseAll | Accuracy | 40.4 | — | Unverified
3 | ResNet-152 | Accuracy | 39 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Defense GAN | Accuracy | 0.85 | — | Unverified
2 | PuVAE | Accuracy | 0.81 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Feature Denoising | Accuracy | 50.6 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Auto Encoder-Block Switching defense with GradCAM | Accuracy | 88.54 | — | Unverified
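The accuracy figures in these tables are typically robust accuracy: the fraction of adversarially perturbed inputs (e.g., under PGD or AutoAttack) that the model still classifies correctly. A minimal sketch of how such a percentage is computed, assuming the model's predictions on adversarial examples and the ground-truth labels are available as plain lists (hypothetical data, not from any leaderboard entry):

```python
def robust_accuracy(adv_predictions, labels):
    """Percentage of adversarial examples still classified correctly."""
    assert len(adv_predictions) == len(labels) and labels
    correct = sum(p == y for p, y in zip(adv_predictions, labels))
    return 100.0 * correct / len(labels)

# Hypothetical toy run: 9 of 10 adversarial examples survive the attack.
print(robust_accuracy([0, 1, 2, 3, 4, 5, 6, 7, 8, 0],
                      [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))  # → 90.0
```

Verification of a claimed entry would amount to regenerating the adversarial examples under the stated threat model and recomputing this number.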