SOTAVerified

Adversarial Defense

Competitions with currently unpublished results:

Papers

Showing 1–50 of 403 papers

| Title | Status | Hype |
|---|---|---|
| Revisiting Adversarial Training under Long-Tailed Distributions | Code | 2 |
| Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models | Code | 2 |
| Mist: Towards Improved Adversarial Examples for Diffusion Models | Code | 2 |
| Benchmarking Neural Network Robustness to Common Corruptions and Perturbations | Code | 2 |
| GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1 |
| CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense | Code | 1 |
| Real-world Adversarial Defense against Patch Attacks based on Diffusion Model | Code | 1 |
| Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models | Code | 1 |
| Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval | Code | 1 |
| DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training | Code | 1 |
| DiffDefense: Defending against Adversarial Attacks via Diffusion Models | Code | 1 |
| Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models | Code | 1 |
| AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models | Code | 1 |
| Enhancing Adversarial Robustness via Score-Based Optimization | Code | 1 |
| DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks | Code | 1 |
| Robust Classification via a Single Diffusion Model | Code | 1 |
| Decoupled Kullback-Leibler Divergence Loss | Code | 1 |
| Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified ℓp Attacks | Code | 1 |
| Among Us: Adversarially Robust Collaborative Perception by Consensus | Code | 1 |
| TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization | Code | 1 |
| DISCO: Adversarial Defense with Local Implicit Functions | Code | 1 |
| Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks | Code | 1 |
| Scaling Adversarial Training to Large Perturbation Bounds | Code | 1 |
| Visual Prompting for Adversarial Robustness | Code | 1 |
| Improving Adversarial Robustness via Mutual Information Estimation | Code | 1 |
| Threat Model-Agnostic Adversarial Defense using Diffusion Models | Code | 1 |
| Perturbation Inactivation Based Adversarial Defense for Face Recognition | Code | 1 |
| CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1 |
| Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks | Code | 1 |
| GUARD: Graph Universal Adversarial Defense | Code | 1 |
| CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval | Code | 1 |
| LPF-Defense: 3D Adversarial Defense based on Frequency Analysis | Code | 1 |
| Open-set Adversarial Defense with Clean-Adversarial Mutual Learning | Code | 1 |
| Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios | Code | 1 |
| Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework | Code | 1 |
| Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization | Code | 1 |
| Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection | Code | 1 |
| Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness | Code | 1 |
| Person Re-identification Method Based on Color Attack and Joint Defence | Code | 1 |
| DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks | Code | 1 |
| RAILS: A Robust Adversarial Immune-inspired Learning System | Code | 1 |
| Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | Code | 1 |
| The art of defense: letting networks fool the attacker | Code | 1 |
| Fast Certified Robust Training with Short Warmup | Code | 1 |
| LiBRe: A Practical Bayesian Approach to Adversarial Detection | Code | 1 |
| Sandwich Batch Normalization: A Drop-In Replacement for Feature Distribution Heterogeneity | Code | 1 |
| Eliminate Deviation with Deviation for Data Augmentation and a General Multi-modal Data Learning Method | Code | 1 |
| A Person Re-identification Data Augmentation Method with Adversarial Defense Effect | Code | 1 |
| Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference | Code | 1 |
| Geometric Adversarial Attacks and Defenses on 3D Point Clouds | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | WRN-28-10 | Accuracy | 90.03 | | Unverified |
| 2 | Diffusion Classifier | Accuracy | 89.85 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 84.3 | | Unverified |
| 4 | Ours (Stochastic-LWTA/PGD/WideResNet-34-5) | Accuracy | 83.4 | | Unverified |
| 5 | Ours (Stochastic-LWTA/PGD/WideResNet-34-1) | Accuracy | 81.87 | | Unverified |
| 6 | ResNet18 (TRADES-ANCRA/PGD-40) | Accuracy | 81.7 | | Unverified |
| 7 | Stochastic-LWTA/PGD/WideResNet-34-5 | Attack: AutoAttack | 81.22 | | Unverified |
| 8 | PCL (against PGD, white box) | Accuracy | 46.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SAT-EfficientNet-L1 | Accuracy | 58.6 | | Unverified |
| 2 | LLR-ResNet-152 | Accuracy | 47 | | Unverified |
| 3 | ResNet-152 free-m=4 | Accuracy | 36 | | Unverified |
| 4 | ResNet-101 free-m=4 | Accuracy | 34.3 | | Unverified |
| 5 | ResNet-50 free-m=4 | Accuracy | 31.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet101 | Accuracy | 99.8 | | Unverified |
| 2 | InceptionV3 | Accuracy | 98.6 | | Unverified |
| 3 | Feature Denoising | Accuracy | 49.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-152 Denoise | Accuracy | 42.8 | | Unverified |
| 2 | ResNeXt-101 DenoiseAll | Accuracy | 40.4 | | Unverified |
| 3 | ResNet-152 | Accuracy | 39 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Defense GAN | Accuracy | 0.85 | | Unverified |
| 2 | PuVAE | Accuracy | 0.81 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Feature Denoising | Accuracy | 50.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Auto Encoder-Block Switching defense with GradCAM | Accuracy | 88.54 | | Unverified |
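Every entry above carries a claimed metric, an empty Verified column, and an Unverified status. A minimal sketch of the kind of bookkeeping this implies, assuming a hypothetical tolerance-based rule for when a reproduced number confirms a claim (the site's actual verification criterion is not stated here):

```python
from typing import Optional


def verification_status(claimed: float,
                        verified: Optional[float],
                        tol: float = 0.5) -> str:
    """Classify a benchmark entry.

    `verified` is None until a reproduction run has been recorded;
    the tolerance `tol` (in metric points) is a hypothetical choice,
    not the site's documented rule.
    """
    if verified is None:
        return "Unverified"  # no reproduction recorded yet
    if abs(claimed - verified) <= tol:
        return "Verified"    # reproduced number matches the claim
    return "Discrepancy"     # reproduction disagrees with the claim


# Every row above has an empty Verified column, hence "Unverified":
print(verification_status(90.03, None))   # -> Unverified
print(verification_status(89.85, 89.70))  # -> Verified
```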