SOTAVerified

Adversarial Defense

Competitions with currently unpublished results: (none listed)

Papers

Showing 1–50 of 403 papers

| Title | Status | Hype |
|---|---|---|
| Benchmarking Neural Network Robustness to Common Corruptions and Perturbations | Code | 2 |
| Revisiting Adversarial Training under Long-Tailed Distributions | Code | 2 |
| Mist: Towards Improved Adversarial Examples for Diffusion Models | Code | 2 |
| Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models | Code | 2 |
| Open-set Adversarial Defense | Code | 1 |
| RAILS: A Robust Adversarial Immune-inspired Learning System | Code | 1 |
| Real-world Adversarial Defense against Patch Attacks based on Diffusion Model | Code | 1 |
| Enhancing Adversarial Robustness via Score-Based Optimization | Code | 1 |
| Improving Adversarial Robustness via Mutual Information Estimation | Code | 1 |
| Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | Code | 1 |
| Multitask Learning Strengthens Adversarial Robustness | Code | 1 |
| On Evaluating Adversarial Robustness | Code | 1 |
| Perceptual Adversarial Robustness: Defense Against Unseen Threat Models | Code | 1 |
| Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers | Code | 1 |
| CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1 |
| Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval | Code | 1 |
| GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1 |
| Certified Adversarial Robustness via Randomized Smoothing | Code | 1 |
| DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks | Code | 1 |
| Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models | Code | 1 |
| Geometric Adversarial Attacks and Defenses on 3D Point Clouds | Code | 1 |
| Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses | Code | 1 |
| LiBRe: A Practical Bayesian Approach to Adversarial Detection | Code | 1 |
| LPF-Defense: 3D Adversarial Defense based on Frequency Analysis | Code | 1 |
| Among Us: Adversarially Robust Collaborative Perception by Consensus | Code | 1 |
| Eliminate Deviation with Deviation for Data Augmentation and a General Multi-modal Data Learning Method | Code | 1 |
| Open-set Adversarial Defense with Clean-Adversarial Mutual Learning | Code | 1 |
| PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning | Code | 1 |
| A Person Re-identification Data Augmentation Method with Adversarial Defense Effect | Code | 1 |
| Perturbation Inactivation Based Adversarial Defense for Face Recognition | Code | 1 |
| Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations | Code | 1 |
| Adversarial Training for Free! | Code | 1 |
| AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models | Code | 1 |
| Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1 |
| Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios | Code | 1 |
| Boundary thickness and robustness in learning models | Code | 1 |
| CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense | Code | 1 |
| CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval | Code | 1 |
| DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training | Code | 1 |
| DiffDefense: Defending against Adversarial Attacks via Diffusion Models | Code | 1 |
| DISCO: Adversarial Defense with Local Implicit Functions | Code | 1 |
| DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks | Code | 1 |
| ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense | Code | 1 |
| Fast Certified Robust Training with Short Warmup | Code | 1 |
| Information Obfuscation of Graph Neural Networks | Code | 1 |
| GUARD: Graph Universal Adversarial Defense | Code | 1 |
| Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness | Code | 1 |
| Learnable Boundary Guided Adversarial Training | Code | 1 |
| Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks | Code | 1 |
| Decoupled Kullback-Leibler Divergence Loss | Code | 1 |
Page 1 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | WRN-28-10 | Accuracy | 90.03 | | Unverified |
| 2 | Diffusion Classifier | Accuracy | 89.85 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 84.3 | | Unverified |
| 4 | Ours (Stochastic-LWTA/PGD/WideResNet-34-5) | Accuracy | 83.4 | | Unverified |
| 5 | Ours (Stochastic-LWTA/PGD/WideResNet-34-1) | Accuracy | 81.87 | | Unverified |
| 6 | ResNet18 (TRADES-ANCRA/PGD-40) | Accuracy | 81.7 | | Unverified |
| 7 | Stochastic-LWTA/PGD/WideResNet-34-5 | Attack: AutoAttack | 81.22 | | Unverified |
| 8 | PCL (against PGD, white box) | Accuracy | 46.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SAT-EfficientNet-L1 | Accuracy | 58.6 | | Unverified |
| 2 | LLR-ResNet-152 | Accuracy | 47 | | Unverified |
| 3 | ResNet-152 free-m=4 | Accuracy | 36 | | Unverified |
| 4 | ResNet-101 free-m=4 | Accuracy | 34.3 | | Unverified |
| 5 | ResNet-50 free-m=4 | Accuracy | 31.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet101 | Accuracy | 99.8 | | Unverified |
| 2 | InceptionV3 | Accuracy | 98.6 | | Unverified |
| 3 | Feature Denoising | Accuracy | 49.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-152 Denoise | Accuracy | 42.8 | | Unverified |
| 2 | ResNeXt-101 DenoiseAll | Accuracy | 40.4 | | Unverified |
| 3 | ResNet-152 | Accuracy | 39 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Defense GAN | Accuracy | 0.85 | | Unverified |
| 2 | PuVAE | Accuracy | 0.81 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Feature Denoising | Accuracy | 50.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Auto Encoder-Block Switching defense with GradCAM | Accuracy | 88.54 | | Unverified |