SOTAVerified

Adversarial Defense

Competitions with currently unpublished results:

Papers

Showing 51–100 of 403 papers

Title | Status | Hype
Certified Adversarial Robustness via Randomized Smoothing | Code | 1
Adversarial Training for Free! | Code | 1
Boundary thickness and robustness in learning models | Code | 1
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization | Code | 1
Enhancing Adversarial Robustness via Score-Based Optimization | Code | 1
Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations | Code | 1
Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models | Code | 1
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks | Code | 1
Geometric Adversarial Attacks and Defenses on 3D Point Clouds | Code | 1
Decoupled Kullback-Leibler Divergence Loss | Code | 1
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework | Code | 1
ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense | Code | 1
Perturbation Inactivation Based Adversarial Defense for Face Recognition | Code | 1
CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense | Code | 1
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1
Among Us: Adversarially Robust Collaborative Perception by Consensus | Code | 1
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval | Code | 1
GUARD: Graph Universal Adversarial Defense | Code | 1
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses | Code | 1
Eliminate Deviation with Deviation for Data Augmentation and a General Multi-modal Data Learning Method | Code | 1
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios | Code | 1
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness | Code | 1
Learnable Boundary Guided Adversarial Training | Code | 1
A Person Re-identification Data Augmentation Method with Adversarial Defense Effect | Code | 1
Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference | Code | 1
On Evaluating Adversarial Robustness | Code | 1
Adversarial Defense of Image Classification Using a Variational Auto-Encoder | Code | 0
Erasing, Transforming, and Noising Defense Network for Occluded Person Re-Identification | Code | 0
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees | Code | 0
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network | Code | 0
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies | Code | 0
Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks | Code | 0
A Few Large Shifts: Layer-Inconsistency Based Minimal Overhead Adversarial Example Detection | Code | 0
Efficient Formal Safety Analysis of Neural Networks | Code | 0
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models | Code | 0
AdvFAS: A robust face anti-spoofing framework against adversarial examples | Code | 0
EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling | Code | 0
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation | Code | 0
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch | Code | 0
Adversarial Defense by Suppressing High-frequency Components | Code | 0
ADAPT to Robustify Prompt Tuning Vision Transformers | Code | 0
Adversarial Robustness via Fisher-Rao Regularization | Code | 0
Adversarial Defense by Stratified Convolutional Sparse Coding | Code | 0
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks | Code | 0
CAAD 2018: Generating Transferable Adversarial Examples | Code | 0
Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients | Code | 0
DiffuseDef: Improved Robustness to Adversarial Attacks via Iterative Denoising | Code | 0
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning | Code | 0
Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach | Code | 0
Page 2 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | WRN-28-10 | Accuracy | 90.03 | — | Unverified
2 | Diffusion Classifier | Accuracy | 89.85 | — | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 84.3 | — | Unverified
4 | Ours (Stochastic-LWTA/PGD/WideResNet-34-5) | Accuracy | 83.4 | — | Unverified
5 | Ours (Stochastic-LWTA/PGD/WideResNet-34-1) | Accuracy | 81.87 | — | Unverified
6 | ResNet18 (TRADES-ANCRA/PGD-40) | Accuracy | 81.7 | — | Unverified
7 | Stochastic-LWTA/PGD/WideResNet-34-5 | Attack: AutoAttack | 81.22 | — | Unverified
8 | PCL (against PGD, white box) | Accuracy | 46.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SAT-EfficientNet-L1 | Accuracy | 58.6 | — | Unverified
2 | LLR-ResNet-152 | Accuracy | 47 | — | Unverified
3 | ResNet-152 free-m=4 | Accuracy | 36 | — | Unverified
4 | ResNet-101 free-m=4 | Accuracy | 34.3 | — | Unverified
5 | ResNet-50 free-m=4 | Accuracy | 31.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet101 | Accuracy | 99.8 | — | Unverified
2 | InceptionV3 | Accuracy | 98.6 | — | Unverified
3 | Feature Denoising | Accuracy | 49.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-152 Denoise | Accuracy | 42.8 | — | Unverified
2 | ResNeXt-101 DenoiseAll | Accuracy | 40.4 | — | Unverified
3 | ResNet-152 | Accuracy | 39 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Defense GAN | Accuracy | 0.85 | — | Unverified
2 | PuVAE | Accuracy | 0.81 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feature Denoising | Accuracy | 50.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Auto Encoder-Block Switching defense with GradCAM | Accuracy | 88.54 | — | Unverified
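Several of the benchmark rows above report accuracy under gradient-based attacks such as PGD. As a minimal illustration of what such an evaluation involves (not the method of any listed paper, and far simpler than AutoAttack, which adds restarts and adaptive step sizes), here is a toy L-infinity PGD loop on a hypothetical binary logistic model:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Toy L-inf PGD against a binary logistic model p = sigmoid(w.x + b).

    Repeatedly steps in the sign of the loss gradient w.r.t. the input,
    then projects back into the eps-ball around the clean input x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = (p - y) * w                      # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
    return x_adv

# Toy example with random weights and a positive-class input.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x, y = rng.normal(size=4), 1.0
x_adv = pgd_attack(x, y, w, b)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9)  # perturbation stays in the ball
```

Robust accuracy, as reported in the tables, is then simply clean accuracy recomputed on `x_adv` over a test set; a claimed number is only as strong as the attack used to produce it.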