SOTAVerified

Adversarial Defense

Competitions with currently unpublished results: none listed.

Papers

Showing 251–300 of 403 papers

| Title | Status | Hype |
|-------|--------|------|
| Towards Black-box Adversarial Example Detection: A Data Reconstruction-based Method | — | 0 |
| Training Robust Deep Neural Networks via Adversarial Noise Propagation | — | 0 |
| Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders | — | 0 |
| TREATED: Towards Universal Defense against Textual Adversarial Attacks | — | 0 |
| Tricking Adversarial Attacks To Fail | — | 0 |
| Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness | — | 0 |
| Universal Learning Approach for Adversarial Defense | — | 0 |
| Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series | — | 0 |
| WaveTransform: Crafting Adversarial Examples via Input Decomposition | — | 0 |
| Weakly Supervised Invariant Representation Learning Via Disentangling Known and Unknown Nuisance Factors | — | 0 |
| Mitigating Adversarial Effects Through Randomization | Code | 0 |
| SMUG: Towards robust MRI reconstruction by smoothed unrolling | Code | 0 |
| Modeling Adversarial Noise for Adversarial Training | Code | 0 |
| Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs | Code | 0 |
| Carefully Blending Adversarial Training, Purification, and Aggregation Improves Adversarial Robustness | Code | 0 |
| Natural Language Adversarial Defense through Synonym Encoding | Code | 0 |
| Neural Fingerprints for Adversarial Attack Detection | Code | 0 |
| NOMARO: Defending against Adversarial Attacks by NOMA-Inspired Reconstruction Operation | Code | 0 |
| Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Code | 0 |
| CAAD 2018: Generating Transferable Adversarial Examples | Code | 0 |
| An Analysis of Robustness of Non-Lipschitz Networks | Code | 0 |
| Stochastic Activation Pruning for Robust Adversarial Defense | Code | 0 |
| Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach | Code | 0 |
| Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense | Code | 0 |
| mFI-PSO: A Flexible and Effective Method in Adversarial Image Generation for Deep Neural Networks | Code | 0 |
| Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack | Code | 0 |
| PaRoT: A Practical Framework for Robust Deep Neural Network Training | Code | 0 |
| Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense | Code | 0 |
| Robustness for Non-Parametric Classification: A Generic Attack and Defense | Code | 0 |
| LSA: Modeling Aspect Sentiment Coherency via Local Sentiment Aggregation | Code | 0 |
| A Closer Look at the Adversarial Robustness of Deep Equilibrium Models | Code | 0 |
| A Simple and Yet Fairly Effective Defense for Graph Neural Networks | Code | 0 |
| Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking | Code | 0 |
| Adversarial Defense via Learning to Generate Diverse Attacks | Code | 0 |
| PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning | Code | 0 |
| Towards Unified Robustness Against Both Backdoor and Adversarial Attacks | Code | 0 |
| Privacy Risks of Securing Machine Learning Models against Adversarial Examples | Code | 0 |
| Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge | Code | 0 |
| Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing | Code | 0 |
| Are Generative Classifiers More Robust to Adversarial Attacks? | Code | 0 |
| Delving into Transferable Adversarial Examples and Black-box Attacks | Code | 0 |
| VideoPure: Diffusion-based Adversarial Purification for Video Recognition | Code | 0 |
| Detection and Defense of Unlearnable Examples | Code | 0 |
| Detection of Adversarial Examples in NLP: Benchmark and Baseline via Robust Density Estimation | Code | 0 |
| Detection of Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation | Code | 0 |
| Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation | Code | 0 |
| Detection of Word Adversarial Examples in NLP: Benchmark and Baseline via Robust Density Estimation | Code | 0 |
| Defensive Few-shot Learning | Code | 0 |
| Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | Code | 0 |
| DiffuseDef: Improved Robustness to Adversarial Attacks via Iterative Denoising | Code | 0 |
Page 6 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | WRN-28-10 | Accuracy | 90.03 | — | Unverified |
| 2 | Diffusion Classifier | Accuracy | 89.85 | — | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 84.3 | — | Unverified |
| 4 | Ours (Stochastic-LWTA/PGD/WideResNet-34-5) | Accuracy | 83.4 | — | Unverified |
| 5 | Ours (Stochastic-LWTA/PGD/WideResNet-34-1) | Accuracy | 81.87 | — | Unverified |
| 6 | ResNet18 (TRADES-ANCRA/PGD-40) | Accuracy | 81.7 | — | Unverified |
| 7 | Stochastic-LWTA/PGD/WideResNet-34-5 | Attack: AutoAttack | 81.22 | — | Unverified |
| 8 | PCL (against PGD, white box) | Accuracy | 46.7 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SAT-EfficientNet-L1 | Accuracy | 58.6 | — | Unverified |
| 2 | LLR-ResNet-152 | Accuracy | 47 | — | Unverified |
| 3 | ResNet-152 free-m=4 | Accuracy | 36 | — | Unverified |
| 4 | ResNet-101 free-m=4 | Accuracy | 34.3 | — | Unverified |
| 5 | ResNet-50 free-m=4 | Accuracy | 31.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet101 | Accuracy | 99.8 | — | Unverified |
| 2 | InceptionV3 | Accuracy | 98.6 | — | Unverified |
| 3 | Feature Denoising | Accuracy | 49.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-152 Denoise | Accuracy | 42.8 | — | Unverified |
| 2 | ResNeXt-101 DenoiseAll | Accuracy | 40.4 | — | Unverified |
| 3 | ResNet-152 | Accuracy | 39 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Defense GAN | Accuracy | 0.85 | — | Unverified |
| 2 | PuVAE | Accuracy | 0.81 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Feature Denoising | Accuracy | 50.6 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Auto Encoder-Block Switching defense with GradCAM | Accuracy | 88.54 | — | Unverified |
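The "Claimed" figures above are robust-accuracy numbers: accuracy measured on inputs that an attack (e.g. PGD or AutoAttack, as named in the tables) has perturbed within a small norm ball. As a minimal illustration of how such a number is produced, the sketch below attacks a toy linear classifier with L-infinity PGD and reports clean vs. robust accuracy. Everything here (the data, the classifier, the `pgd_attack` helper) is illustrative and unrelated to any listed defense.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs in 2-D, labels 0 and 1.
X = np.vstack([rng.normal(-1, 0.3, (200, 2)), rng.normal(1, 0.3, (200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])

w, b = np.array([1.0, 1.0]), 0.0  # fixed linear "model": logit = X @ w + b

def predict(X):
    return (X @ w + b > 0).astype(float)

def pgd_attack(X, y, eps=0.5, alpha=0.1, steps=10):
    """L-inf PGD against the linear model (hypothetical helper)."""
    X_adv = X.copy()
    for _ in range(steps):
        # For a linear model the input gradient of the logit is just w;
        # push each point against its true label to lower its margin.
        grad = np.where(y[:, None] == 1, -w, w)
        X_adv = X_adv + alpha * np.sign(grad)
        # Project back into the eps-ball around the original inputs.
        X_adv = np.clip(X_adv, X - eps, X + eps)
    return X_adv

clean_acc = (predict(X) == y).mean()
robust_acc = (predict(pgd_attack(X, y)) == y).mean()
print(f"clean accuracy:  {clean_acc:.3f}")
print(f"robust accuracy: {robust_acc:.3f}")
```

A verifier reproducing a "Claimed" entry would run the same loop with the published model and the stated attack budget; robust accuracy can never exceed clean accuracy under this kind of untargeted attack, which is why large gaps between the two columns are the interesting signal.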