
Adversarial Attack

An Adversarial Attack is a technique for finding an input perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
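
As a concrete illustration of the definition above, here is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks. It is not the method of the paper cited above; the model, the epsilon budget, and the [0, 1] input range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Perturb inputs x by one signed-gradient step so the loss on labels y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon in the
    # L-infinity norm so the change stays small and hard to perceive.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A larger epsilon makes the attack stronger but also more visible; values around 8/255 are a common choice for image classifiers.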

Papers

Showing 1051–1075 of 1808 papers

Title | Status | Hype
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping | — | 0
Boosting Adversarial Transferability of MLP-Mixer | — | 0
Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity | — | 0
How Sampling Impacts the Robustness of Stochastic Neural Networks | — | 0
Enhancing the Transferability via Feature-Momentum Adversarial Attack | — | 0
A Mask-Based Adversarial Defense Scheme | — | 0
Testing robustness of predictions of trained classifiers against naturally occurring perturbations | — | 0
Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors | — | 0
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples | — | 0
Residue-Based Natural Language Adversarial Attack Detection | Code | 0
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case | — | 0
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks | — | 0
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization | — | 0
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning | — | 0
SecureSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition | — | 0
Adversarial Neon Beam: A Light-based Physical Attack to DNNs | — | 0
Zero-Query Transfer Attacks on Context-Aware Object Detectors | — | 0
Exploring Frequency Adversarial Attacks for Face Forgery Detection | — | 0
Boosting Black-Box Adversarial Attacks with Meta Learning | — | 0
Text Adversarial Purification as Defense against Adversarial Attacks | — | 0
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies | — | 0
Enhancing Transferability of Adversarial Examples with Spatial Momentum | — | 0
Input-specific Attention Subnetworks for Adversarial Detection | — | 0
Exploring High-Order Structure for Robust Graph Structure Learning | — | 0
A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement | — | 0
Page 43 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 (1) | Community Verified
2 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
7 | XU-Net | Robust Accuracy | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified
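
Several rows above report robust accuracy under an attack, e.g. "Attack: PGD20", meaning test accuracy on inputs perturbed by 20 steps of projected gradient descent. Below is a minimal sketch of how such a number is typically computed, assuming a PyTorch classifier with inputs in [0, 1]; the step size alpha and budget epsilon are common defaults assumed for illustration, not values taken from the tables.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """20-step L-infinity projected gradient descent (PGD-20)."""
    # Random start inside the epsilon-ball around the clean inputs.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back onto the epsilon-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader):
    """Accuracy on PGD-20-perturbed test inputs, in percent."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)  # needs gradients, so not under no_grad
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack, used in the other rows, is a stronger parameter-free ensemble of attacks and typically yields lower robust accuracy than PGD-20 for the same model.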