Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be so small that it is imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
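
As a concrete illustration, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest such attacks. The model, inputs x, labels y, and budget epsilon are generic placeholders, not tied to any particular paper on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge every input value by epsilon in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Assumes inputs are scaled to [0, 1]; keep the result in range.
    return x_adv.clamp(0, 1).detach()
```

Even though no value moves by more than epsilon, a step like this is often enough to flip the model's prediction.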

Papers

Showing 1426–1450 of 1808 papers

Title | Status | Hype
Near Optimal Adversarial Attacks on Stochastic Bandits and Defenses with Smoothed Responses | - | 0
A New Perspective on Stabilizing GANs training: Direct Adversarial Training | Code | 0
Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization | - | 0
Improving adversarial robustness of deep neural networks by using semantic information | - | 0
Model Robustness with Text Classification: Semantic-preserving adversarial attacks | - | 0
FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0
Visual Attack and Defense on Text | - | 0
Stabilizing Deep Tomographic Reconstruction | - | 0
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks | - | 0
Physical Adversarial Attack on Vehicle Detector in the Carla Simulator | - | 0
DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs | - | 0
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data | - | 0
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning | - | 0
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing | - | 0
From Sound Representation to Model Robustness | - | 0
Adversarial Privacy-preserving Filter | Code | 0
T-BFA: Targeted Bit-Flip Adversarial Weight Attack | Code | 0
Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0
DDR-ID: Dual Deep Reconstruction Networks Based Image Decomposition for Anomaly Detection | - | 0
Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense | - | 0
Accelerated Stochastic Gradient-free and Projection-free Methods | Code | 0
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack | - | 0
Generating Adversarial Inputs Using A Black-box Differential Technique | - | 0
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs | - | 0
On Data Augmentation and Adversarial Risk: An Empirical Analysis | - | 0
Page 58 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 (1) | Community Verified
2 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
7 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
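
For context on the metrics above: "Attack: PGD20" reports accuracy on inputs perturbed by 20-step projected gradient descent, which iterates a small signed-gradient step and projects the result back into an epsilon-ball around the clean input; AutoAttack is a stronger, parameter-free ensemble of such attacks. A minimal PGD sketch, under the same placeholder assumptions as the FGSM example above:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Multi-step PGD: repeat a small signed-gradient step, projecting
    back into the L-infinity ball of radius epsilon after each step."""
    x0 = x.detach()          # clean input, kept for the projection
    x_adv = x0.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project onto the epsilon ball around x0.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x0 + (x_adv - x0).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)  # keep inputs in a valid range
    return x_adv.detach()
```

Robust accuracy is then the model's ordinary accuracy measured on these perturbed inputs.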