
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
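
As a concrete illustration, the one-step fast gradient sign method (FGSM) perturbs an input in the direction of the loss gradient. Below is a minimal sketch, assuming a PyTorch image classifier; `model`, `x`, and `y` are placeholder names, and `epsilon` is an illustrative L-infinity budget, not a value taken from this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM sketch: nudge x in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The perturbation is epsilon times the sign of the input gradient, so
    # each pixel moves by at most epsilon (imperceptible for small budgets).
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```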

Papers

Showing 1151–1175 of 1808 papers

Title | Status | Hype
Input-specific Attention Subnetworks for Adversarial Detection | | 0
Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning | | 0
Defense Against Explanation Manipulation | | 0
Adversarial Attack against Cross-lingual Knowledge Graph Alignment | | 0
An Actor-Critic Method for Simulation-Based Optimization | | 0
AdvCodeMix: Adversarial Attack on Code-Mixed Data | | 0
Disrupting Deep Uncertainty Estimation Without Harming Accuracy | Code | 0
Generating Watermarked Adversarial Texts | | 0
Covariate Balancing Methods for Randomized Controlled Trials Are Not Adversarially Robust | | 0
Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations | | 0
Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning | | 0
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information | | 0
Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models | Code | 0
Adversarial Attacks on Gaussian Process Bandits | Code | 0
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Meme Stock Prediction | | 0
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools | | 0
Identification of Attack-Specific Signatures in Adversarial Examples | | 0
A Framework for Verification of Wasserstein Adversarial Robustness | | 0
Adversarial Attack across Datasets | | 0
Compressive Sensing Based Adaptive Defence Against Adversarial Images | | 0
EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection | Code | 0
Adversarial Attack by Limited Point Cloud Surface Modifications | | 0
Adversarial Attacks on Spiking Convolutional Neural Networks for Event-based Vision | Code | 0
Reversible Attack based on Local Visual Adversarial Perturbation | | 0
A Uniform Framework for Anomaly Detection in Deep Neural Networks | Code | 0
Page 47 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 | (1) Community Verified
2 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
7 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
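
In these tables, "Attack: PGD20" reports accuracy on adversarial examples crafted with 20 steps of projected gradient descent (PGD), and "Attack: AutoAttack" the analogous number under the AutoAttack ensemble. A minimal sketch of a PGD-20 robust-accuracy evaluation, assuming a PyTorch classifier; `model`, `loader`, and the `epsilon`/`alpha` budgets are illustrative placeholders, not the settings used by the entries above.

```python
import torch
import torch.nn.functional as F

def pgd_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Estimate robust accuracy under a 20-step L-infinity PGD attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            # Gradient-ascent step, then project back into the epsilon ball
            # around the clean input and into the valid pixel range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total
```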